
The Pretence of Knowledge

Friedrich Hayek, in his Nobel Prize lecture of 1974, “The Pretence of Knowledge,” observes that

the great and rapid advance of the physical sciences took place in fields where it proved that explanation and prediction could be based on laws which accounted for the observed phenomena as functions of comparatively few variables.

Hayek’s particular target was the scientism then (and still) rampant in economics. In particular, there was (and is) a quasi-religious belief in the power of central planning (e.g., regulation, “stimulus” spending, control of the money supply) to attain outcomes superior to those that free markets would yield.

But, as Hayek says in closing,

There is danger in the exuberant feeling of ever growing power which the advance of the physical sciences has engendered and which tempts man to try, “dizzy with success” … to subject not only our natural but also our human environment to the control of a human will. The recognition of the insuperable limits to his knowledge ought indeed to teach the student of society a lesson of humility which should guard him against becoming an accomplice in men’s fatal striving to control society – a striving which makes him not only a tyrant over his fellows, but which may well make him the destroyer of a civilization which no brain has designed but which has grown from the free efforts of millions of individuals.

I was reminded of Hayek’s observations by John Cochrane’s post, “Groundhog Day” (The Grumpy Economist, May 11, 2014), wherein Cochrane presents this graph:

[Graph: The Fed’s forecasting models are broken]

Cochrane adds:

Every serious forecast looked like this — Fed, yes, but also CBO, private forecasters, and the term structure of forward rates. Everyone has expected bounce-back growth and rise in interest rates to start next year, for the last 6 years. And every year it has not happened. Welcome to the slump. Every year, Sonny and Cher wake us up, and it’s still cold, and it’s still grey. But we keep expecting spring tomorrow.

Whether the corrosive effects of government microeconomic and regulatory policy, or a failure of those (unprintable adjectives) Republicans to just vote enough wasted-spending Keynesian stimulus, or a failure of the Fed to buy another $3 trillion of bonds, the question of the day really should be why we have this slump — which, let us be honest, no serious forecaster expected.

(I add the “serious forecaster” qualification on purpose. I don’t want to hear randomly mined quotes from bloviating prognosticators who got lucky once, and don’t offer a methodology or a track record for their forecasts.)

The Fed’s forecasting models are nothing more than sophisticated charlatanism — a term that Hayek applied to pseudo-scientific endeavors like macroeconomic modeling. Nor is charlatanism confined to economics and the other social “sciences.” It’s rampant in climate “science,” as Roy Spencer has shown. Consider, for example, this graph from Spencer’s post, “95% of Climate Models Agree: The Observations Must Be Wrong” (Roy Spencer, Ph.D., February 7, 2014):

[Graph: 95% of Climate Models Agree: The Observations Must Be Wrong]

Spencer has a lot more to say about the pseudo-scientific aspects of climate “science.” This example is from “Top Ten Good Skeptical Arguments” (May 1, 2014):

1) No Recent Warming. If global warming science is so “settled”, why did global warming stop over 15 years ago (in most temperature datasets), contrary to all “consensus” predictions?

2) Natural or Manmade? If we don’t know how much of the warming in the longer term (say last 50 years) is natural, then how can we know how much is manmade?

3) IPCC Politics and Beliefs. Why does it take a political body (the IPCC) to tell us what scientists “believe”? And when did scientists’ “beliefs” translate into proof? And when was scientific truth determined by a vote…especially when those allowed to vote are from the Global Warming Believers Party?

4) Climate Models Can’t Even Hindcast. How did climate modelers, who already knew the answer, still fail to explain the lack of a significant temperature rise over the last 30+ years? In other words, how do you botch a hindcast?

5) …But We Should Believe Model Forecasts? Why should we believe model predictions of the future, when they can’t even explain the past?

6) Modelers Lie About Their “Physics”. Why do modelers insist their models are based upon established physics, but then hide the fact that the strong warming their models produce is actually based upon very uncertain “fudge factor” tuning?

7) Is Warming Even Bad? Who decided that a small amount of warming is necessarily a bad thing?

8) Is CO2 Bad? How did carbon dioxide, necessary for life on Earth and only 4 parts in 10,000 of our atmosphere, get rebranded as some sort of dangerous gas?

9) Do We Look that Stupid? How do scientists expect to be taken seriously when their “theory” is supported by both floods AND droughts? Too much snow AND too little snow?

10) Selective Pseudo-Explanations. How can scientists claim that the Medieval Warm Period (which lasted hundreds of years), was just a regional fluke…yet claim the single-summer (2003) heat wave in Europe had global significance?

11) (Spinal Tap bonus) Just How Warm is it, Really? Why is it that every subsequent modification/adjustment to the global thermometer data leads to even more warming? What are the chances of that? Either a warmer-still present, or cooling down the past, both of which produce a greater warming trend over time. And none of the adjustments take out a gradual urban heat island (UHI) warming around thermometer sites, which likely exists at virtually all of them — because no one yet knows a good way to do that.

It is no coincidence that leftists believe in the efficacy of central planning and cling tenaciously to a belief in catastrophic anthropogenic global warming. The latter justifies the former, of course. And both beliefs exemplify the left’s penchant for magical thinking, about which I’ve written several times (e.g., here, here, here, here, and here).

Magical thinking is the pretense of knowledge to the nth degree. It conjures “knowledge” from ignorance and hope. And no one better exemplifies magical thinking than our hopey-changey president.

*     *     *

Related posts:
Modeling Is Not Science
The Left and Its Delusions
Economics: A Survey
AGW: The Death Knell
The Keynesian Multiplier: Phony Math
Modern Liberalism as Wishful Thinking

The Limits of Science (II)

The material of the universe — be it called matter or energy — has three essential properties: essence, emanation, and effect. Essence — what things really “are” — is the most elusive of the properties, and probably unknowable. Emanations are the perceptible aspects of things, such as their detectible motions and electromagnetic properties. Effects are what things “do” to other things, as in the effect that a stream of photons has on paper when the photons are focused through a magnifying glass. (You’ve lived a bland life if you’ve never started a fire that way.)

Science deals in emanations and effects. It seems that these can be described without knowing what matter-energy “really” consists of. But can they?

Take a baseball. Assume, for the sake of argument, that it can’t be opened and separated into its many constituent parts. (See the video at this page for details.) If the baseball is taken as a fundamental particle, its attributes (seemingly) can be described without knowing what’s inside it. Those attributes include the distance it will travel when hit by a bat of a certain weight, given the velocities and angles at which ball and bat meet, the direction and speed of the ball’s rotation at contact, the ambient temperature and relative humidity, and so on.
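
To make the point concrete, here is a minimal sketch in Python of how the ball’s flight can be modeled from externally measurable quantities alone, treating the ball as an indivisible point mass. (The parameter values and the simple drag model are illustrative assumptions, not measurements; nothing about the ball’s innards appears anywhere in the code.)

    import math

    def ball_range(speed, angle_deg, mass=0.145, radius=0.037,
                   drag_coeff=0.35, air_density=1.2, dt=0.001):
        """Horizontal distance (meters) traveled by a ball treated as a
        point mass with quadratic air drag: "emanations and effects" only;
        the ball's internal construction never enters the model."""
        k = 0.5 * air_density * drag_coeff * math.pi * radius**2 / mass
        x, y = 0.0, 1.0                              # launched from ~1 m up
        vx = speed * math.cos(math.radians(angle_deg))
        vy = speed * math.sin(math.radians(angle_deg))
        while y > 0.0:                               # crude Euler integration
            v = math.hypot(vx, vy)
            vx += -k * v * vx * dt                   # drag opposes motion
            vy += (-9.81 - k * v * vy) * dt          # gravity plus drag
            x += vx * dt
            y += vy * dt
        return x

    print(round(ball_range(45.0, 30.0)), "meters")   # ~45 m/s off the bat

Yet, as the next paragraph argues, the “constants” in such a model (the drag coefficient above all) quietly depend on exactly the internal details the model pretends to ignore.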

And yet, the baseball can’t be treated as if it were a fundamental particle. The distance that it will travel, everything else being the same, depends on the material at its core, the size of the core, the tightness of the windings of yarn around the core, the types of yarn used in the windings, the tightness of the cover, the flatness of the stitches that hold the cover in place, and probably several other things.

This suggests to me that the emanations and effects of an object depend on its essence — at least in the everyday world of macroscopic objects. If that’s so, why shouldn’t it be the same for the world of objects called sub-atomic particles?

Which leads to some tough questions: Is it really the case that all of the particles now considered elementary are really indivisible? Are there other elementary particles yet to be discovered or hypothesized, and will some of those be constituents of particles now thought to be elementary? And even if all of the truly elementary particles are discovered, won’t scientists still be in the dark as to what those particles really “are”?

The progress of science should be judged by how much scientists know about the universe and its constituents. By that measure — and despite what may seem to be a rapid pace of discovery — it is fair to say that science has a long way to go — probably forever.

Scientists, who tend to be atheists, like to refer to the God of the gaps, a “theological perspective in which gaps in scientific knowledge are taken to be evidence or proof of God’s existence.” The smug assumption implicit in the use of the phrase by atheists is that science will close the gaps, and that there will be no room left for God.

It seems to me that the shoe is really on the other foot. Atheistic scientists assume that the gaps in their knowledge are relatively small ones, and that science will fill them. How wrong they are.

*     *     *

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
A Theory of Everything, Occam’s Razor, and Baseball
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness

Not-So-Random Thoughts (IX)

Demystifying Science

In a post with that title, I wrote:

“Science” is an unnecessarily daunting concept to the uninitiated, which is to say, almost everyone. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — often are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

Just how unwarranted is the “authority” that is lent by publication in a scientific journal?

Academic scientists readily acknowledge that they often get things wrong. But they also hold fast to the idea that these errors get corrected over time as other scientists try to take the work further. Evidence that many more dodgy results are published than are subsequently corrected or withdrawn calls that much-vaunted capacity for self-correction into question. There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think. . . .

In 2005 John Ioannidis, an epidemiologist from Stanford University, caused a stir with a paper showing why, as a matter of statistical logic, the idea that only one . . . paper in 20 gives a false-positive result was hugely optimistic. Instead, he argued, “most published research findings are probably false.” As he told the quadrennial International Congress on Peer Review and Biomedical Publication, held this September [2013] in Chicago, the problem has not gone away. (The Economist, “Trouble at the Lab,” October 19, 2013)
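
Ioannidis’s statistical logic is easy to reproduce. The sketch below (in Python; the prior probability and statistical power are illustrative assumptions, not his figures) computes the share of “significant” findings that are actually true, and shows why the naive 19-in-20 reliability figure collapses once one accounts for how few tested hypotheses are true to begin with:

    def ppv(prior, power=0.8, alpha=0.05):
        """Share of 'statistically significant' findings that are true,
        given the prior probability that a tested hypothesis is true."""
        true_positives = power * prior
        false_positives = alpha * (1.0 - prior)
        return true_positives / (true_positives + false_positives)

    # If only 1 in 10 tested hypotheses is actually true:
    print(f"{ppv(0.10):.0%} of significant findings are true")   # ~64%
    # With the low statistical power typical of small studies:
    print(f"{ppv(0.10, power=0.20):.0%} are true")               # ~31%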

Tell me again about anthropogenic global warming.

The “Little Ice Age” Redux?

Speaking of AGW, remember the “Little Ice Age” of the 1970s?

George Will does. As do I.

One Sunday morning in January or February of 1977, when I lived in western New York State, I drove to the newsstand to pick up my Sunday Times. I had to drive my business van because my car wouldn’t start. (Odd, I thought.) I arrived at the stand around 8:00 a.m. The temperature sign on the bank across the street then read -16 degrees (Fahrenheit). The proprietor informed me that when he opened his shop at 6:00 a.m. the reading was -36 degrees.

That was the nadir of the coldest winter I can remember. The village reservoir froze in January and stayed frozen until March. (The fire department had to pump water from the Genesee River to the village’s water-treatment plant.) Water mains were freezing solid, even though they were 6 feet below the surface. Many homeowners had to keep their faucets open a trickle to ensure that their pipes didn’t freeze. And, for the reasons cited in Will’s article, many scientists — and many Americans — thought that a “little ice age” had arrived and would be with us for a while.

But science is often inconclusive and just as often slanted to serve a political agenda. (Also, see this.) That’s why I’m not ready to sacrifice economic growth and a good portion of humanity on the altar of global warming and other environmental fads.

Well, the “Little Ice Age” may return, soon:

[A] paper published today in Advances in Space Research predicts that if the current lull in solar activity “endures in the 21st century the Sun shall enter a Dalton-like grand minimum. It was a period of global cooling.” (Anthony Watts, “Study Predicts the Sun Is Headed for a Dalton-like Solar Minimum around 2050,” Watts Up With That?, December 2, 2013)

The Dalton Minimum, named after the English meteorologist John Dalton, lasted from 1790 to 1830.

Bring in your pets and plants, cover your pipes, and dress warmly.

Madison’s Fatal Error

Timothy Gordon writes:

After reading Montesquieu’s most important admonitions in Spirit of the Laws, Madison decided that he could outsmart him. The Montesquieuan admonitions were actually limitations on what a well-functioning republic could allow, and thus, be. And Madison got greedy, not wanting to abide by those limitations.

First, Montesquieu required republican governments to maintain limited geographic scale. Second, Montesquieu required republican governments to preside over a univocal people of one creed and one mind on most matters. A “res publica” is a public thing valued by each citizen, after all. “How could this work when a republic is peopled diversely?” the faithful Montesquieuan asks. (Nowadays in America, for example, half the public values liberty and the other half values equality, its eternal opposite.) Thirdly—and most important—Montesquieu mandated that the three branches of government were to hold three distinct, separate types of power, without overlap.

Before showing just how correct Montesquieu was—and thus, how incorrect Madison was—it must be articulated that in the great ratification contest of 1787-1788, there operated only one faithful band of Montesquieu devotees: the Antifederalists. They publicly pointed out how superficial and misleading were the Federalist appropriations of Montesquieu within the new Constitution and its partisan defenses.

The first two of these Montesquieuan admonitions went together logically: a) limiting a republic’s size to a small confederacy, b) populated by a people of one mind. In his third letter, Antifederalist Cato made the case best:

“whoever seriously considers the immense extent of territory within the limits of the United States, together with the variety of its climates, productions, and number of inhabitants in all; the dissimilitude of interest, morals, and policies, will receive it as an intuitive truth, that a consolidated republican form of government therein, can never form a perfect union.”

Then, to bulwark his claim, Cato goes on to quote two sacred sources of inestimable worth: the Bible… and Montesquieu. Attempting to fit so many creeds and beliefs into such a vast territory, Cato says, would be “like a house divided against itself.” That is, it would not be a res publica, oriented at sameness. Then Cato goes on: “It is natural, says Montesquieu, to a republic to have only a small territory, otherwise it cannot long subsist.”

The teaching Cato references is simple: big countries of diverse peoples cannot be governed locally, qua republics, but rather require a nerve center like Washington D.C. wherefrom all the decisions shall be made. The American Revolution, Cato reminded his contemporaries, was fought over the principle of local rule.

To be fair, Madison honestly—if wrongly—figured that he had dialed up the answer, such that the United States could be both vast and pluralistic, without the consequent troubles forecast by Montesquieu. He viewed the chief danger of this combination to lie in factionalization. One can either “remove the cause [of the problem] or control its effects,” Madison famously prescribed in “Federalist 10.”

The former solution (“remove the cause”) suggests the Montesquieuan way: i.e. remove the plurality of opinion and the vastness of geography. Keep American confederacies small and tightly knit. After all, victory in the War of Independence left the thirteen colonies thirteen small, separate countries, contrary to President Lincoln’s rhetoric four score later. Union, although one possible option, was not logically necessary.

But Madison opted for the latter solution (“control the effects”), viewing union as vitally indispensable and thus, Montesquieu’s teaching as regrettably dispensable: allow size, diversity, and the consequent factionalization. Do so, he suggested, by reducing them to nothing…with hyper-pluralism. Madison deserves credit: for all its oddity, the idea actually seemed to work… for a time. . . . (“James Madison’s Nonsense-Coup Against Montesquieu (and the Classics Too),” The Imaginative Conservative, December 2013)

The rot began with the advent of the Progressive Era in the late 1800s, and it became irreversible with the advent of the New Deal, in the 1930s. As I wrote here, Madison’s

fundamental error can be found in . . . Federalist No. 51. Madison was correct in this:

. . . It is of great importance in a republic not only to guard the society against the oppression of its rulers, but to guard one part of the society against the injustice of the other part. Different interests necessarily exist in different classes of citizens. If a majority be united by a common interest, the rights of the minority will be insecure. . . .

But Madison then made the error of assuming that, under a central government, liberty is guarded by a diversity of interests:

[One method] of providing against this evil [is] . . . by comprehending in the society so many separate descriptions of citizens as will render an unjust combination of a majority of the whole very improbable, if not impracticable. . . . [This] method will be exemplified in the federal republic of the United States. Whilst all authority in it will be derived from and dependent on the society, the society itself will be broken into so many parts, interests, and classes of citizens, that the rights of individuals, or of the minority, will be in little danger from interested combinations of the majority.

In a free government the security for civil rights must be the same as that for religious rights. It consists in the one case in the multiplicity of interests, and in the other in the multiplicity of sects. The degree of security in both cases will depend on the number of interests and sects; and this may be presumed to depend on the extent of country and number of people comprehended under the same government. This view of the subject must particularly recommend a proper federal system to all the sincere and considerate friends of republican government, since it shows that in exact proportion as the territory of the Union may be formed into more circumscribed Confederacies, or States, oppressive combinations of a majority will be facilitated; the best security, under the republican forms, for the rights of every class of citizens, will be diminished; and consequently the stability and independence of some member of the government, the only other security, must be proportionately increased. . . .

In fact, as Montesquieu predicted, diversity — in the contemporary meaning of the word — is inimical to civil society and thus to ordered liberty. Exhibit A is a story by Michael Jonas about a study by Harvard political scientist Robert Putnam, “E Pluribus Unum: Diversity and Community in the Twenty-first Century”:

It has become increasingly popular to speak of racial and ethnic diversity as a civic strength. From multicultural festivals to pronouncements from political leaders, the message is the same: our differences make us stronger.

But a massive new study, based on detailed interviews of nearly 30,000 people across America, has concluded just the opposite. Harvard political scientist Robert Putnam — famous for “Bowling Alone,” his 2000 book on declining civic engagement — has found that the greater the diversity in a community, the fewer people vote and the less they volunteer, the less they give to charity and work on community projects. In the most diverse communities, neighbors trust one another about half as much as they do in the most homogenous settings. The study, the largest ever on civic engagement in America, found that virtually all measures of civic health are lower in more diverse settings. . . .

. . . Putnam’s work adds to a growing body of research indicating that more diverse populations seem to extend themselves less on behalf of collective needs and goals.

His findings on the downsides of diversity have also posed a challenge for Putnam, a liberal academic whose own values put him squarely in the pro-diversity camp. Suddenly finding himself the bearer of bad news, Putnam has struggled with how to present his work. He gathered the initial raw data in 2000 and issued a press release the following year outlining the results. He then spent several years testing other possible explanations.

When he finally published a detailed scholarly analysis in June in the journal Scandinavian Political Studies, he faced criticism for straying from data into advocacy. His paper argues strongly that the negative effects of diversity can be remedied, and says history suggests that ethnic diversity may eventually fade as a sharp line of social demarcation.

“Having aligned himself with the central planners intent on sustaining such social engineering, Putnam concludes the facts with a stern pep talk,” wrote conservative commentator Ilana Mercer, in a recent Orange County Register op-ed titled “Greater diversity equals more misery.”. . .

The results of his new study come from a survey Putnam directed among residents in 41 US communities, including Boston. Residents were sorted into the four principal categories used by the US Census: black, white, Hispanic, and Asian. They were asked how much they trusted their neighbors and those of each racial category, and questioned about a long list of civic attitudes and practices, including their views on local government, their involvement in community projects, and their friendships. What emerged in more diverse communities was a bleak picture of civic desolation, affecting everything from political engagement to the state of social ties. . . .

. . . In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.” “People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes. . . . (“The Downside of Diversity,” The Boston Globe (boston.com), August 5, 2007)

See also my posts, “Liberty and Society,” “The Eclipse of ‘Old America’,” and “Genetic Kinship and Society.” And these: “Caste, Crime, and the Rise of Post-Yankee America” (Theden, November 12, 2013) and “The New Tax Collectors for the Welfare State,” (Handle’s Haus, November 13, 2013).

Libertarian Statism

Finally, I refer you to David Friedman’s “Libertarian Arguments for Income Redistribution” (Ideas, December 6, 2013). Friedman notes that “Matt Zwolinski has recently posted some possible arguments in favor of a guaranteed basic income or something similar.” Friedman then dissects Zwolinski’s arguments.

Been there, done that. See my posts, “Bleeding-Heart Libertarians = Left-Statists” and “Not Guilty of Libertarian Purism,” wherein I tackle the statism of Zwolinski and some of his co-bloggers at Bleeding Heart Libertarians. In the second-linked post, I say that

I was wrong to imply that BHLs [Bleeding Heart Libertarians] are connivers; they (or too many of them) are just arrogant in their judgments about “social justice” and naive when they presume that the state can enact it. It follows that (most) BHLs are not witting left-statists; they are (too often) just unwitting accomplices of left-statism.

Accordingly, if I were to re-title [“Bleeding-Heart Libertarians = Left-Statists”] I would call it “Bleeding-Heart Libertarians: Crypto-Statists or Dupes for Statism?”.

*     *     *

Other posts in this series: I, II, III, IV, V, VI, VII, VIII

Pinker Commits Scientism

Steven Pinker, who seems determined to outdo Bryan Caplan in wrongheadedness, devotes “Science Is Not Your Enemy” (The New Republic, August 6, 2013) to the defense of scientism. Actually, Pinker doesn’t overtly defend scientism, which is indefensible; he just redefines it to mean science:

The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine. Sometimes it is equated with lunatic positions, such as that “science is all that matters” or that “scientists should be entrusted to solve all problems.” Sometimes it is clarified with adjectives like “simplistic,” “naïve,” and “vulgar.” The definitional vacuum allows me to replicate gay activists’ flaunting of “queer” and appropriate the pejorative for a position I am prepared to defend.

Scientism, in this good sense, is not the belief that members of the occupational guild called “science” are particularly wise or noble. On the contrary, the defining practices of science, including open debate, peer review, and double-blind methods, are explicitly designed to circumvent the errors and sins to which scientists, being human, are vulnerable.

After that slippery performance, it’s all smooth sailing — or so Pinker thinks — because all he has to do is point out all the good things about science. And if scientism=science, then scientism is good, right?

Wrong. Scientism remains indefensible, and there’s a lot of scientism in what passes for science. You don’t need to take my word for it; Pinker’s own words tell the tale.

But, first, let’s get clear about the meaning and fallaciousness of scientism. The various writers cited by Pinker describe it well, but Hayek probably offers the most thorough indictment of it; for example:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it…..

The blind transfer of the striving for quantitative measurements to a field in which the specific conditions are not present which give it its basic importance in the natural sciences, is the result of an entirely unfounded prejudice. It is probably responsible for the worst aberrations and absurdities produced by scientism in the social sciences. It not only leads frequently to the selection for study of the most irrelevant aspects of the phenomena because they happen to be measurable, but also to “measurements” and assignments of numerical values which are absolutely meaningless. What a distinguished philosopher recently wrote about psychology is at least equally true of the social sciences, namely that it is only too easy “to rush off to measure something without considering what it is we are measuring, or what measurement means. In this respect some recent measurements are of the same logical type as Plato’s determination that a just ruler is 729 times as happy as an unjust one.”…

Closely connected with the “objectivism” of the scientistic approach is its methodological collectivism, its tendency to treat “wholes” like “society” or the “economy,” “capitalism” (as a given historical “phase”) or a particular “industry” or “class” or “country” as definitely given objects about which we can discover laws by observing their behavior as wholes. While the specific subjectivist approach of the social sciences starts … from our knowledge of the inside of these social complexes, the knowledge of the individual attitudes which form the elements of their structure, the objectivism of the natural sciences tries to view them from the outside; it treats social phenomena not as something of which the human mind is a part and the principles of whose organization we can reconstruct from the familiar parts, but as if they were objects directly perceived by us as wholes….

The belief that human history, which is the result of the interaction of innumerable human minds, must yet be subject to simple laws accessible to human minds is now so widely held that few people are at all aware what an astonishing claim it really implies. Instead of working patiently at the humble task of rebuilding from the directly known elements the complex and unique structures which we find in the world, and of tracing from the changes in the relations between the elements the changes in the wholes, the authors of these pseudo-theories of history pretend to be able to arrive by a kind of mental short cut at a direct insight into the laws of succession of the immediately apprehended wholes. However doubtful their status, these theories of development have achieved a hold on public imagination much greater than any of the results of genuine systematic study. “Philosophies” or “theories” of history (or “historical theories”) have indeed become the characteristic feature, the “darling vice” of the 19th century. From Hegel and Comte, and particularly Marx, down to Sombart and Spengler these spurious theories came to be regarded as representative results of social science; and through the belief that one kind of “system” must as a matter of historical necessity be superseded by a new and different “system,” they have even exercised a profound influence on social evolution. This they achieved mainly because they looked like the kind of laws which the natural sciences produced; and in an age when these sciences set the standard by which all intellectual effort was measured, the claim of these theories of history to be able to predict future developments was regarded as evidence of their pre-eminently scientific character. Though merely one among many characteristic 19th century products of this kind, Marxism more than any of the others has become the vehicle through which this result of scientism has gained so wide an influence that many of the opponents of Marxism equally with its adherents are thinking in its terms. (Friedrich A. Hayek, The Counter Revolution Of Science [Kindle Locations 120-1180], The Free Press.)

After a barrage like that (and this), what’s a defender of scientism to do? Pinker’s tactic is to stop using “scientism” and start using “science.” This makes it seem as if he really isn’t defending scientism, but rather trying to show how science can shed light onto subjects that are usually not in the province of science. In reality, Pinker preaches scientism by calling it science.

For example:

The new sciences of the mind are reexamining the connections between politics and human nature, which were avidly discussed in Madison’s time but submerged during a long interlude in which humans were assumed to be blank slates or rational actors. Humans, we are increasingly appreciating, are moralistic actors, guided by norms and taboos about authority, tribe, and purity, and driven by conflicting inclinations toward revenge and reconciliation.

There is nothing new in this, as Pinker admits by adverting to Madison. Nor was the understanding of human nature “submerged” except in the writings of scientistic social “scientists.” We ordinary mortals were never fooled. Moreover, Pinker’s idea of scientific political science seems to be data-dredging:

With the advent of data science—the analysis of large, open-access data sets of numbers or text—signals can be extracted from the noise and debates in history and political science resolved more objectively.

As explained here, data-dredging is about as scientistic as it gets:

When enough hypotheses are tested, it is virtually certain that some falsely appear statistically significant, since every data set with any degree of randomness contains some spurious correlations. Researchers using data mining techniques if they are not careful can be easily misled by these apparently significant results, even though they are mere artifacts of random variation.
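
The phenomenon is easy to demonstrate. The following simulation (a sketch, assuming numpy and scipy are available; the test count and sample size are arbitrary) correlates pairs of pure noise and counts how many clear the conventional significance bar:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_tests, n_obs = 1000, 50
    false_positives = 0
    for _ in range(n_tests):
        x = rng.normal(size=n_obs)        # pure noise
        y = rng.normal(size=n_obs)        # unrelated pure noise
        _, p_value = stats.pearsonr(x, y)
        if p_value < 0.05:
            false_positives += 1

    print(false_positives, "of", n_tests, "noise-on-noise tests look significant")
    # Expect roughly 50: 5% of tests on sheer randomness pass the 0.05 bar,
    # so dredging enough variables guarantees publishable "signals."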

Turning to the humanities, Pinker writes:

[T]here can be no replacement for the varieties of close reading, thick description, and deep immersion that erudite scholars can apply to individual works. But must these be the only paths to understanding? A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate [sic] and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win. The humanities would enjoy more of the explanatory depth of the sciences, to say nothing of the kind of a progressive agenda that appeals to deans and donors. The sciences could challenge their theories with the natural experiments and ecologically valid phenomena that have been so richly characterized by humanists.

What on earth is Pinker talking about? This is over-the-top bafflegab worthy of Professor Irwin Corey. But because it comes from the keyboard of a noted (self-promoting) academic, we are meant to take it seriously.

Yes, art, culture, and society are products of human brains. So what? Poker is, too, and it’s a lot more amenable to explication by the mathematical tools of science. But the successful application of those tools depends on traits that are more art than science (bluffing, spotting “tells,” avoiding “tells,” for example).

More “explanatory depth” in the humanities means a deeper pile of B.S. Great art, literature, and music aren’t concocted formulaically. If they could be, modernism and postmodernism wouldn’t have yielded mountains of trash.

Oh, I know: It will be different next time. As if the tools of science are immune to misuse by obscurantists, relativists, and practitioners of political correctness. Tell it to those climatologists who dare to challenge the conventional wisdom about anthropogenic global warming. Tell it to the “sub-human” victims of the Third Reich’s medical experiments and gas chambers.

Pinker anticipates this kind of objection:

At a 2011 conference, [a] colleague summed up what she thought was the mixed legacy of science: the eradication of smallpox on the one hand; the Tuskegee syphilis study on the other. (In that study, another bloody shirt in the standard narrative about the evils of science, public-health researchers beginning in 1932 tracked the progression of untreated, latent syphilis in a sample of impoverished African Americans.) The comparison is obtuse. It assumes that the study was the unavoidable dark side of scientific progress as opposed to a universally deplored breach, and it compares a one-time failure to prevent harm to a few dozen people with the prevention of hundreds of millions of deaths per century, in perpetuity.

But the Tuskegee study was only a one-time failure in the sense that it was the only Tuskegee study. As a type of failure — the misuse of science (witting and unwitting) — it goes hand-in-hand with the advance of scientific knowledge. Should science be abandoned because of that? Of course not. But the hard fact is that science, qua science, is powerless against human nature, which defies scientific control.

Pinker plods on by describing ways in which science can contribute to the visual arts, music, and literary scholarship:

The visual arts could avail themselves of the explosion of knowledge in vision science, including the perception of color, shape, texture, and lighting, and the evolutionary aesthetics of faces and landscapes. Music scholars have much to discuss with the scientists who study the perception of speech and the brain’s analysis of the auditory world.

As for literary scholarship, where to begin? John Dryden wrote that a work of fiction is “a just and lively image of human nature, representing its passions and humours, and the changes of fortune to which it is subject, for the delight and instruction of mankind.” Linguistics can illuminate the resources of grammar and discourse that allow authors to manipulate a reader’s imaginary experience. Cognitive psychology can provide insight about readers’ ability to reconcile their own consciousness with those of the author and characters. Behavioral genetics can update folk theories of parental influence with discoveries about the effects of genes, peers, and chance, which have profound implications for the interpretation of biography and memoir—an endeavor that also has much to learn from the cognitive psychology of memory and the social psychology of self-presentation. Evolutionary psychologists can distinguish the obsessions that are universal from those that are exaggerated by a particular culture and can lay out the inherent conflicts and confluences of interest within families, couples, friendships, and rivalries that are the drivers of plot.

I wonder how Rembrandt and the Impressionists (among other pre-moderns) managed to create visual art of such evident excellence without relying on the kinds of scientific mechanisms invoked by Pinker. I wonder what music scholars would learn about excellence in composition that isn’t already evident in the general loathing of audiences for most “serious” modern and contemporary music.

As for literature, great writers know instinctively and through self-criticism how to tell stories that realistically depict character, social psychology, culture, conflict, and all the rest. Scholars (and critics), at best, can acknowledge what rings true and has dramatic or comedic merit. Scientistic pretensions in scholarship (and criticism) may result in promotions and raises for the pretentious, but they do not add to the sum of human enjoyment — which is the real aim of literature.

Pinker inveighs against critics of scientism (science, in Pinker’s vocabulary) who cry “reductionism” and “simplification.” With respect to the former, Pinker writes:

Demonizers of scientism often confuse intelligibility with a sin called reductionism. But to explain a complex happening in terms of deeper principles is not to discard its richness. No sane thinker would try to explain World War I in the language of physics, chemistry, and biology as opposed to the more perspicuous language of the perceptions and goals of leaders in 1914 Europe. At the same time, a curious person can legitimately ask why human minds are apt to have such perceptions and goals, including the tribalism, overconfidence, and sense of honor that fell into a deadly combination at that historical moment.

It is reductionist to explain a complex happening in terms of a deeper principle when that principle fails to account for the complex happening. Pinker obscures that essential point by offering a silly and irrelevant example about World War I. This bit of misdirection is unsurprising, given Pinker’s foray into reductionism, The Better Angels of Our Nature: Why Violence Has Declined, which I examine here.

As for simplification, Pinker says:

The complaint about simplification is misbegotten. To explain something is to subsume it under more general principles, which always entails a degree of simplification. Yet to simplify is not to be simplistic.

Pinker again dodges the issue. Simplification is simplistic when the “general principles” fail to account adequately for the phenomenon in question.

If Pinker is right about anything, it is when he says that “the intrusion of science into the territories of the humanities has been deeply resented.” The resentment, though some of it may be wrongly motivated, is fully justified.

Related reading (added 08/10/13 and 09/06/13):
Bill Vallicella, “Steven Pinker on Scientism, Part One,” Maverick Philosopher, August 10, 2013
Leon Wieseltier, “Crimes Against Humanities,” The New Republic, September 3, 2013 (gated)

Related posts about Pinker:
Nonsense about Presidents, IQ, and War
The Fallacy of Human Progress

Related posts about modernism:
Speaking of Modern Art
Making Sense about Classical Music
An Addendum about Classical Music
My Views on Classical Music, Vindicated
But It’s Not Music
A Quick Note about Music
Modernism in the Arts and Politics
Taste and Art
Modernism and the Arts

Related posts about science:
Science’s Anti-Scientific Bent
Modeling Is Not Science
Physics Envy
We, the Children of the Enlightenment
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
Scientism, Evolution, and the Meaning of Life
The Candle Problem: Balderdash Masquerading as Science
Mysteries: Sacred and Profane
The Glory of the Human Mind

Further Thoughts about Metaphysical Cosmology

I have stated my metaphysical cosmology:

1. There is necessarily a creator of the universe, which comprises all that exists in “nature.”

2. The creator is not part of nature; that is, he stands apart from his creation and is neither of its substance nor governed by its laws. (I use “he” as a term of convenience, not to suggest that the creator is some kind of human or animate being, as we know such beings.)

3. The creator designed the universe, if not in detail then in its parameters. The parameters are what we know as matter-energy (substance) and its various forms, motions, and combinations (the laws that govern the behavior of matter-energy).

4. The parameters determine everything that is possible in the universe. But they do not necessarily dictate precisely the unfolding of events in the universe. Randomness and free will are evidently part of the creator’s design.

5. The human mind and its ability to “do science” — to comprehend the laws of nature through observation and calculation — are artifacts of the creator’s design.

6. Two things probably cannot be known through science: the creator’s involvement in the unfolding of natural events; the essential character of the substance on which the laws of nature operate.

It follows that science can neither prove nor disprove the preceding statements. If that is so, why can I not say, with equal certainty, that the universe is made of pea soup and supported by undetectable green giants?

There are two answers to that question. The first answer is that my cosmology is based on logical necessity; there is nothing of logic or necessity in the claims about pea soup and undetectable green giants. The second and related answer is that claims about pea soup and green giants — and their ilk — are obviously outlandish. There is an essential difference between (a) positing a creator and making limited but reasonable claims about his role and (b) engaging in obviously outlandish speculation.

What about various mythologies (e.g., Norse and Greek) and creation legends, which nowadays seem outlandish even to persons who believe in a creator? Professional atheists (e.g., Richard Dawkins, Daniel Dennett, Christopher Hitchens, and Lawrence Krauss) point to the crudeness of those mythologies and legends as a reason to reject the idea of a creator who set the universe and its laws in motion. (See, for example, “Russell’s Teapot,” discussed here.) But logic is not on the side of the professional atheists. The crudeness of a myth or legend, when viewed through the lens of contemporary knowledge, cannot be taken as evidence against creation. The crudeness of a myth or legend merely reflects the crudeness of the state of knowledge when the myth or legend arose.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Free Will: A Proof by Example?
A Theory of Everything, Occam’s Razor, and Baseball
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
What Is Time?
Science’s Anti-Scientific Bent
The Tenth Dimension
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Not-So-Random Thoughts (II) (first item)
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology

My Metaphysical Cosmology

This post is a work in progress. It draws on and extends the posts listed at the bottom.

1. There is necessarily a creator of the universe, which comprises all that exists in “nature.”

2. The creator is not part of nature; that is, he stands apart from his creation and is neither of its substance nor governed by its laws. (I use “he” as a term of convenience, not to suggest that the creator is some kind of human or animate being, as we know such beings.)

3. The creator designed the universe, if not in detail then in its parameters. The parameters are what we know as matter-energy (substance) and its various forms, motions, and combinations (the laws that govern the behavior of matter-energy).

4. The parameters determine everything that is possible in the universe. But they do not necessarily dictate precisely the unfolding of events in the universe. Randomness and free will are evidently part of the creator’s design.

5. The human mind and its ability to “do science” — to comprehend the laws of nature through observation and calculation — are artifacts of the creator’s design.

6. Two things probably cannot be known through science: the creator’s involvement in the unfolding of natural events; the essential character of the substance on which the laws of nature operate.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Free Will: A Proof by Example?
A Theory of Everything, Occam’s Razor, and Baseball
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Science, Logic, and God
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
What Is Time?
Science’s Anti-Scientific Bent
The Tenth Dimension
The Big Bang and Atheism
Einstein, Science, and God
Atheism, Religion, and Science Redux
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Not-So-Random Thoughts (II) (first item)
Mysteries: Sacred and Profane
Something from Nothing?
Something or Nothing

Not-So-Random Thoughts (III)

Apropos Science

In the vein of “Something from Nothing?” there is this:

[Stephen] Meyer also argued [in a recent talk at the University Club in D.C.] that biological evolutionary theory, which “attempts to explain how new forms of life evolved from simpler pre-existing forms,” faces formidable difficulties. In particular, the modern version of Darwin’s theory, neo-Darwinism, also has an information problem.

Mutations, or copying errors in the DNA, are analogous to copying errors in digital code, and they supposedly provide the grist for natural selection. But, Meyer said: “What we know from all codes and languages is that when specificity of sequence is a condition of function, random changes degrade function much faster than they come up with something new.”…

The problem is comparable to opening a big combination lock. He asked the audience to imagine a bike lock with ten dials and ten digits per dial. Such a lock would have 10 billion possibilities with only one that works. But the protein alphabet has 20 possibilities at each site, and the average protein has about 300 amino acids in sequence….

Remember: Not just any old jumble of amino acids makes a protein. Chimps typing at keyboards will have to type for a very long time before they get an error-free, meaningful sentence of 150 characters. “We have a small needle in a huge haystack.” Neo-Darwinism has not solved this problem, Meyer said. “There’s a mathematical rigor to this which has not been a part of the so-called evolution-creation debate.”…

“[L]eading U.S. biologists, including evolutionary biologists, are saying we need a new theory of evolution,” Meyer said. Many increasingly criticize Darwinism, even if they don’t accept design. One is the cell biologist James Shapiro of the University of Chicago. His new book is Evolution: A View From the 21st Century. He’s “looking for a new evolutionary theory.” David Depew (Iowa) and Bruce Weber (Cal State) recently wrote in Biological Theory that Darwinism “can no longer serve as a general framework for evolutionary theory.” Such criticisms have mounted in the technical literature. (Tom Bethell, “Intelligent Design at the University Club,” American Spectator, May 2012)

And this:

[I]t is startling to realize that the entire brief for demoting human beings, and organisms in general, to meaningless scraps of molecular machinery — a demotion that fuels the long-running science-religion wars and that, as “shocking” revelation, supposedly stands on a par with Copernicus’s heliocentric proposal — rests on the vague conjunction of two scarcely creditable concepts: the randomness of mutations and the fitness of organisms. And, strangely, this shocking revelation has been sold to us in the context of a descriptive biological literature that, from the molecular level on up, remains almost nothing but a documentation of the meaningfully organized, goal-directed stories of living creatures.

Here, then, is what the advocates of evolutionary mindlessness and meaninglessness would have us overlook. We must overlook, first of all, the fact that organisms are masterful participants in, and revisers of, their own genomes, taking a leading position in the most intricate, subtle, and intentional genomic “dance” one could possibly imagine. And then we must overlook the way the organism responds intelligently, and in accord with its own purposes, to whatever it encounters in its environment, including the environment of its own body, and including what we may prefer to view as “accidents.” Then, too, we are asked to ignore not only the living, reproducing creatures whose intensely directed lives provide the only basis we have ever known for the dynamic processes of evolution, but also all the meaning of the larger environment in which these creatures participate — an environment compounded of all the infinitely complex ecological interactions that play out in significant balances, imbalances, competition, cooperation, symbioses, and all the rest, yielding the marvelously varied and interwoven living communities we find in savannah and rainforest, desert and meadow, stream and ocean, mountain and valley. And then, finally, we must be sure to pay no heed to the fact that the fitness, against which we have assumed our notion of randomness could be defined, is one of the most obscure, ill-formed concepts in all of science.

Overlooking all this, we are supposed to see — somewhere — blind, mindless, random, purposeless automatisms at the ultimate explanatory root of all genetic variation leading to evolutionary change. (Stephen L. Talbott, “Evolution and the Illusion of Randomness,” The New Atlantis, Fall 2011)

My point is not to suggest that the writers are correct in their conjectures. Rather, the force of their conjectures shows that supposedly “settled” science is (a) always far from settled (on big questions, at least) and (b) necessarily incomplete because it can never reach ultimate truths.
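
As an aside, the combinatorial arithmetic in Meyer’s lock analogy is easy to check. This short calculation (the 20-letter alphabet and 300-site protein length are from the quoted passage; everything else is plain arithmetic) shows the scale of the two search spaces:

    import math

    lock_combos = 10 ** 10                  # ten dials, ten digits per dial
    protein_log10 = 300 * math.log10(20)    # 20 amino acids, 300 sites
    print(f"bike lock: {lock_combos:,} combinations")             # 10 billion
    print(f"protein:   roughly 10^{protein_log10:.0f} sequences") # ~10^390
    # 20**300 overflows a float, hence the logarithm: the protein space
    # exceeds the lock's by some 380 orders of magnitude.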

Trayvon, George, and Barack

Recent revelations about the case of Trayvon Martin and George Zimmerman suggest the following:

  • Martin was acting suspiciously and smelled of marijuana.
  • Zimmerman was rightly concerned about Martin’s behavior, given the history of break-ins in Zimmerman’s neighborhood.
  • Martin attacked Zimmerman, had him on the ground, was punching his face, and had broken his nose.
  • Zimmerman shot Martin in self-defense.

Whether the encounter was “ultimately avoidable,” as a police report asserts, is beside the point.  Zimmerman acted in self-defense, and the case against him should be dismissed. The special prosecutor should be admonished by the court for having succumbed to media and mob pressure in bringing a charge of second-degree murder against Zimmerman.

What we have here is the same old story: Black “victim”–>media frenzy to blame whites (or a “white Hispanic”), without benefit of all relevant facts–>facts exonerate whites. To paraphrase Shakespeare: The first thing we should do after the revolution is kill all the pundits (along with the lawyers).

Obama famously said, “If I had a son, he would look like Trayvon.” Given the thuggish similarity between Trayvon and Obama (small sample here), it is more accurate to say that if Obama had a son, he would be like Trayvon.

Creepy People

Exhibit A is Richard Thaler, a self-proclaimed libertarian who is nothing of the kind. Thaler defends the individual mandate that is at the heart of Obamacare (by implication, at least) when he attacks the “slippery slope” argument against it. Ammon Simon nails Thaler:

Richard Thaler’s NYT piece from a few days ago, Slippery-Slope Logic, Applied to Health Care, takes conservatives to task for relying on a “slippery slope” fallacy to argue that Obamacare’s individual mandate should be invalidated. Thaler believes that the hypothetical broccoli mandate — used by opponents of Obamacare to show that upholding the mandate would require the Court to acknowledge congressional authority to do all sorts of other things — would never be adopted by Congress or upheld by a federal court. This simplistic view of the Obamacare litigation obscures legitimate concerns over the amount of power that the Obama administration is claiming for the federal government. It also ignores the way creative judges can use previous cases as building blocks to justify outcomes that were perhaps unimaginable when those building blocks were initially formed….

[N]ot all slippery-slope claims are fallacious. The Supreme Court’s decisions are often informed by precedent, and, as every law student learned when studying the Court’s privacy cases, a decision today could be used by a judge ten years from now to justify outcomes no one had in mind.

In 1965, the Supreme Court in Griswold v. Connecticut, referencing penumbras and emanations, recognized a right to privacy in marriage that mandated striking down an anti-contraception law.

Seven years later, in Eisenstadt v. Baird, this right expanded to individual privacy, because after all, a marriage is made of individuals, and “[i]f the right of privacy means anything, it is the right of the individual . . . to be free from unwarranted governmental intrusion into matters so fundamentally affecting a person as the decision whether to bear or beget a child.”

By 1973 in Roe v. Wade, this precedent, which had started out as a right recognized in marriage, had mutated into a right to abortion that no one could really trace to any specific textual provision in the Constitution. Slippery slope anyone?

This also happened in Lawrence v. Texas in 2003, where the Supreme Court struck down an anti-sodomy law. The Court explained that the case did not involve gay marriage, and Justice O’Connor’s concurrence went further, distinguishing gay marriage from the case at hand. Despite those pronouncements, later decisions enshrining gay marriage as a constitutionally protected right have relied upon Lawrence. For instance, Goodridge v. Department of Public Health (Mass. 2003) cited Lawrence 9 times, Varnum v. Brien (Iowa 2009) cited Lawrence 4 times, and Perry v. Brown (N.D. Cal, 2010) cited Lawrence 9 times.

However the Court ultimately rules, there is no question that this case will serve as a major inflection point in our nation’s debate about the size and scope of the federal government. I hope it serves to clarify the limits on congressional power, and not as another stepping stone on the path away from limited, constitutional government. (“The Supreme Court’s Slippery Slope,” National Review Online, May 17, 2012)

Simon could have mentioned Wickard v. Filburn (1942), in which the Supreme Court brought purely private, intrastate activity within the reach of Congress’s power to regulate interstate commerce. The downward slope from Wickard v. Filburn to today’s intrusive regulatory regime has been not merely slippery but precipitous.

Then there is Brian Leiter, some of whose statist musings I have addressed in the past. It seems that Leiter has taken to defending the idiotic Elizabeth Warren for her convenient adoption of a Native American identity. Todd Zywicki tears Leiter a new one:

I was out of town most of last week and I wasn’t planning on blogging any more on the increasingly bizarre saga of Elizabeth Warren’s claim to Native American ancestry, which as of the current moment appears to be entirely unsubstantiated.  But I was surprised to see Brian Leiter’s post doubling-down in his defense of Warren–and calling me a “Stalinist” to boot (although I confess it is not clear why or how he is using that term).  So I hope you will indulge me while I respond.

First, let me say again what I expressed at the outset–I have known from highly-credible sources for a decade that in the past Warren identified herself as a Native American in order to put herself in a position to benefit from hiring preferences (I am certain that Brian knows this now too).  She was quite outspoken about it at times in the past and, as her current defenses have suggested, she believed that she was entitled to claim it.  So there would have been no reason for her to not identify as such and in fact she was apparently quite unapologetic about it at the time….

Second, Brian seems to believe for some reason that the issue here is whether Warren actually benefited from a hiring preference.  Of course it is not (as my post makes eminently clear).  The issue I raised is whether Warren made assertions as part of the law school hiring process in order to put herself in a position to benefit from a hiring preference for which she had no foundation….

Third, regardless of why she did it, Warren herself actually had no verifiable basis for her self-identification as Native American.  At the very least her initial claim was grossly reckless and with no objective foundation–it appears that she herself has never had any foundation for the claim beyond “family lore” and her “high cheekbones.”… Now it turns out that the New England Historical Genealogical Society, which had been the source for the widely-reported claim that she might be 1/32 Cherokee, has rescinded its earlier conclusion and now says “We have no proof that Elizabeth Warren’s great great great grandmother O.C. Sarah Smith either is or is not of Cherokee descent.”  The story adds, “Their announcement came in the wake of an official report from an Oklahoma county clerk that said a document purporting to prove Warren’s Cherokee roots — her great great great grandmother’s marriage license application — does not exist.”  A Cherokee genealogist has similarly stated that she can find no evidence to support Warren’s claim.  At this point her claim appears to be entirely unsupported as an objective matter and it appears that she herself had no basis for it originally.

Fourth, Brian’s post also states the obvious–that there is plenty of bad blood between Elizabeth and myself.  But, of course, the only reason that this issue is interesting and relevant today is because Warren is running for the U.S. Senate and is the most prominent law professor in America at this moment.

So, I guess I’ll conclude by asking the obvious question: if a very prominent conservative law professor (say, for example, John Yoo) had misrepresented himself throughout his professorial career in the manner that Elizabeth Warren has would Brian still consider it to be “the non-issue du jour“?  Really?

I’m not sure what a “Stalinist” is.  But I would think that ignoring a prominent person’s misdeeds just because you like her politics, and attacking the messenger instead, just might fit the bill. (“New England Genealogical Historical Society Rescinds Conclusion that Elizabeth Warren Might Be Cherokee,” The Volokh Conspiracy, May 17, 2012)

For another insight into Leiter’s character, read this and weep not for him.

Tea Party Sell-Outs

Business as usual in Washington:

This week the Club for Growth released a study of votes cast in 2011 by the 87 Republicans elected to the House in November 2010. The Club found that “In many cases, the rhetoric of the so-called “Tea Party” freshmen simply didn’t match their records.” Particularly disconcerting is the fact that so many GOP newcomers cast votes against spending cuts.

The study comes on the heels of three telling votes taken last week in the House that should have been slam-dunks for members who possess the slightest regard for limited government and free markets. Alas, only 26 of the 87 members of the “Tea Party class” voted to defund both the Economic Development Administration and the president’s new Advanced Manufacturing Technology Consortia program (see my previous discussion of these votes here) and against reauthorizing the Export-Import Bank (see my colleague Sallie James’s excoriation of that vote here).

I assembled the following table, which shows how each of the 87 freshmen voted. The 26 who voted for liberty in all three cases are highlighted. Only 49 percent voted to defund the EDA. Only 56 percent voted to defund a new corporate welfare program requested by the Obama administration. And only a dismal 44 percent voted against reauthorizing “Boeing’s bank.” That’s pathetic. (Tad DeHaven, “Freshman Republicans Switch from Tea to Kool-Aid,” Cato@Liberty, May 17, 2012)

Lesson: Never trust a politician who seeks a position of power, unless that person earns trust by divesting the position of power.

PCness

Just a few of the recent outbreaks of PCness that enraged me:

“Michigan Mayor Calls Pro-Lifers ‘Forces of Darkness’” (reported by LifeNews.com on May 11, 2012)

“US Class Suspended for Its View on Islam” (reported by CourierMail.com.au, May 11, 2012)

“House Democrats Politicize Trayvon Martin” (posted at Powerline, May 8, 2012)

“Chronicle of Higher Education Fires Blogger for Questioning Seriousness of Black Studies Depts.” (posted at Reason.com/hit & run, May 8, 2012)

Technocracy, Externalities, and Statism

From a review of Robert Frank’s The Darwin Economy:

In many ways, economics is the discipline best suited to the technocratic mindset. This has nothing to do with its traditional subject matter. It is not about debating how to produce goods and services or how to distribute them. Instead, it relates to how economics has emerged as an approach that distances itself from democratic politics and provides little room for human agency.

Anyone who has done a high-school course in economics is likely to have learned the basics of its technocratic approach from the start. Students have long been taught that economics is a ‘positive science’ – one based on facts rather than values. Politicians are entitled to their preferences, so the argument went, but economists are supposed to give them impartial advice based on an objective examination of the facts.

More recently this approach has been taken even further. The supposedly objective role of the technocrat-economist has become supreme, while the role of politics has been sidelined….

The starting point of The Darwin Economy is what economists call the collective action problem: the divergence between individual and collective interests. A simple example is fishermen fishing in a lake. For each individual, it might be rational to catch as many fish as possible, but if all fishermen follow the same path the lake will eventually be empty. It is therefore deemed necessary to find ways to negotiate this tension between individual and group interests.

Those who have followed the discussion of behavioural economics will recognise that this is an alternative way of viewing humans as irrational. Behavioural economists focus on individuals behaving in supposedly irrational ways. For example, they argue that people often do not invest enough to secure themselves a reasonable pension. For Frank, in contrast, individuals may behave rationally but the net result of group behaviour can still be irrational….

…From Frank’s premises, any activity considered harmful by experts could be deemed illegitimate and subjected to punitive measures….

…[I]t is … wrong to assume that there is no more scope for economic growth to be beneficial. Even in the West, there is a long way to go before scarcity is limited. This is not just a question of individuals having as many consumer goods as they desire – although that has a role. It also means having the resources to provide as many airports, art galleries, hospitals, power stations, roads, schools, universities and other facilities as are needed. There is still ample scope for absolute improvements in living standards…. (Daniel Ben-ami, “Delving into the Mind of the Technocrat,” The Spiked Review of Books, February 2012)

There is much to disagree with in the review, but the quoted material is right on. It leads me to quote myself:

…[L]ife is full of externalities — positive and negative. They often emanate from the same event, and cannot be separated. State action that attempts to undo negative externalities usually results in the negation or curtailment of positive ones. In terms of the preceding example, state action often is aimed at forcing the attractive woman to be less attractive, thus depriving quietly appreciative men of a positive externality, rather than penalizing the crude man if his actions cross the line from mere rudeness to assault.

The main argument against externalities is that they somehow result in something other than a “social optimum.” This argument is pure, economistic hokum. It rests on the unsupportable belief in a social-welfare function, which requires the balancing (by an omniscient being, I suppose) of the happiness and unhappiness that results from every action that affects another person, either directly or indirectly….

A believer in externalities might respond by saying that they are of “economic” importance only as they are imposed on bystanders as a spillover from economic transactions, as in the case of emissions from a power plant that can cause lung damage in susceptible persons. Such a reply is of a kind that only an omniscient being could make with impunity. What privileges an economistic thinker to say that the line of demarcation between relevant and irrelevant acts should be drawn in a certain place? The authors of campus speech codes evidently prefer to draw the line in such a way as to penalize the behavior of the crude man in the above example. Who is the economistic thinker to say that the authors of campus speech codes have it wrong? And who is the legalistic thinker to say that speech should be regulated by deferring to the “feelings” that it arouses in persons who may hear or read it?

Despite the intricacies that I have sketched, negative externalities are singled out for attention and rectification, to the detriment of social and economic intercourse. Remove the negative externalities of electric-power generation and you make more costly (and even inaccessible) a (perhaps the) key factor in America’s economic growth in the past century. Try to limit the supposed negative externality of human activity known as “greenhouse gases” and you limit the ability of humans to cope with that externality (if it exists) through invention, innovation, and entrepreneurship. Limit the supposed negative externality of “offensive” speech and you quickly limit the range of ideas that may be expressed in political discourse. Limit the supposed externalities of suburban sprawl and you, in effect, sentence people to suffer the crime, filth, crowding, contentiousness, heat-island effects, and other externalities of urban living.

The real problem is not externalities but economistic and legalistic reactions to them….

The main result of rationalistic thinking — because it yields vote-worthy slogans and empty promises to fix this and that “problem” — is the aggrandizement of the state, to the detriment of civil society.

The fundamental error of rationalists is to believe that “problems” call for collective action, and to identify collective action with state action. They lack the insight and imagination to understand that the social beings whose voluntary, cooperative efforts are responsible for mankind’s vast material progress are perfectly capable of adapting to and solving “problems,” and that the intrusions of the state simply complicate matters, when not making them worse. True collective action is found in voluntary social and economic intercourse, the complex, information-rich content of which rationalists cannot fathom. They are as useless as a blind man who is shouting directions to an Indy 500 driver….

Theodore Dalrymple

If you do not know of Theodore Dalrymple, you should. His book, In Praise of Prejudice: The Necessity of Preconceived Ideas, inspired  “On Liberty,” the first post at this blog. Without further ado, I commend these recent items by and about Dalrymple:

“Rotting from the Head Down” (an article by Dalrymple about the social collapse of Britain, City Journal, March 8, 2012)

“Symposium: Why Do Progressives Love Criminals?” (Dalrymple and others, FrontPageMag.com, March 9, 2012)

“Doctors Should Not Vote for Industrial Action” (“industrial action” is British for a strike; a post by Dalrymple, The Social Affairs Unit, March 22, 2012)

The third item ends with this:

The fact is that there has never been, is never, and never will be any industrial action over the manifold failures of the public service to provide what it is supposed to provide. Whoever heard of teachers going on strike because a fifth of our children emerge from 11 years of compulsory education unable to read fluently, despite large increases in expenditure on education?

If the doctors vote for industrial action, they will enter a downward spiral of public mistrust of their motives. They should think twice before doing so.

Amen.

The Higher-Education Bubble

The title of a post at The Right Coast tells the tale: “Under 25 College Educated More Unemployed than Non-college Educated for First Time.” As I wrote here,

When I entered college [in 1958], I was among the 28 percent of high-school graduates then attending college. It was evident to me that about half of my college classmates didn’t belong in an institution of higher learning. Despite that, the college-enrollment rate among high-school graduates has since doubled.

(Also see this.)

American taxpayers should be up in arms over the subsidization of an industry that wastes their money on the useless education of masses of ineducable persons. Then there is the fact that taxpayers are forced to subsidize the enemies of liberty who populate university faculties.

The news about unemployment among college grads may hasten the bursting of the higher-ed bubble. It cannot happen too soon.

Something from Nothing?

I do not know if Lawrence Krauss typifies scientists in his logical obtuseness, but he certainly exemplifies the breed of so-called scientists who proclaim atheism as a scientific necessity.  According to a review by David Albert of Krauss’s recent book, A Universe from Nothing,

the laws of quantum mechanics have in them the makings of a thoroughly scientific and adamantly secular explanation of why there is something rather than nothing.

Albert’s review, which I have quoted extensively elsewhere, comports with Edward Feser’s analysis:

The bulk of the book is devoted to exploring how the energy present in otherwise empty space, together with the laws of physics, might have given rise to the universe as it exists today. This is at first treated as if it were highly relevant to the question of how the universe might have come from nothing—until Krauss acknowledges toward the end of the book that energy, space, and the laws of physics don’t really count as “nothing” after all. Then it is proposed that the laws of physics alone might do the trick—though these too, as he implicitly allows, don’t really count as “nothing” either.

Bill Vallicella puts it this way:

[N]o one can have any objection to a replacement of the old Leibniz question — Why is there something rather than nothing? … — with a physically tractable question, a question of interest to cosmologists and one amenable to a  physics solution. Unfortunately, in the paragraph above, Krauss provides two different replacement questions while stating, absurdly, that the second is a more succinct version of the first:

K1. How can a physical universe arise from an initial condition in which there are no particles, no space and perhaps no time?

K2. Why is there ‘stuff’ instead of empty space?

These are obviously distinct questions.  To answer the first, one would have to provide an account of how the universe originated from nothing physical: no particles, no space, and “perhaps” no time.  The second question would be easier to answer because it presupposes the existence of space and does not demand that empty space be itself explained.

Clearly, the questions are distinct.  But Krauss conflates them. Indeed, he waffles between them, reverting to something like the first question after raising the second.  To ask why there is something physical as opposed to nothing physical is quite different from asking why there is physical “stuff” as opposed to empty space.

Several years ago, I explained the futility of attempting to decide the fundamental question of creation and its cause on scientific grounds:

Consider these three categories of knowledge (which long pre-date their use by Secretary of Defense Donald Rumsfeld): known knowns, known unknowns, and unknown unknowns. Here’s how that trichotomy might be applied to a specific aspect of scientific knowledge, namely, Earth’s rotation about the Sun:

1. Known knowns — Earth rotates about the Sun, in accordance with Einstein’s theory of general relativity.

2. Known unknowns — Earth, Sun, and the space between them comprise myriad quantum phenomena (e.g., matter and its interactions in, on, and above the Earth and Sun; the transmission of light from Sun to Earth). We don’t know whether and how quantum phenomena influence Earth’s rotation about the Sun; that is, whether Einsteinian gravity is a partial explanation of a more complete theory of gravity that has been dubbed quantum gravity.

3. Unknown unknowns — Other things might influence Earth’s rotation about the Sun, but we don’t know what those other things are, if there are any.

For the sake of argument, suppose that scientists were as certain about the origin of the universe in the Big Bang as they are about the fact of Earth’s rotation about the Sun. Then, I would write:

1. Known knowns — The universe was created in the Big Bang, and the universe — in the large — has since been “unfolding” in accordance with Einsteinian relativity.

2. Known unknowns — The Big Bang can be thought of as a meta-quantum event, but we don’t know if that event was a manifestation of quantum gravity. (Nor do we know how quantum gravity might be implicated in the subsequent unfolding of the universe.)

3. Unknown unknowns — Other things might have caused the Big Bang, but we don’t know if there were such things or what those other things were — or are.

Thus — to a scientist qua scientist — God and Creation are unknown unknowns because, as unfalsifiable hypotheses, they lie outside the scope of scientific inquiry. Any scientist who pronounces, one way or the other, on the existence of God and the reality of Creation has — for the moment, at least — ceased to be a scientist.

Which is not to say that the question of creation is immune to logical analysis; thus:

To say that the world as we know it is the product of chance — and that it may exist only because it is one of vastly many different (but unobservable) worlds resulting from chance — is merely to state a theoretical possibility. Further, it is a possibility that is beyond empirical proof or disproof; it is on a par with science fiction, not with science.

If the world as we know it — our universe — is not the product of chance, what is it? A reasonable answer is found in another post of mine, “Existence and Creation.” Here is the succinct version:

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

Another blogger once said this about the final sentence of that quotation, which I lifted from another post of mine:

I would have to disagree with the last sentence. The problem is epistemology — how do we know what we know? Atheists, especially ‘scientistic’ atheists, take the position that the modern scientific methodology of observation, measurement, and extrapolation from observation and measurement, is sufficient to detect anything that Really Exists — and that the burden of proof is on those who propose that something Really Exists that cannot be reliably observed and measured; which is of course impossible within that mental framework. They have plenty of logic and science on their side, and their ‘evidence’ is the commonly-accepted maxim that it is impossible to prove a negative.

I agree that the problem of drawing conclusions about creation from science (as opposed to logic) is epistemological. The truth and nature of creation are an “unknown unknown” or, more accurately, an “unknowable unknown.” With regard to such questions, scientists do not have logic and science on their side when they assert that the existence of the universe is possible without a creator, as a matter of science (as Krauss does, for example). Moreover, it is scientists who are trying to prove a negative: that there is neither a creator nor the logical necessity of one.

“Something from nothing” is possible, but only if there is a creator who is not part of the “something” that is the proper subject of scientific exploration and explanation.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Three Perspectives on Life: A Parable
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
The Legality of Teaching Intelligent Design
Science, Logic, and God
Capitalism, Liberty, and Christianity
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
The Big Bang and Atheism
The Universe . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
Pascal’s Wager, Morality, and the State
Evolution as God?
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps

Scientism, Evolution, and the Meaning of Life

Scientism is “the uncritical application of scientific or quasi-scientific methods to inappropriate fields of study or investigation.” When scientists proclaim truths outside the realm of their expertise, they are guilty of practicing scientism. Two notable scientistic scientists, of whom I have written several times (e.g., here and here), are Richard Dawkins and Peter Singer. It is unsurprising that Dawkins and Singer are practitioners of scientism. Both are strident atheists, and strident atheists, as I have said, “merely practice a ‘religion’ of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.”

Dawkins, Singer, and many other scientistic atheists share an especially “religious” view of evolution. In brief, they seem to believe that evolution rules out God. Evolution rules out nothing. Evolution may be true in outline, but it does not bear close inspection. On that point, I turn to the late David Stove, a noted Australian philosopher and atheist. This is from his essay, “So You Think You Are a Darwinian?”:

Of course most educated people now are Darwinians, in the sense that they believe our species to have originated, not in a creative act of the Divine Will, but by evolution from other animals. But believing that proposition is not enough to make someone a Darwinian. It had been believed, as may be learnt from any history of biology, by very many people long before Darwinism, or Darwin, was born.

What is needed to make someone an adherent of a certain school of thought is belief in all or most of the propositions which are peculiar to that school, and are believed either by all of its adherents, or at least by the more thoroughgoing ones. In any large school of thought, there is always a minority who adhere more exclusively than most to the characteristic beliefs of the school: they are the ‘purists’ or ‘ultras’ of that school. What is needed and sufficient, then, to make a person a Darwinian, is belief in all or most of the propositions which are peculiar to Darwinians, and believed either by all of them, or at least by ultra-Darwinians.

I give below ten propositions which are all Darwinian beliefs in the sense just specified. Each of them is obviously false: either a direct falsity about our species or, where the proposition is a general one, obviously false in the case of our species, at least. Some of the ten propositions are quotations; all the others are paraphrases. The quotations are all from authors who are so well-known, at least in Darwinian circles, as spokesmen for Darwinism or ultra-Darwinism, that their names alone will be sufficient evidence that the proposition is a Darwinian one. Where the proposition is a paraphrase, I give quotations or other information which will, I think, suffice to establish its Darwinian credentials.

My ten propositions are nearly in reverse historical order. Thus, I start from the present day, and from the inferno-scene – like something by Hieronymus Bosch – which the ‘selfish gene’ theory makes of all life. Then I go back a bit to some of the falsities which, beginning in the 1960s, were contributed to Darwinism by the theory of ‘inclusive fitness’. And finally I get back to some of the falsities, more pedestrian though no less obvious, of the Darwinism of the 19th or early-20th century.

1. The truth is, ‘the total prostitution of all animal life, including Man and all his airs and graces, to the blind purposiveness of these minute virus-like substances’, genes.

This is a thumbnail-sketch, and an accurate one, of the contents of The Selfish Gene (1976) by Richard Dawkins….

2. ‘…it is, after all, to [a mother’s] advantage that her child should be adopted’ by another woman….

This quotation is from Dawkins’ The Selfish Gene, p. 110.

Obviously false though this proposition is, from the point of view of Darwinism it is well-founded….

3. All communication is ‘manipulation of signal-receiver by signal-sender.’

This profound communication, though it might easily have come from any used-car salesman reflecting on life, was actually sent by Dawkins (in The Extended Phenotype (1982), p. 57) to the readers whom he was at that point engaged in manipulating….

9. The more privileged people are the more prolific: if one class in a society is less exposed than another to the misery due to food-shortage, disease, and war, then the members of the more fortunate class will have (on the average) more children than the members of the other class.

That this proposition is false, or rather, is the exact reverse of the truth, is not just obvious. It is notorious, and even proverbial….

10. If variations which are useful to their possessors in the struggle for life ‘do occur, can we doubt (remembering that many more individuals are born than can possibly survive), that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating their kind? On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed.’

This is from The Origin of Species, pp. 80-81. Exactly the same words occur in all the editions….

Since this passage expresses the essential idea of natural selection, no further evidence is needed to show that proposition 10 is a Darwinian one. But is it true? In particular, may we really feel sure that every attribute in the least degree injurious to its possessors would be rigidly destroyed by natural selection?

On the contrary, the proposition is (saving Darwin’s reverence) ridiculous. Any educated person can easily think of a hundred characteristics, commonly occurring in our species, which are not only ‘in the least degree’ injurious to their possessors, but seriously or even extremely injurious to them, which have not been ‘rigidly destroyed’, and concerning which there is not the smallest evidence that they are in the process of being destroyed. Here are ten such characteristics, without even going past the first letter of the alphabet. Abortion; adoption; fondness for alcohol; altruism; anal intercourse; respect for ancestors; susceptibility to aneurism; the love of animals; the importance attached to art; asceticism, whether sexual, dietary, or whatever.

Each of these characteristics tends, more or less strongly, to shorten our lives, or to lessen the number of children we have, or both. All of them are of extreme antiquity. Some of them are probably older than our species itself. Adoption, for example is practised by some species of chimpanzees: another adult female taking over the care of a baby whose mother has died. Why has not this ancient and gross ‘biological error’ been rigidly destroyed?…

The cream of the jest, concerning proposition 10, is that Darwinians themselves do not really believe it. Ask a Darwinian whether he actually believes that the fondness for alcoholic drinks is being destroyed now, or that abortion is, or adoption – and watch his face. Well, of course he does not believe it! Why would he? There is not a particle of evidence in its favour, and there is a great mountain of evidence against it. Absolutely the only thing it has in its favour is that Darwinism says it must be so. But (as Descartes said in another connection) ‘this reasoning cannot be presented to infidels, who might consider that it proceeded in a circle’.

What becomes, then, of the terrifying giant named Natural Selection, which can never sleep, can never fail to detect an attribute which is, even in the least degree, injurious to its possessors in the struggle for life, and can never fail to punish such an attribute with rigid destruction? Why, just that, like so much else in Darwinism, it is an obvious fairytale, at least as far as our species is concerned.

A science cannot be wrong in so many important ways and yet be taken seriously as a God-substitute.

Frederick Turner has this to say in “Darwin and Design: The Evolution of a Flawed Debate”:

Does the theory of evolution make God unnecessary to the very existence of the world?…

The polemical evolutionists are right about the truth of evolution. But the rightness of their cause has been deeply compromised by their own version of the creationists’ sin. The evolutionists’ sin, as I see it, is even greater, because it is three sins rolled into one….

The third sin is … dishonesty. In many cases it is clear that the beautiful and hard-won theory of evolution, now proved beyond reasonable doubt, is being cynically used by some — who do not much care about it as such — to support an ulterior purpose: a program of atheist indoctrination, and an assault on the moral and spiritual goals of religion. A truth used for unworthy purposes is quite as bad as a lie used for ends believed to be worthy. If religion can be undermined in the hearts and minds of the people, then the only authority left will be the state, and, not coincidentally, the state’s well-paid academic, legal, therapeutic and caring professions. If creationists cannot be trusted to give a fair hearing to evidence and logic because of their prior commitment to religious doctrine, some evolutionary partisans cannot be trusted because they would use a general social acceptance of the truth of evolution as a way to set in place a system of helpless moral license in the population and an intellectual elite to take care of them.

And that is my issue, not only with the likes of Dawkins and Singer but also with any so-called scientist who believes that evolution — or, more broadly, scientific knowledge — somehow justifies atheism.

Science is only about the knowable, and much of life’s meaning lies where science cannot reach. Maverick Philosopher puts it this way in “Why Science Will Never Put Religion Out of Business”:

We suffer from a lack of existential meaning, a meaning that we cannot supply from our own resources since any subjective acts of meaning-positing are themselves (objectively) meaningless….

…[T]he salvation religion promises is not to be understood in some crass physical sense the way the typical superficial and benighted atheist-materialist would take it but as salvation from meaninglessness, anomie, spiritual desolation, Unheimlichkeit, existential insecurity, Angst, ignorance and delusion, false value-prioritizations, moral corruption irremediable by any human effort, failure to live up to ideals, the vanity and transience of our lives, meaningless sufferings and cravings and attachments, the ultimate pointlessness of all efforts at moral and intellectual improvement in the face of death . . . .

…[I]t is self-evident that there are no technological solutions to moral evil, moral ignorance, and the apparent absurdity of life.  Is a longer life a morally better life?  Can mere longevity confer meaning? The notion that present or future science can solve the problems that religion addresses is utterly chimerical.

Related posts:
Atheism, Religion, and Science
The Limits of Science
Three Perspectives on Life: A Parable
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
The Legality of Teaching Intelligent Design
Science, Logic, and God
Capitalism, Liberty, and Christianity
Is “Nothing” Possible?
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
The Big Bang and Atheism
The Universe . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
Pascal’s Wager, Morality, and the State
Evolution as God?
The Greatest Mystery
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science

Analysis for Government Decision-Making: Demi-Science, Hemi-Demi-Science, and Sophistry

Taking a “hard science” like classical mechanics as an epitome of science and, say, mechanical engineering as a rigorous application of it, one travels a goodly conceptual distance before arriving at operations research (OR). Philip M. Morse and George E. Kimball, pioneers of OR in World War II, put it this way:

[S]uccessful application of operations research usually results in improvements by factors of 3 or 10 or more…. In our first study of any operation we are looking for these large factors of possible improvement…. They can be discovered if the [variables] are given only one significant figure,…any greater accuracy simply adds unessential detail.

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. (Philip M. Morse and George E. Kimball, Methods of Operations Research, originally published as Operations Evaluation Group Report 54, 1946, p. 38)

This is science-speak for the following proposition: Where there is much variability in the particular circumstances of combat, there is much uncertainty about the contributions of various factors (human, mechanical, and meteorological) to the outcome of combat. It is therefore difficult to assign precise numerical values to the various factors.

OR, even in wartime, is therefore, and at best, a demi-science. From there, we descend to cost-effectiveness analysis and its constituent branches: techniques for designing and estimating the costs of systems that do not yet exist and the effectiveness of such systems in combat. These methods, taken separately and together, are (to coin a term) hemi-demi-scientific — a fact that the application of “rigorous” mathematical and statistical techniques cannot alter.

There is no need to elaborate on the wild inaccuracy of estimates about the costs and physical performance of government-owned and operated systems, whether they are intended for military or civilian use. The gross errors of estimation have been amply documented in the public press for decades.

What is less well known is the difficulty of predicting the performance of systems — especially combat systems — years before they are commanded, operated, and maintained by human beings, under conditions that are likely to be far different than those envisioned when the systems were first proposed. A paper that I wrote thirty years ago gives my view of the great uncertainty that surrounds estimates of the effectiveness of systems that have yet to be developed, or built, or used in combat:

Aside from a natural urge for certainty, faith in quantitative models of warfare springs from the experience of World War II, when they seemed to lead to more effective tactics and equipment. But the foundation of this success was not the quantitative methods themselves. Rather, it was the fact that the methods were applied in wartime. Morse and Kimball put it well:

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [Methods of Operations Research, p. 10]

Contrast this attitude with the attempts of analysts … to evaluate weapons, forces, and strategies with abstract models of combat. However elegant and internally consistent the models, they have remained as untested and untestable as the postulates of theology.

There is, of course, no valid test to apply to a warfare model. In peacetime, there is no enemy; in wartime, the enemy’s actions cannot be controlled. Morse and Kimball, accordingly, urge “hemibel thinking”:

Having obtained the constants of the operations under study… we compare the value of the constants obtained in actual operations with the optimum theoretical value, if this can be computed. If the actual value is within a hemibel (…a factor of 3) of the theoretical value, then it is extremely unlikely that any improvement in the details of the operation will result in significant improvement. [When] there is a wide gap between the actual and theoretical results … a hint as to the possible means of improvement can usually be obtained by a crude sorting of the operational data to see whether changes in personnel, equipment, or tactics produce a significant change in the constants. [Ibid., p. 38]

….

Much as we would like to fold the many different parameters of a weapon, a force, or a strategy into a single number, we can not. An analyst’s notion of which variables matter and how they interact is no substitute for data. Such data as exist, of course, represent observations of discrete events — usually peacetime events. It remains for the analyst to calibrate the observations, but without a benchmark to go by. Calibration by past battles is a method of reconstruction — of cutting one of several coats to fit a single form — but not a method of validation. Lacking pertinent data, an analyst is likely to resort to models of great complexity. Thus, if useful estimates of detection probabilities are unavailable, the detection process is modeled; if estimates of the outcomes of dogfights are unavailable, aerial combat is reduced to minutiae. Spurious accuracy replaces obvious inaccuracy; untestable hypotheses and unchecked calibrations multiply apace. Yet the analyst claims relative if not absolute accuracy, certifying that he has identified, measured, and properly linked, a priori, the parameters that differentiate weapons, forces, and strategies….

Should we really attach little significance to differences of less than a hemibel? Consider a five-parameter model, involving the conditional probabilities of detecting, shooting at, hitting, and killing an opponent — and surviving, in the first place, to do any of these things. Such a model might easily yield a cumulative error of a hemibel, given a twenty-five percent error in each parameter. My intuition is that one would be lucky if relative errors in the probabilities assigned to alternative weapons and forces were as low as twenty-five percent.
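The arithmetic is easy to check. Here is a minimal sketch in Python (my illustration, not anything from Morse and Kimball or from the paper quoted above) of the hemibel and of how five twenty-five-percent errors compound to roughly that factor:

    # A minimal sketch of the hemibel arithmetic described above.
    # Illustration only; nothing here is from Morse and Kimball.
    import math

    # A bel is a factor of 10; a hemibel is half a bel on the
    # logarithmic scale, i.e., a factor of 10**0.5, or about 3.16.
    hemibel = math.sqrt(10)
    print(f"one hemibel = a factor of {hemibel:.2f}")

    # Five conditional probabilities (detect, shoot, hit, kill, survive),
    # each misestimated by 25 percent in the same direction, compound
    # multiplicatively:
    cumulative_error = 1.25 ** 5
    print(f"compounded error = a factor of {cumulative_error:.2f}")  # about 3.05

    # The compounded error is itself roughly a hemibel, which is why
    # differences of less than a hemibel deserve little weight.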

The further that one travels from an empirical question, such as the likely effectiveness of an extant weapon system under specific, quantifiable conditions, the more likely one is to encounter the kind of sophistry known as policy analysis. It is the kind of analysis that one encounters, more often than not, in the context of broad policy issues (e.g., government policy toward health care, energy, or defense spending). Such analysis is constructed so that it favors the prejudices of the analyst or his client, or supports the client’s political case for a certain policy.

Policy analysis often seems credible, especially on first hearing or reading it. But, on inspection, it is usually found to have at least two of these characteristics:

  • It stipulates or quickly arrives at a preferred policy, then marshals facts, calculations, and opinions that are selected because they support the preferred policy.
  • If it offers and assesses alternative policies, they are not placed on an equal footing with the preferred policy. They are, for example, assessed against criteria that favor the preferred policy, while other criteria (which might be important ones) are ignored or given short shrift.
  • It is wrapped in breathless prose, dripping with words and phrases like “aggressive action,” “grave consequences,” and “sense of urgency.”

No discipline or quantitative method is rigorous enough to redeem policy analysis, but two disciplines are especially suited to it: political “science” and macroeconomics. Both are couched in the language of real science, but both lend themselves perfectly to the old adage: garbage in, garbage out.

Do I mean to suggest that broad policy issues should not be addressed as analytically as possible? Not at all. What I mean to suggest is that because such issues cannot be illuminated with scientific rigor, they are especially fertile ground for sophists with preconceived positions.

In that respect, the model of cost-effectiveness analysis, with all of its limitations, is to be emulated. Put simply, it is to state a clear objective in a way that does not drive the answer; reveal the assumptions underlying the analysis; state the relevant variables (factors influencing the attainment of the objective); disclose fully the data, the sources of data, and analytic methods; and explore openly and candidly the effects of variations in key assumptions and critical variables.
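By way of illustration, the last step of that template (exploring the effects of variations in key assumptions) might look like the following Python sketch. The measure, the model, and the numbers are hypothetical, invented only to show the mechanics of a one-at-a-time sensitivity sweep:

    # A hypothetical cost-effectiveness model; the measure, the numbers,
    # and the variables are invented for the sake of the example.

    def cost_per_kill(unit_cost, p_kill, sorties_per_unit):
        """Cost divided by expected kills; lower is better."""
        return unit_cost / (p_kill * sorties_per_unit)

    baseline = {"unit_cost": 50e6, "p_kill": 0.4, "sorties_per_unit": 100}
    print(f"baseline: ${cost_per_kill(**baseline):,.0f} per kill")

    # Vary each assumption by plus and minus 25 percent, one at a time,
    # and report the resulting range openly rather than burying it.
    for key in baseline:
        for factor in (0.75, 1.25):
            excursion = dict(baseline, **{key: baseline[key] * factor})
            print(f"{key} x {factor}: ${cost_per_kill(**excursion):,.0f} per kill")

An analysis presented this way does not drive the answer; it shows the reader how much the answer depends on what the analyst has assumed.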

Demystifying Science

“Science” is an unnecessarily daunting concept to the uninitiated, which is to say, almost everyone. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — often are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

WHAT IS SCIENCE?

Science is knowledge, but not all knowledge is science. A scientific body of knowledge is systematic; that is, the granular facts or phenomena which comprise the body of knowledge are connected in patterned ways. Moreover, the facts or phenomena represent reality; they are not mere concepts, which may be tools of science but are not science. Beyond that, science — unless it is a purely descriptive body of knowledge — is predictive about the characteristics of as-yet unobserved phenomena. These may be things that exist but have not yet been measured (in terms of the applicable science), or things that are yet to be (as in the effects of a new drug on a disease).

Above all, science is not a matter of “consensus” — AGW zealots to the contrary notwithstanding. Science is a matter of rigorously testing theories against facts, and doing it openly. Imagine the state of physics today if Galileo had been unable to question Aristotle’s theory of gravitation, if Newton had been unable to extend and generalize Galileo’s work, and if Einstein had deferred to Newton. The effort to “deny” a prevailing or popular theory is as old as science. There have been “deniers” in the thousands, each of them responsible for advancing some aspect of knowledge. Not all “deniers” have been as prominent as Einstein (consider Dan Shechtman, for example), but each is potentially as important as Einstein.

It is hard for scientists to rise above their human impulses. Einstein, for example, so much wanted quantum physics to be deterministic rather than probabilistic that he said “God does not play dice with the universe.” To which Niels Bohr replied, “Einstein, stop telling God what to do.” But the human urge to be “right” or to be on the “right side” of an issue does not excuse anti-scientific behavior, such as that of so-called scientists who have become invested in AGW.

There are many so-called scientists who subscribe to AGW without having done relevant research. Why? Because AGW is the “in” thing, and they do not wish to be left out. This is the stuff of which “scientific consensus” is made. If you would not buy a make of automobile just because it is endorsed by a celebrity who knows nothing about automotive engineering, why would you “buy” AGW just because it is endorsed by a herd of so-called scientists who have never done research that bears directly on it?

There are two lessons to take from this. The first is that no theory is ever proven. (A theory may, if it is well and openly tested, be a useful guide to action in certain rigorous disciplines, such as engineering and medicine.) Any theory — to be a truly scientific one — must be capable of being tested, even by (and especially by) others who are skeptical of the theory. Those others must be able to verify the facts upon which the theory is predicated, and to replicate the tests and calculations that seem to validate the theory. So-called scientists who restrict access to their data and methods are properly thought of as cultists with a political agenda, not scientists. Their theories are not to be believed — and certainly are not to be taken as guides to action.

The second lesson is that scientists are human and fallible. It is in the best tradition of science to distrust their claims and to dismiss their non-scientific utterances.

THE ROLE OF MATHEMATICS AND STATISTICS IN SCIENCE

Mathematics and statistics are not sciences, despite their vast and organized complexity. They offer ways of thinking about and expressing knowledge, but they are not knowledge. They are languages that enable scientists to converse with each other and outsiders who are fluent in the same languages.

Expressing a theory in mathematical terms may lend the theory a scientific aura. But a theory couched in mathematics (or its verbal equivalent) is not a scientific one unless (a) it can be tested against observable facts by rigorous statistical methods, (b) it is found, consistently, to accord with those facts, and (c) the introduction of new facts does not require adjustment or outright rejection of the theory. If the introduction of new facts requires the adjustment of a theory, then it is a new theory, which must be tested against new facts, and so on.

This “inconvenient fact” — that an adjusted theory is a new theory — is ignored routinely, especially in the application of regression analysis to a data set for the purpose of quantifying relationships among variables. If a “model” thus derived does a poor job when applied to data outside the original set, it is not an uncommon practice to combine the original and new data and derive a new “model” based on the combined set. This practice (sometimes called data-mining) does not yield scientific theories with predictive power; it yields information (of dubious value) about the data employed in the regression analysis. As a critic of regression models once put it: Regression is a way of predicting the past with great certainty.
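The point is easy to demonstrate. In the following Python sketch (a contrived illustration, with invented data), a high-degree polynomial “model” fits the original data almost perfectly and then fails badly on data from outside that set:

    # A contrived illustration of "predicting the past with great
    # certainty": an overfitted model excels on the data used to derive
    # it and fails on data outside that set.
    import numpy as np

    rng = np.random.default_rng(0)
    x_old = np.linspace(0.0, 1.0, 10)
    y_old = 2.0 * x_old + rng.normal(0.0, 0.1, 10)  # the truth is a simple line

    # A "model" derived by data-mining: an 8th-degree polynomial that
    # passes nearly through every original point.
    coeffs = np.polyfit(x_old, y_old, 8)

    x_new = np.linspace(1.0, 2.0, 10)               # data outside the original set
    y_new = 2.0 * x_new + rng.normal(0.0, 0.1, 10)

    err_old = np.abs(np.polyval(coeffs, x_old) - y_old).mean()
    err_new = np.abs(np.polyval(coeffs, x_new) - y_new).mean()

    print(f"mean error on the original data: {err_old:.3f}")  # tiny
    print(f"mean error on the new data:      {err_new:.3f}")  # very large

    # Combining the old and new data and re-fitting yields a new "model",
    # i.e., a new theory, which must in turn be tested against data that
    # played no part in deriving it.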

A science may be descriptive rather than mathematical. In a descriptive science (e.g., plant taxonomy), particular phenomena sometimes are described numerically (e.g., the number of leaves on the stem of a species), but the relations among various phenomena are not reducible to mathematics. Nevertheless, a predominantly descriptive discipline will be scientific if the phenomena within its compass are connected in patterned ways.

NON-SCIENCE, SCIENCE, AND PSEUDO-SCIENCE

Non-scientific disciplines can be useful, whereas some purportedly scientific disciplines verge on charlatanism. Thus, for example:

  • History, by my reckoning, is not a science. But a knowledge of history is valuable, nevertheless, for the insights it offers into the influence of human nature on the outcomes of economic and political processes. I call the lessons of history “insights,” not scientific relationships, because history is influenced by so many factors that it does not allow for the rigorous testing of hypotheses.
  • Physics is a science in most of its sub-disciplines, but there are some (e.g., cosmology and certain interpretations of quantum mechanics) where it descends into the realm of speculation. Informed, fascinating speculation to be sure, but speculation all the same. It avoids being pseudo-scientific only because it might give rise to testable hypotheses.
  • Economics is a science only to the extent that it yields valid, statistical insights about specific microeconomic issues (e.g., the effects of laws and regulations on the prices and outputs of goods and services). The postulates of macroeconomics, except to the extent that they are truisms, have no demonstrable validity. (See, for example, my treatment of the Keynesian multiplier.) Macroeconomics is a pseudo-science.

CONCLUSION

There is no such thing as “science,” writ large; that is, no one may appeal, legitimately, to “science” in the abstract. A particular discipline may be a science, but it is a science only to the extent that it comprises a factual body of knowledge and testable theories. Further, its data and methods must be open to verification and testing. And only a particular theory — one that has been put to the proper tests — can be called a scientific one.

For the reasons adduced in this post, scientists who claim to “know” that there is no God are not practicing science when they make that claim. They are practicing the religion that is known as atheism. The existence or non-existence of God is beyond testing, at least by any means yet known to man.

Related posts:
About Economic Forecasting
Is Economics a Science?
Economics as Science
Hemibel Thinking
Climatology
Physics Envy
Global Warming: Realities and Benefits
Words of Caution for the Cautious
Scientists in a Snit
Another Blow to Climatology?
A Telling Truth
Proof That “Smart” Economists Can Be Stupid
Bad News for Politically Correct Science
Another Blow to Chicken-Little Science
Same Old Story, Same Old Song and Dance
Atheism, Religion, and Science
The Limits of Science
Three Perspectives on Life: A Parable
Beware of Irrational Atheism
The Hockey Stick Is Broken
Talk about Brainwaves!
The Creation Model
The Thing about Science
Science in Politics, Politics in Science
Global Warming and Life
Evolution and Religion
Speaking of Religion…
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
Global Warming and the Liberal Agenda
Science, Logic, and God
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
This Is Objectivism?
Objectivism: Tautologies in Search of Reality
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
Global Warming in Perspective
Mathematical Economics
Economics: The Dismal (Non) Science
The Big Bang and Atheism
More Bad News for Global Warming Zealots
The Universe . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
Warming, Anyone?
“Warmism”: The Myth of Anthropogenic Global Warming
Re: Climate “Science”
More Evidence against Anthropogenic Global Warming
Yet More Evidence against Anthropogenic Global Warming
A Non-Believer Defends Religion
Evolution as God?
Modeling Is Not Science
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
Beware the Rare Event
Landsburg Is Half-Right
Physics Envy
The Unreality of Objectivism
What Is Truth?
Evolution, Human Nature, and “Natural Rights”
More Thoughts about Evolutionary Teleology
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
We, the Children of the Enlightenment
Probability, Existence, and Creation
The Atheism of the Gaps
Probability, Existence, and Creation: A Footnote

What Is Truth?

There are four kinds of truth: physical, logical-mathematical, psychological-emotional, and judgmental. The first two are closely related, as are the last two. After considering each of the two closely related pairs, I will link all four kinds of truth.

PHYSICAL AND LOGICAL-MATHEMATICAL TRUTH

Physical truth is, seemingly, the most straightforward of the lot. Physical truth seems to consist of that which humans are able to apprehend with their senses, aided sometimes by instruments. And yet, widely accepted notions of physical truth have changed drastically over the eons, not only because of improvements in the instruments of observation but also because of changes in the interpretation of data obtained with the aid of those instruments.

The latter point brings me to logical-mathematical truth. It is logic and mathematics that translates specific physical truths — or what are taken to be truths — into constructs (theories) such as quantum mechanics, general relativity, the Big Bang, and evolution. Of the relationship between specific physical truth and logical-mathematical truth, G.K. Chesterton said:

Logic and truth, as a matter of fact, have very little to do with each other. Logic is concerned merely with the fidelity and accuracy with which a certain process is performed, a process which can be performed with any materials, with any assumption. You can be as logical about griffins and basilisks as about sheep and pigs. On the assumption that a man has two ears, it is good logic that three men have six ears, but on the assumption that a man has four ears, it is equally good logic that three men have twelve. And the power of seeing how many ears the average man, as a fact, possesses, the power of counting a gentleman’s ears accurately and without mathematical confusion, is not a logical thing but a primary and direct experience, like a physical sense, like a religious vision. The power of counting ears may be limited by a blow on the head; it may be disturbed and even augmented by two bottles of champagne; but it cannot be affected by argument. Logic has again and again been expended, and expended most brilliantly and effectively, on things that do not exist at all. There is far more logic, more sustained consistency of the mind, in the science of heraldry than in the science of biology. There is more logic in Alice in Wonderland than in the Statute Book or the Blue Books. The relations of logic to truth depend, then, not upon its perfection as logic, but upon certain pre-logical faculties and certain pre-logical discoveries, upon the possession of those faculties, upon the power of making those discoveries. If a man starts with certain assumptions, he may be a good logician and a good citizen, a wise man, a successful figure. If he starts with certain other assumptions, he may be an equally good logician and a bankrupt, a criminal, a raving lunatic. Logic, then, is not necessarily an instrument for finding truth; on the contrary, truth is necessarily an instrument for using logic—for using it, that is, for the discovery of further truth and for the profit of humanity. Briefly, you can only find truth with logic if you have already found truth without it. [Thanks to The Fourth Checkraise for making me aware of Chesterton’s aperçu.]

To put it another way, logical-mathematical truth is only as valid as the axioms (principles) from which it is derived. Given an axiom, or a set of them, one can deduce “true” statements (assuming that one’s logical-mathematical processes are sound). But axioms are not pre-existing truths with independent existence (like Platonic ideals). They are products, in one way or another, of observation and reckoning. The truth of statements derived from axioms depends, first and foremost, on the truth of the axioms, which is the thrust of Chesterton’s aperçu.

It is usual to divide reasoning into two types of logical process:

  • Induction is “The process of deriving general principles from particular facts or instances.” That is how scientific theories are developed, in principle. A scientist begins with observations and devises a theory from them. Or a scientist may begin with an existing theory, note that new observations do not comport with the theory, and devise a new theory to fit all the observations, old and new.
  • Deduction is “The process of reasoning in which a conclusion follows necessarily from the stated premises; inference by reasoning from the general to the specific.” That is how scientific theories are tested, in principle. A theory (a “stated premise”) should lead to certain conclusions (“observations”). If it does not, the theory is falsified. If it does, the theory lives for another day.

But the stated premises (axioms) of a scientific theory (or exercise in logic or mathematical operation) do not arise out of nothing. In one way or another, directly or indirectly, they are the result of observation and reckoning (induction). Get the observation and reckoning wrong, and what follows is wrong; get them right and what follows is right. Chesterton, again.

PSYCHOLOGICAL-EMOTIONAL AND JUDGMENTAL TRUTH

A psychological-emotional truth is one that depends on more than physical observations. A judgmental truth is one that arises from a psychological-emotional truth and results in a consequential judgment about its subject.

A common psychological-emotional truth, one that finds its way into judgmental truth, is an individual’s conception of beauty.  The emotional aspect of beauty is evident in the tendency, especially among young persons, to consider their lovers and spouses beautiful, even as persons outside the intimate relationship would find their judgments risible.

A more serious psychological-emotional truth — or one that has public-policy implications — has to do with race. There are persons who simply have negative views about races other than their own, for reasons that are irrelevant here. What is relevant is the close link between the psychological-emotional views about persons of other races — that they are untrustworthy, stupid, lazy, violent, etc. — and judgments that adversely affect those persons. Those judgments range from refusal to hire a person of a different race (still quite common, if well disguised to avoid legal problems) to unjust convictions and executions that result from prejudices held by victims, witnesses, police officers, prosecutors, judges, and jurors. (My examples point to anti-black prejudices on the part of whites, but there are plenty of others to go around: anti-white, anti-Latino, anti-Asian, etc. Nor do I mean to impugn prudential judgments that implicate race, as in the avoidance by whites of certain parts of a city.)

A close parallel is found in the linkage between the psychological-emotional truth that underlies a jury’s verdict and the legal truth of a judge’s sentence. There is an even tighter linkage between psychological-emotional truth and legal truth in the deliberations and rulings of higher courts, which operate without juries.

PUTTING TRUTH AND TRUTH TOGETHER

Psychological-emotional proclivities, and the judgmental truths that arise from them, impinge on physical and mathematical-logical truth. Because humans are limited (by time, ability, and inclination), they often accept as axiomatic statements about the world that are tenuous, if not downright false. Scientists, mathematicians, and logicians are not exempt from the tendency to credit dubious statements. And that tendency can arise not just from expediency and ignorance but also from psychological-emotional proclivities.

Albert Einstein, for example, refused to believe that very small particles of matter-energy (quanta) behave probabilistically, as described by the branch of physics known as quantum mechanics. Put simply, sub-atomic particles do not seem to behave according to the same physical laws that describe the actions of the visible universe; their behavior is discontinuous (“jumpy”) and described probabilistically, not by the kinds of continuous (“smooth”) mathematical formulae that apply to the macroscopic world.

Einstein refused to believe that different parts of the same universe could operate according to different physical laws. Thus he saw quantum mechanics as incomplete and in need of reconciliation with the rest of physics. At one point in his long-running debate with the defenders of quantum mechanics, Einstein wrote: “I, at any rate, am convinced that He [God] does not throw dice.” And yet, quantum mechanics — albeit refined and elaborated from the version Einstein knew — survives and continues to describe the sub-atomic world with accuracy.

Ironically, Einstein’s two greatest contributions to physics — special and general relativity — were met with initial skepticism by other physicists. Special relativity rejects absolute space-time; general relativity depicts a universe whose “shape” depends on the masses and motions of the bodies within it. These are not intuitive concepts, given man’s instinctive preference for certainty.

The point of the vignettes about Einstein is that science is not a sterile occupation; it can be (and often is) fraught with psychological-emotional visions of truth. What scientists believe to be true depends, to some degree, on what they want to believe is true. Scientists are simply human beings who happen to be more capable than the average person when it comes to the manipulation of abstract concepts. And yet, scientists are like most of their fellow beings in their need for acceptance and approval. They are fully capable of subscribing to a “truth” if to do otherwise would subject them to the scorn of their peers. Einstein was willing and able to question quantum mechanics because he had long since established himself as a premier physicist, and because he was among that rare breed of humans who are (visibly) unaffected by the opinions of their peers.

Such are the scientists who, today, question their peers’ psychological-emotional attachment to the hypothesis of anthropogenic global warming (AGW). The questioners are not “deniers” or “skeptics”; they are scientists who are willing to look deeper than the facile hypothesis that, more than two decades ago, gave rise to the AGW craze.

It was then that a scientist noted the coincidence of an apparent rise in global temperatures since the late 1800s (or is it since 1975?) and an apparent increase in the atmospheric concentration of CO2. And thus a hypothesis was formed. It was embraced and elaborated by scientists (and others) eager to be au courant, to obtain government grants (conveniently aimed at research “proving” AGW), to be “right” by being in the majority, and — let it be said — to curtail or stamp out human activities which they find unaesthetic. Evidence to the contrary be damned.

Where else have we seen this kind of behavior, albeit in a more murderous guise? At the risk of invoking Hitler, I must answer with this link: Nazi Eugenics. Again, science is not a sterile occupation, exempt from human flaws and foibles.

CONCLUSION

What is truth? Is it an absolute reality that lies beyond human perception? Is it those “answers” that flow logically or mathematically from unproven assumptions? Is it the “answers” that, in some way, please us? Or is it the ways in which we reshape the world to conform it with those “answers”?

Truth, as we are able to know it, is like the human condition: fragile and prone to error.

Atheism, Agnosticism, and Science

I just came across Ron Rosenbaum’s “An Agnostic Manifesto.” Much of what Rosenbaum says accords with my many posts on the subject of atheism, agnosticism, and science:

Atheism, Religion, and Science
The Limits of Science
Three Perspectives on Life: A Parable
Beware of Irrational Atheism
The Creation Model
The Thing about Science
Evolution and Religion
Words of Caution for Scientific Dogmatists
Science, Evolution, Religion, and Liberty
The Legality of Teaching Intelligent Design
Science, Logic, and God
Capitalism, Liberty, and Christianity
Is “Nothing” Possible?
A Dissonant Vision
Debunking “Scientific Objectivity”
Science’s Anti-Scientific Bent
Science, Axioms, and Economics
The Big Bang and Atheism
The Universe . . . Four Possibilities
Einstein, Science, and God
Atheism, Religion, and Science Redux
Pascal’s Wager, Morality, and the State
Evolution as God?
The Greatest Mystery

Modeling, Science, and Physics Envy

Climate Skeptic notes the similarity of climate models and macroeconometric models:

The climate modeling approach is so similar to that used by the CEA to score the stimulus that there is even a climate equivalent to the multiplier found in macro-economic models. In climate models, small amounts of warming from man-made CO2 are multiplied many-fold to catastrophic levels by hypothetical positive feedbacks, in the same way that the first-order effects of government spending are multiplied in Keynesian economic models. In both cases, while these multipliers are the single most important drivers of the models’ results, they also tend to be the most controversial assumptions. In an odd parallel, you can find both stimulus and climate debates arguing whether their multiplier is above or below one.
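The arithmetic behind that parallel is worth making concrete. Here is a minimal sketch (my illustration, not any modeler’s actual code) of the standard linear-feedback formula, in which a first-order effect is amplified by a factor of 1/(1 − g) for an assumed feedback gain g (the same structure as the Keynesian spending multiplier, 1/(1 − MPC)). Small changes in the assumed gain, the most controversial number in the model, swing the result enormously:

```python
# A minimal sketch (illustrative only) of how a feedback assumption dominates
# a model's output: a first-order effect is amplified by 1 / (1 - g), where
# g is the assumed feedback gain (g < 1 for a stable system).

def amplified_effect(direct, gain):
    """Total effect of `direct` after positive feedback with gain `gain`."""
    if gain >= 1:
        raise ValueError("gain must be < 1 for a stable system")
    return direct / (1.0 - gain)

direct_effect = 1.0  # hypothetical first-order effect (e.g., degrees C)

for gain in (0.0, 0.3, 0.5, 0.7, 0.9):
    total = amplified_effect(direct_effect, gain)
    print(f"assumed gain = {gain:.1f} -> total effect = {total:.1f}x direct")

# gain 0.0 -> 1.0x; gain 0.5 -> 2.0x; gain 0.9 -> 10.0x. The least certain
# assumption in the model is also the one that drives its headline result.
```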

Here is my take, from “Modeling Is Not Science“:

The principal lesson to be drawn from the history of massive government programs is that those who were skeptical of those programs were entirely justified in their skepticism. Informed, articulate skepticism of the kind I counsel here is the best weapon — perhaps the only effective one — in the fight to defend what remains of liberty and property against the depredations of massive government programs.

Skepticism often is met with the claim that such-and-such a model is the “best available” on a subject. But the “best available” model — even if it is the best available one — may be terrible indeed. Relying on the “best available” model for the sake of government action is like sending an army into battle — and likely to defeat — on the basis of rumors about the enemy’s position and strength.

With respect to the economy and the climate, there are too many rumor-mongers (“scientists” with an agenda), too many gullible and compliant generals (politicians), and far too many soldiers available as cannon-fodder (the paying public).

Scientists and politicians who stand by models of unfathomably complex processes are guilty of physics envy, at best, and fraud, at worst.

Fooled by Non-Randomness

Nassim Nicholas Taleb, in his best-selling Fooled by Randomness, charges human beings with the commission of many perceptual and logical errors. One reviewer captures the point of the book, which is to

explore luck “disguised and perceived as non-luck (that is, skills).” So many of the successful among us, he argues, are successful due to luck rather than reason. This is true in areas beyond business (e.g. Science, Politics), though it is more obvious in business.

Our inability to recognize the randomness and luck that had to do with making successful people successful is a direct result of our search for pattern. Taleb points to the importance of symbolism in our lives as an example of our unwillingness to accept randomness. We cling to biographies of great people in order to learn how to achieve greatness, and we relentlessly interpret the past in hopes of shaping our future.

Only recently has science produced probability theory, which helps embrace randomness. Though the use of probability theory in practice is almost nonexistent.

Taleb says the confusion between luck and skill is our inability to think critically. We enjoy presenting conjectures as truth and are not equipped to handle probabilities, so we attribute our success to skill rather than luck.

Taleb writes in a style found all too often on best-seller lists: pseudo-academic theorizing “supported” by selective (often anecdotal) evidence. I sometimes enjoy such writing, but only for its entertainment value. Fooled by Randomness leaves me unfooled, for several reasons.

THE FUNDAMENTAL FLAW

The first reason that I am unfooled by Fooled… might be called a meta-reason. Standing back from the book, I am able to perceive its essential defect: According to Taleb, human affairs — especially economic affairs, and particularly the operations of financial markets — are dominated by randomness. But if that is so, only a delusional person can truly claim to understand the conduct of human affairs. Taleb claims to understand the conduct of human affairs. Taleb is therefore either delusional or omniscient.

Given Taleb’s humanity, it is more likely that he is delusional — or simply fooled, but not by randomness. He is fooled because he proceeds from the assumption of randomness instead of exploring the ways and means by which humans are actually capable of shaping events. Taleb gives no more than scant attention to those traits which, in combination, set humans apart from other animals: self-awareness, empathy, forward thinking, imagination, abstraction, intentionality, adaptability, complex communication skills, and sheer brain power. Given those traits (in combination) the world of human affairs cannot be random. Yes, human plans can fail of realization for many reasons, including those attributable to human flaws (conflict, imperfect knowledge, the triumph of hope over experience, etc.). But the failure of human plans is due to those flaws — not to the randomness of human behavior.

What Taleb sees as randomness is something else entirely. The trajectory of human affairs often is unpredictable, but it is not random. For it is possible to find patterns in the conduct of human affairs, as Taleb admits (implicitly) when he discusses such phenomena as survivorship bias, skewness, anchoring, and regression to the mean.

A DISCOURSE ON RANDOMNESS

What Is It?

Taleb, having bloviated for dozens of pages about the failure of humans to recognize randomness, finally gets around to (sort of) defining randomness on pages 168 and 169 (of the 2005 paperback edition):

…Professor Karl Pearson … devised the first test of nonrandomness (it was in reality a test of deviation from normality, which for all intents and purposes, was the same thing). He examined millions of runs of [a roulette wheel] during the month of July 1902. He discovered that, with high degree of statistical significance … the runs were not purely random…. Philosophers of statistics call this the reference case problem to explain that there is no true attainable randomness in practice, only in theory….

…Even the fathers of statistical science forgot that a random series of runs need not exhibit a pattern to look random…. A single random run is bound to exhibit some pattern — if one looks hard enough…. [R]eal randomness does not look random.

The quoted passage illustrates nicely the superficiality of Fooled by Randomness, and (I must assume) the muddledness of Taleb’s thinking:

  • He accepts a definition of randomness which describes the observation of outcomes of mechanical processes (e.g., the turning of a roulette wheel, the throwing of dice) that are designed to yield random outcomes. That is, randomness of the kind cited by Taleb is in fact the result of human intentions.
  • If “there is no true attainable randomness,” why has Taleb written a 200-plus page book about randomness?
  • What can he mean when he says “a random series of runs need not exhibit a pattern to look random”? The only sensible interpretation of that bit of nonsense would be this: It is possible for a random series of runs to contain what looks like a pattern. But remember that the random series of runs to which Taleb refers is random only because humans intended its randomness.
  • It is true enough that “A single random run is bound to exhibit some pattern — if one looks hard enough.” Sure it will. But it remains a single random run of a process that is intended to produce randomness, which is utterly unlike such events as transactions in financial markets.
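Pearson’s procedure, quoted above, is easy to reproduce in miniature. Here is a minimal sketch (my illustration, using a chi-square goodness-of-fit test on simulated spins of an idealized 37-pocket wheel, not Pearson’s 1902 data):

```python
# A minimal sketch (illustrative only -- not Pearson's 1902 data) of a
# Pearson-style test of nonrandomness: a chi-square goodness-of-fit test on
# simulated spins of an idealized 37-pocket roulette wheel.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(1902)
pockets, spins = 37, 100_000

# A fair wheel: every pocket equally likely.
fair = rng.integers(0, pockets, size=spins)

# A biased wheel: pocket 0 comes up slightly too often.
probs = np.full(pockets, 1.0 / pockets)
probs[0] += 0.005
probs /= probs.sum()
biased = rng.choice(pockets, size=spins, p=probs)

for label, outcomes in (("fair wheel", fair), ("biased wheel", biased)):
    observed = np.bincount(outcomes, minlength=pockets)
    stat, p = chisquare(observed)  # null hypothesis: uniform across pockets
    print(f"{label}: chi-square = {stat:.1f}, p = {p:.2g}")

# A tiny p-value flags "nonrandomness" -- which, as the quoted passage says,
# is really just deviation from the assumed (uniform) distribution.
```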

One of the “fathers of statistical science” mentioned by Taleb (deep in the book’s appendix) is Richard von Mises, who in Probability, Statistics and Truth defines randomness as follows:

First, the relative frequencies of the attributes [e.g. heads and tails] must possess limiting values [i.e., converge on 0.5, in the case of coin tosses]. Second, these limiting values must remain the same in all partial sequences which may be selected from the original one in an arbitrary way. Of course, only such partial sequences can be taken into consideration as can be extended indefinitely, in the same way as the original sequence itself. Examples of this kind are, for instance, the partial sequences formed by all odd members of the original sequence, or by all members for which the place number in the sequence is the square of an integer, or a prime number, or a number selected according to some other rule, whatever it may be. (pp. 24-25 of the 1981 Dover edition, which is based on the author’s 1951 edition)
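Von Mises’ two conditions lend themselves to a quick simulation. A minimal sketch (mine, not von Mises’): toss a simulated fair coin a million times and check that the frequency of heads settles near 0.5 in the full sequence and in arbitrarily selected partial sequences:

```python
# A minimal sketch (my illustration) of von Mises' two conditions: the
# relative frequency of an attribute converges to a limiting value, and the
# limit is the same in arbitrarily selected partial sequences.
import numpy as np

rng = np.random.default_rng(42)
tosses = rng.integers(0, 2, size=1_000_000)  # 1 = heads, 0 = tails

places = np.arange(tosses.size)
odd_members = tosses[places % 2 == 1]            # every second toss
square_places = tosses[np.arange(1, 1000) ** 2]  # place numbers that are squares

print(f"full sequence:    {tosses.mean():.4f}")
print(f"odd members:      {odd_members.mean():.4f}")
print(f"square positions: {square_places.mean():.4f}")

# All three frequencies settle near 0.5. A sequence whose partial sequences
# disagreed -- say, odd members mostly heads -- would fail the definition.
```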

Gregory J. Chaitin, writing in Scientific American (“Randomness and Mathematical Proof,” vol. 232, no. 5 (May 1975), pp. 47-52), offers this:

We are now able to describe more precisely the differences between the[se] two series of digits … :

01010101010101010101
01101100110111100010

The first could be specified to a computer by a very simple algorithm, such as “Print 01 ten times.” If the series were extended according to the same rule, the algorithm would have to be only slightly larger; it might be made to read, for example, “Print 01 a million times.” The number of bits in such an algorithm is a small fraction of the number of bits in the series it specifies, and as the series grows larger the size of the program increases at a much slower rate.

For the second series of digits there is no corresponding shortcut. The most economical way to express the series is to write it out in full, and the shortest algorithm for introducing the series into a computer would be “Print 01101100110111100010.” If the series were much larger (but still apparently patternless), the algorithm would have to be expanded to the corresponding size. This “incompressibility” is a property of all random numbers; indeed, we can proceed directly to define randomness in terms of incompressibility: A series of numbers is random if the smallest algorithm capable of specifying it to a computer has about the same number of bits of information as the series itself [emphasis added].

This is another way of saying that if you toss a balanced coin 1,000 times the only way to describe the outcome of the tosses is to list the 1,000 outcomes of those tosses. But, again, the thing that is random is the outcome of a process designed for randomness.
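Chaitin’s incompressibility criterion can be approximated crudely with an off-the-shelf compressor. A minimal sketch (mine; zlib is only a rough stand-in for the shortest possible algorithm, which is uncomputable):

```python
# A minimal sketch of Chaitin's incompressibility criterion, with zlib as a
# crude stand-in for "the smallest algorithm" (true Kolmogorov complexity is
# uncomputable, so a real compressor only gives an upper bound).
import os
import zlib

patterned = b"01" * 500_000           # "Print 01 half a million times"
random_bytes = os.urandom(1_000_000)  # effectively patternless bytes

for label, data in (("patterned", patterned), ("random", random_bytes)):
    ratio = len(zlib.compress(data, level=9)) / len(data)
    print(f"{label}: compressed to {ratio:.2%} of original size")

# The patterned series shrinks to a fraction of a percent of its length; the
# random series does not shrink at all, because its shortest description is
# (nearly) the series itself.
```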

Taking the definitions of Mises and Chaitin together, we can define random events as events which are repeatable, convergent on a limiting value, and truly patternless over a large number of repetitions. Evolving economic events (e.g., stock-market trades, economic growth) are not alike (in the way that dice are, for example); they do not converge on limiting values; and they are not patternless, as I will show.

In short, Taleb fails to demonstrate that human affairs in general or financial markets in particular exhibit randomness, properly understood.

Randomness and the Physical World

Nor are we trapped in a random universe. Returning to Mises, I quote from the final chapter of Probability, Statistics and Truth:

We can only sketch here the consequences of these new concepts [e.g., quantum mechanics and Heisenberg’s principle of uncertainty] for our general scientific outlook. First of all, we have no cause to doubt the usefulness of the deterministic theories in large domains of physics. These theories, built on a solid body of experience, lead to results that are well confirmed by observation. By allowing us to predict future physical events, these physical theories have fundamentally changed the conditions of human life. The main part of modern technology, using this word in its broadest sense, is still based on the predictions of classical mechanics and physics. (p. 217)

Even now, almost 60 years on, the field of nanotechnology is beginning to harness quantum mechanical effects in the service of a long list of useful purposes.

The physical world, in other words, is not dominated by randomness, even though its underlying structures must be described probabilistically rather than deterministically.

Summation and Preview

A bit of unpredictability (or “luck”) here and there does not make for a random universe, random lives, or random markets. If a bit of unpredictability here and there dominated our actions, we wouldn’t be here to talk about randomness — and Taleb wouldn’t have been able to marshal his thoughts into a published, marketed, and well-sold book.

Human beings are not “designed” for randomness. Human endeavors can yield unpredictable results, but those results do not arise from random processes; they derive from skill or the lack thereof, knowledge or the lack thereof (including the kinds of self-delusions about which Taleb writes), and conflicting objectives.

An Illustration from Life

To illustrate my position on randomness, I offer the following digression about the game of baseball.

At the professional level, the game’s poorest players seldom rise above the low minor leagues. But even those poorest players are paragons of excellence when compared with the vast majority of American males of about the same age. Did those poorest players get where they were because of luck? Perhaps some of them were in the right place at the right time, and so were signed to minor league contracts. But their luck runs out when they are called upon to perform in more than a few games. What about those players who weren’t in the right place at the right time, and so were overlooked in spite of skills that would have advanced them beyond the rookie leagues? I have no doubt that there have been many such players. But, in the main, professional baseball is stocked with skilled players who are there because they intend to be there, and because baseball clubs intend for them to be there.

Now, most minor leaguers fail to advance to the major leagues, even for the proverbial “cup of coffee” (appearing in a few games at the end of the major-league season, when teams are allowed to expand their rosters following the end of the minor-league season). Does “luck” prevent some minor leaguers from advancing to “the show” (the major leagues)? Of course. Does “luck” result in the advancement of some minor leaguers to “the show”? Of course. But “luck,” in this context, means injury, illness, a slump, a “hot” streak, and the other kinds of unpredictable events that ballplayers are subject to. Are the events random? Yes, in the sense that they are unpredictable, but I daresay that most baseball players do not succumb to bad luck or advance very far or for very long because of good luck. In fact, ballplayers who advance to the major leagues, and then stay there for more than a few seasons, do so because they possess (and apply) greater skill than their minor-league counterparts. And make no mistake: each player’s actions are so closely watched and so extensively quantified that it isn’t hard to tell when a player is ready to be replaced.

It is true that a player may experience “luck” for a while during a season, and sometimes for a whole season. But a player will not be consistently “lucky” for several seasons. The length of his career (barring illness, injury, or voluntary retirement), and his accomplishments during that career, will depend mainly on his inherent skills and his assiduousness in applying those skills.
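The point is easy to simulate. A minimal sketch (invented numbers, not real baseball data): give each of 100 hypothetical players a fixed true skill plus random season-to-season “luck,” and watch multi-season averages sort the players by skill:

```python
# A minimal sketch (invented numbers, not real baseball data): each player's
# observed batting average is true skill plus per-season "luck" (noise);
# the noise dominates one season but averages out over a career.
import numpy as np

rng = np.random.default_rng(7)
players = 100
skill = rng.normal(0.260, 0.020, size=players)  # true batting-average talent

def observed_average(true_skill, seasons):
    """Mean observed average over `seasons` seasons, with per-season luck."""
    luck = rng.normal(0.0, 0.030, size=(seasons, players))
    return (true_skill + luck).mean(axis=0)

for seasons in (1, 5, 15):
    corr = np.corrcoef(skill, observed_average(skill, seasons))[0, 1]
    print(f"{seasons:>2} season(s): observed averages track true skill, r = {corr:.2f}")

# Over one season, luck blurs the rankings (r well below 1); over many
# seasons, the observed record converges on true skill -- which is why a
# player cannot be consistently "lucky" for several seasons.
```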

No one believes that Ty Cobb, Babe Ruth, Ted Williams, Christy Mathewson, Warren Spahn, and the dozens of other baseball players who rank among the truly great were lucky. No one believes that the vast majority of the tens of thousands of minor leaguers who never enjoyed more than the proverbial cup of coffee were unlucky. No one believes that the vast majority of the millions of American males who never made it to the minor leagues were unlucky. Most of them never sought a career in baseball; those who did simply lacked the requisite skills.

In baseball, as in life, “luck” is mainly an excuse and rarely an explanation. We prefer to apply “luck” to outcomes when we don’t like the true explanations for them. In the realm of economic activity and financial markets, one such explanation (to which I will come) is the exogenous imposition of governmental power.

ARE ECONOMIC AND FINANCIAL OUTCOMES TRULY RANDOM?

They Cannot Be, Given Competition

Returning to Taleb’s main theme — the randomness of economic and financial events — I quote this key passage (my comments are in brackets and boldface):

…Most of [Bill] Gates'[s] rivals have an obsessive jealousy of his success. They are maddened by the fact that he managed to win so big while many of them are struggling to make their companies survive. [These are unsupported claims that I include only because they set the stage for what follows.]

Such ideas go against classical economic models, in which results either come from a precise reason (there is no account for uncertainty) or the good guy wins (the good guy is the one who is most skilled and has some technical superiority). [The “good guy” theory would come as a great surprise to “classical” economists, who quite well understood imperfect competition based on product differentiation and monopoly based on (among other things) early entry into a market.] Economists discovered path-dependent effects late in their game [There is no “late” in a “game” that had no distinct beginning and has no pre-ordained end.], then tried to publish wholesale on the topic that otherwise be bland and obvious. For instance, Brian Arthur, an economist concerned with nonlinearities at the Santa Fe Institute [What kinds of nonlinearities are found at the Santa Fe Institute?], wrote that chance events coupled with positive feedback other than technological superiority will determine economic superiority — not some abstrusely defined edge in a given area of expertise. [It would come as no surprise to economists — even “classical” ones — that many factors aside from technical superiority determine market outcomes.] While early economic models excluded randomness, Arthur explained how “unexpected orders, chance meetings with lawyers, managerial whims … would help determine which ones acheived early sales and, over time, which firms dominated.”

Regarding the final sentence of the quoted passage, I refer back to the example of baseball. A person or a firm may gain an opportunity to succeed because of the kinds of “luck” cited by Brian Arthur, but “good luck” cannot sustain an incompetent performer for very long.  And when “bad luck” happens to competent individuals and firms they are often (perhaps usually) able to overcome it.

While overplaying the role of luck in human affairs, Taleb underplays the role of competition when he denigrates “classical economic models,” in which competition plays a central role. “Luck” cannot forever outrun competition, unless the game is rigged by governmental intervention, namely, the writing of regulations that tend to favor certain competitors (usually market incumbents) over others (usually would-be entrants). The propensity to regulate at the behest of incumbents (who plead “public interest,” of course) is proof of the power of competition to shape economic outcomes. Competition is loathed and feared, and yet it leads us in the direction to which classical economic theory points: greater output and lower prices.

Competition is what ensures that (for the most part) the best ballplayers advance to the major leagues. It’s what keeps “monopolists” like Microsoft hopping (unless they have a government-guaranteed monopoly), because even a monopolist (or oligopolist) can face competition, and eventually lose to it — witness the former “Big Three” auto makers, many formerly thriving chain stores (from Kresge’s to Montgomery Ward’s), and numerous other brand names of days gone by. If Microsoft survives and thrives, it will be because it offers consumers more value for their money than its competitors do, whether they market products similar to Microsoft’s or entirely new products that would supplant them.

Monopolists and oligopolists cannot survive without constant innovation and attention to their customers’ needs. Why? Because they must compete with the offerors of all the other goods and services upon which consumers might spend their money. There is nothing — not even water — which cannot be produced or delivered in competitive ways. (For more, see this.)

The names of the particular firms that survive the competitive struggle may be unpredictable, but what is predictable is the tendency of competitive forces toward economic efficiency. In other words, the specific outcomes of economic competition may be unpredictable (which is not a bad thing), but the general result — efficiency — is neither unpredictable nor a manifestation of randomness or “luck.”

Taleb, had he broached the subject of competition, would (with his hero George Soros) denigrate it, on the ground that there is no such thing as perfect competition. But the failure of competitive forces to mimic the model of perfect competition does not negate the power of competition, as I have summarized it here. Indeed, that failure is not a failure at all, for perfect competition is unattainable in practice, and to hold it up as a measure of the effectiveness of market forces is to indulge in the Nirvana fallacy.

In any event, Taleb’s myopia with respect to competition is so complete that he fails to mention it, let alone address its beneficial effects (even when it is less than perfect). And yet Taleb dares to dismiss as a utopist Milton Friedman (p. 272) — the same Milton Friedman who was among the twentieth century’s foremost advocates of the benefits of competition.

Are Financial Markets Random?

Given what I have said thus far, I find it almost incredible that anyone believes in the randomness of financial markets. It is unclear where Taleb stands on the random-walk hypothesis, but it is clear that he believes financial markets to be driven by randomness. Yet, contradictorily, he seems to attack the efficient-markets hypothesis (see pp. 61-62), which is the foundation of the random-walk hypothesis.

What is the random-walk hypothesis? In brief, it is this: Financial markets are so efficient that they instantaneously reflect all information bearing on the prices of financial instruments that is then available to persons buying and selling those instruments. (The qualifier “then available to persons buying and selling those instruments” leaves the door open for [a] insider trading and [b] arbitrage, due to imperfect knowledge on the part of some buyers and/or sellers.) Because information can change rapidly and in unpredictable ways, the prices of financial instruments move randomly. But the random movement is of a very special kind:

If a stock goes up one day, no stock market participant can accurately predict that it will rise again the next. Just as a basketball player with the “hot hand” can miss the next shot, the stock that seems to be on the rise can fall at any time, making it completely random.

And, therefore, changes in stock prices cannot be predicted.

Note, however, the focus on changes. It is that focus which creates the illusion of randomness and unpredictability. It is like hoping to understand the movements of the planets around the sun by looking at the random movements of a particle in a cloud chamber.

When we step back from day-to-day price changes, we are able to see the underlying reality: prices (instead of changes) and price trends (which are the opposite of randomness). This (correct) perspective enables us to see that stock prices (on the whole) are not random, and to identify the factors that influence the broad movements of the stock market.
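The difference between changes and levels is easy to demonstrate. A minimal sketch (with an invented drift and volatility): the daily changes of a drifting random walk look like pure noise, while the level shows an unmistakable trend:

```python
# A minimal sketch (invented drift and volatility): daily changes of a
# drifting random walk look like noise, but the level trends unmistakably.
import numpy as np

rng = np.random.default_rng(500)
days = 252 * 20                                      # twenty "years"
daily_changes = rng.normal(0.0003, 0.01, size=days)  # small drift, big noise
log_price = np.cumsum(daily_changes)                 # level = sum of changes

# The changes look random: lag-1 autocorrelation near zero.
autocorr = np.corrcoef(daily_changes[:-1], daily_changes[1:])[0, 1]
print(f"lag-1 autocorrelation of daily changes: {autocorr:+.3f}")

# The level tells a different story: a steady upward trend.
print(f"log price after 20 'years': {log_price[-1]:.2f} "
      f"(about {log_price[-1] / 20:.3f} per year)")

# Judged by its day-to-day changes, the series is "random"; judged by its
# level over twenty years, it is anything but.
```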
For one thing, if you look at stock prices correctly, you can see that they vary cyclically. Here is a telling graphic (from “Efficient-market hypothesis” at Wikipedia):

Returns on stocks vs. P/E ratio. Price-earnings ratios as a predictor of twenty-year returns, based upon the plot by Robert Shiller (Figure 10.1). The horizontal axis shows the real price-earnings ratio of the S&P Composite Stock Price Index as computed in Irrational Exuberance (inflation-adjusted price divided by the prior ten-year mean of inflation-adjusted earnings). The vertical axis shows the geometric average real annual return on investing in the S&P Composite Stock Price Index, reinvesting dividends, and selling twenty years later. Data from different twenty-year periods are color-coded as shown in the key. Shiller states that this plot “confirms that long-term investors—investors who commit their money to an investment for ten full years—did do well when prices were low relative to earnings at the beginning of the ten years. Long-term investors would be well advised, individually, to lower their exposure to the stock market when it is high, as it has been recently, and get into the market when it is low.” This correlation between price-earnings ratios and long-term returns is not explained by the efficient-market hypothesis.

Why should stock prices tend to vary cyclically? Because stock prices generally are driven by economic growth (i.e., changes in GDP), and economic growth is strongly cyclical. (See this post.)

More fundamentally, the economic outcomes reflected in stock prices aren’t random, for they depend mainly on intentional behavior along well-rehearsed lines (i.e., the production and consumption of goods and services in ways that evolve over time). Variations in economic behavior, even when they are unpredictable, have explanations; for example:

  • Innovation and capital investment spur the growth of economic output.
  • Natural disasters slow the growth of economic output (at least temporarily) because they absorb resources that could have gone to investment  (as well as consumption).
  • Governmental interventions (taxation and regulation), if not reversed, dampen growth permanently.

There is nothing in those three statements that hasn’t been understood since the days of Adam Smith. Regarding the third statement, the general slowing of America’s economic growth since the advent of the Progressive Era around 1900 is certainly not due to randomness; it is due to the ever-increasing burden of taxation and regulation imposed on the economy — an entirely predictable result, and certainly not a random one.

In fact, the long-term trend of the stock market (as measured by the S&P 500) is strongly correlated with GDP. And broad swings around that trend can be traced to governmental intervention in the economy. The following graph shows how the S&P 500, reconstructed to 1870, parallels constant-dollar GDP:

Real S&P 500 vs. real GDP

The next graph shows the relationship more clearly.

Real S&P 500 vs. real GDP (closer view)
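For readers who want to check the correlation for themselves, here is a minimal sketch (the file and column names are hypothetical placeholders; the real series must be assembled from sources such as Shiller’s reconstruction and the BEA’s GDP tables):

```python
# A minimal sketch of checking the correlation. The file names and column
# names are hypothetical placeholders -- the real series must be supplied.
import numpy as np
import pandas as pd

sp = pd.read_csv("real_sp500_annual.csv", index_col="year")  # hypothetical file
gdp = pd.read_csv("real_gdp_annual.csv", index_col="year")   # hypothetical file
df = sp.join(gdp, how="inner")  # align the two annual series by year

# Compare log levels, so the correlation reflects proportional growth.
log_sp, log_gdp = np.log(df["real_sp500"]), np.log(df["real_gdp"])
print(f"correlation of log real S&P 500 with log real GDP: "
      f"{np.corrcoef(log_sp, log_gdp)[0, 1]:.2f}")

# Deviations from the fitted trend are where episodes of governmental
# intervention (wars, price controls, monetary policy) would show up.
slope, intercept = np.polyfit(log_gdp, log_sp, 1)
residuals = log_sp - (slope * log_gdp + intercept)
print(residuals.describe())
```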

The wild swings around the trend line began in the uncertain aftermath of World War I, which saw the imposition of production and price controls. The swings continued with the onset of the Great Depression (which can be traced to governmental action), the advent of the anti-business New Deal, and the imposition of production and price controls on a grand scale during World War II. The next downswing was occasioned by the culmination of the Great Society, the “oil shocks” of the early 1970s, and the raging inflation that was touched off by — you guessed it — government policy. The latest downswing is owed mainly to the financial crisis born of yet more government policy: loose money and easy loans to low-income borrowers.

And so it goes, wildly but predictably enough if you have the faintest sense of history. The moral of the story: Keep your eye on government and a hand on your wallet.

CONCLUSION

There is randomness in economic affairs, but those affairs are not dominated by randomness. They are dominated by intentions, including especially the intentions of the politicians and bureaucrats who run governments. Yet Taleb has no space in his book for the influence of their deeds on economic activity and financial markets.

Taleb is right to disparage those traders (professional and amateur) who are lucky enough to catch upswings, but are unprepared for downswings. And he is right to scoff at their readiness to believe that the current upswing (uniquely) will not be followed by a downswing (“this time it’s different”).

But Taleb is wrong to suggest that traders are fooled by randomness. They are fooled to some extent by false hope, but more profoundly by their inability to perceive the economic damage wrought by government. They are not alone, of course; most of the rest of humanity shares their perceptual failings.

Taleb, in that respect, is only somewhat different from most of the rest of humanity. He is not fooled by false hope, but he is fooled by non-randomness — the non-randomness of government’s decisive influence on economic activity and financial markets. In overlooking that influence he overlooks the single most powerful explanation for the behavior of markets in the past 90 years.

Freedom of Will and Political Action

INTRODUCTION

Without freedom of will, human beings could not make choices, let alone “good” or “bad” ones. Without freedom of will there would be no point in arguing about political philosophies, for — as the song says — whatever will be, will be.

This post examines freedom of will from the vantage point of indeterminacy. Instead of offering a direct proof of freedom of will, I suggest that we might as well believe, and act, as though we possess it, given our inability to plumb the depths of the physical universe or the human psyche.

Why focus on indeterminacy? Think of my argument as a variant of Pascal’s wager, which can be summarized as follows:

Even though the existence of God cannot be determined through reason, a person should wager as though God exists, because so living has everything to gain, and nothing to lose.

Whatever its faults — and it has many — Pascal’s wager suggests a way out of an indeterminate situation.

The wager I make in this post is as follows:

  • We cannot discern the deepest physical and psychological truths.
  • Therefore, we cannot say with certainty whether we have freedom of will.
  • We might as well act as if we have freedom of will; if we do not have it, our (illusory) choices cannot make us worse off, but if we do have it our choices may make us better off.

The critical word in the conclusion is “may.” Our choices may make us better off, but only if they are wise choices. It is “may” which gives weight to our moral and political choices. The wrong ones can make us worse off; the right ones, better off.
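The structure of the wager can be laid out as a tiny decision matrix. A minimal sketch (mine, with ordinal payoffs standing in for “better off” and “no effect”):

```python
# A minimal sketch of the wager as a decision matrix. The payoffs are
# ordinal stand-ins of my invention: 0 = no effect, +1 = possibly better off.
payoffs = {
    "act as if we have free will": {"no free will": 0, "free will": +1},
    "act as if we do not":         {"no free will": 0, "free will": 0},
}

for action, outcomes in payoffs.items():
    print(f"{action:28s} -> {outcomes}")

# "Act as if we have free will" weakly dominates: it is never worse in
# either state, and it may be better if we do have freedom of will. The
# "+1" is only a *may* -- it depends on the choices being wise ones.
```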

PHYSICAL INDETERMINACY

Our Inherent Limitations as Humans

I begin with the anthropic principle, which (as summarized and discussed here),

refers to the idea that the attributes of the universe must be consistent with the requirements of our own existence.

In fact, there is no scientific reason to believe that the universe was created in order that human beings might exist. From a scientific standpoint, we are creatures of the universe, not its raison d’etre.

The view that we, the human inhabitants of Earth, have a privileged position is a bias that distorts our observations about the universe. Philosopher-physicist-mathematician Nick Bostrom explains the bias:

[T]here are selection effects that arise not from the limitations of some measuring device but from the fact that all observations require the existence of an appropriately positioned observer. Our data is [sic] filtered not only by limitations in our instrumentation but also by the precondition that somebody be there to “have” the data yielded by the instruments (and to build the instruments in the first place). The biases that occur due to that precondition … we shall call … observation selection effects….

Even trivial selection effects can sometimes easily be overlooked:

It was a good answer that was made by one who when they showed him hanging in a temple a picture of those who had paid their vows as having escaped shipwreck, and would have him say whether he did not now acknowledge the power of the gods,—‘Aye,’ asked he again, ‘but where are they painted that were drowned after their vows?’ And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happens much oftener, neglect and pass them by. (Bacon 1620)

When even a plain and simple selection effect, such as the one that Francis Bacon comments on in the quoted passage, can escape a mind that is not paying attention, it is perhaps unsurprising that observation selection effects, which tend to be more abstruse, have only quite recently been given a name and become a subject of systematic study.

The term “anthropic principle” … is less than three decades old. There are, however, precursors from much earlier dates. For example, in Hume’s Dialogues Concerning Natural Religion, one can find early expressions of some ideas of anthropic selection effects. Some of the core elements of Kant’s philosophy about how the world of our experience is conditioned on the forms of our sensory and intellectual faculties are not completely unrelated to modern ideas about observation selection effects as important methodological considerations in theory-evaluation, although there are also fundamental differences. In Ludwig Boltzmann’s attempt to give a thermodynamic account of time’s arrow …, we find for perhaps the first time a scientific argument that makes clever use of observation selection effects…. A more successful invocation of observation selection effects was made by R. H. Dicke (Dicke 1961), who used it to explain away some of the “large-number coincidences”, rough order-of-magnitude matches between some seemingly unrelated physical constants and cosmic parameters, that had previously misled such eminent physicists as Eddington and Dirac into a futile quest for an explanation involving bold physical postulations.

The modern era of anthropic reasoning dawned quite recently, with a series of papers by Brandon Carter, another cosmologist. Carter coined the term “anthropic principle” in 1974, clearly intending it to convey some useful guidance about how to reason under observation selection effects….

The term “anthropic” is a misnomer. Reasoning about observation selection effects has nothing in particular to do with homo sapiens, but rather with observers in general…
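Bacon’s shipwreck example, quoted in the passage above, is a selection effect that a few lines of simulation make vivid. A minimal sketch (mine, with invented probabilities): vows do nothing for survival, yet a sample drawn from the temple’s pictures “confirms” their power:

```python
# A minimal sketch (invented probabilities) of Bacon's selection effect:
# vows have no effect on survival, but only vow-payers who survived are
# painted in the temple, so the observable sample "confirms" the vows.
import numpy as np

rng = np.random.default_rng(1620)
voyagers = 100_000

paid_vows = rng.random(voyagers) < 0.5  # half the voyagers pay vows
survived = rng.random(voyagers) < 0.9   # survival is independent of vows

# The full population shows no effect of vows:
print(f"survival among vow-payers: {survived[paid_vows].mean():.3f}")
print(f"survival among non-payers: {survived[~paid_vows].mean():.3f}")

# But the observable sample -- vow-payers pictured as having escaped --
# consists of survivors by construction:
observed = paid_vows & survived
print(f"survival among those painted in the temple: {survived[observed].mean():.3f}")
# "They that were drowned after their vows" are never observed at all.
```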

We humans, as the relevant observers of the physical world, can perceive only those patterns that we are capable of perceiving, given the wiring of our brains and the instruments that we design with the use of our brains. Because of our inherent limitations, the limitations that our limitations impose on our instruments, and the inherent limitations of the instruments, we may never be able to see all that there is to see in the universe, even in that part of the universe which is close at hand.

We may never know, for example, whether physical laws change or remain the same in all places and for all time. We may never know (as a matter of scientific observation) how the universe originated, given that its cause(s) (whether Divine or otherwise) may lie outside the boundaries of the universe.

Implications for the Physical Sciences

It follows that the order which we find in the universe may bear no resemblance to the real order of the universe. It may simply be the case that we are incapable of perceiving certain phenomena and the physical laws that guide them, which — for all we know — may change from place to place and time to time.

A good case in point involves the existence of God, which many doubt and many others deny. The doubters and deniers are unable to perceive the existence of God, whereas many believers claim that they can do so. But the inability of doubters and deniers to perceive the existence of God does not disprove God’s existence, as an honest doubter or denier will admit.

It is trite but true to say that we do not know what we do not know; that is, there are unknown unknowns. Given our limitations as observers, the universe likely contains many unknown unknowns that will never become known unknowns.

Given our limitations, we must make do with our perceptions of the universe. Making do means that we learn what we are able to learn (imperfectly) about the universe and its components, and we then use our imperfect knowledge to our advantage wherever possible. (A crude analogy occurs in baseball, where a batter who doesn’t understand why a curveball curves is nevertheless able to hit one.)

THE INDETERMINACY OF HUMAN BEHAVIOR

The tautologous assumption that individuals act in such a way as to maximize their happiness tells us nothing about political or economic outcomes. (The assumption remains tautologous despite altruism, which is nothing more than another way of enhancing the happiness of altruistic individuals.) We can know nothing about the likely course of political and economic events until we know something about the psychological drives that shape those events. Even if we know something (or a great deal) about psychological drives, can we ever know enough to say that human behavior is (or is not) deterministic? The answer I offer here is “no.”

A Conflict of Visions

Economic and political behavior depends greatly on human psychology. For example, Thomas Sowell, in A Conflict of Visions, posits two opposing visions: the unconstrained vision (I would call idealism) and the constrained vision (which I would call realism). At the end of chapter 2, Sowell summarizes the difference between the two visions:

The dichotomy between constrained and unconstrained visions is based on whether or not inherent limitations of man are among the key elements included in each vision…. These different ways of conceiving man and the world lead not merely to different conclusions but to sharply divergent, often diametrically opposed, conclusions on issues ranging from justice to war.

Thus, in chapter 5, Sowell writes:

The enormous importance of evolved systemic interactions in the constrained vision does not make it a vision of collective choice, for the end results are not chosen at all — the prices, output, employment, and interest rates emerging from competition under laissez-faire economics being the classic example. Judges adhering closely to the written law — avoiding the choosing of results per se — would be the analogue in law. Laissez-faire economics and “black letter” law are essentially frameworks, with the locus of substantive discretion being innumerable individuals.

By contrast,

those in the tradition of the unconstrained vision almost invariably assume that some intellectual and moral pioneers advance far beyond their contemporaries, and in one way or another lead them toward ever-higher levels of understanding and practice. These intellectual and moral pioneers become the surrogate decision-makers, pending the eventual progress of mankind to the point where all can make moral decisions.

Digging Deeper

Sowell’s analysis is enlightening, but not comprehensive. The human psyche has many more facets than political realism and idealism. Consider the “Big Five” personality traits:

In psychology, the “Big Five” personality traits are five broad factors or dimensions of personality developed through lexical analysis. This is the rational and statistical analysis of words related to personality as found in natural-language dictionaries.[1] The traits are also referred to as the “Five Factor Model” (FFM).

The model is considered to be the most comprehensive empirical or data-driven enquiry into personality. The first public mention of the model was in 1933, by L. L. Thurstone in his presidential address to the American Psychological Association. Thurstone’s comments were published in Psychological Review the next year.[2]

The five factors are Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (OCEAN, or CANOE if rearranged). Some disagreement remains about how to interpret the Openness factor, which is sometimes called “Intellect.” [3] Each factor consists of a cluster of more specific traits that correlate together. For example, extraversion includes such related qualities as sociability, excitement seeking, impulsiveness, and positive emotions.

The “Big Five” model is open to criticism, but even assuming its perfection we are left with an unpredictable human psyche. For example, I tested myself (here), with the following results:

Extraversion — 4th percentile
Agreeableness — 4th percentile
Conscientiousness — 99th percentile
Emotional stability — 12th percentile
Openness — 93rd percentile

(NOTE: “Emotional stability” is also called “neuroticism,” “a tendency to experience unpleasant emotions easily, such as anger, anxiety, depression, or vulnerability.” My “neuroticism” doesn’t involve anxiety, except to the extent that I am super-conscientious and, therefore, bothered by unfinished business. Nor does it involve depression or vulnerability. But I am easily angered by incompetence, stupidity, and carelessness. There is far too much of that stuff in the world, which explains my low scores on “extraversion” and “agreeableness.” “Openness” measures my intellectual openness, of course, and not my openness to people.)

I daresay that anyone else who happens to have the same scores as mine (which are only transitory) will have arrived at those scores by an entirely different route. That is, he or she probably differs from me on many of the following dimensions: age, race, ethnic/genetic inheritance, income and education of parents and self, location of residence, marital status, number and gender of children (if any), and tastes in food, drink, and entertainment. The list could go on, but the principle should be obvious: There is no accounting for psychological differences, or if there is, the accounting is beyond our ken.

Is everyone with my psychological-genetic-demographic profile a radical-right-minarchist like me? I doubt it very much. But even if that were so, it would be impossible to collect the data to prove it, whereas the (likely) case of a single exception would disprove it.

A Caveat, of Sorts

There is something in the human psyche that seems to drive us toward statism. What that says about human nature is almost trite: Happiness — for many humans — involves neither wealth-maximization nor liberty. It involves attitudes that can be expressed as “safety in numbers,” “going along with the crowd,” and “harm is worse than gain.” And it involves the political manipulation of those attitudes in the service of a drive that is not universal but which can dominate events, namely, the drive to power.

CONCLUSION

The preceding caveat notwithstanding, I have made the case that I set out to make:

We might as well act as if we have freedom of will; if we do not have it, our (illusory) choices cannot make us worse off, but if we do have it our choices may make us better off.

In fact, the caveat points to the necessity of acting as if we have freedom of will. Only by doing so can we hope to overcome the psychological tendencies that cause us political and economic harm. For those tendencies are just that — tendencies. They are not iron rules of conduct. And they have been overcome before.

Modeling Is Not Science

The title of this post applies, inter alia, to econometric models — especially those that purport to forecast macroeconomic activity — and climate models — especially those that purport to forecast global temperatures. I have elsewhere essayed my assessments of macroeconomic and climate models. (See this and this, for example.) My purpose here is to offer a general warning about models that claim to depict and forecast the behavior of connected sets of phenomena (systems) that are large, complex, and dynamic. I draw, in part, on a paper that I wrote 28 years ago. That paper is about warfare models, but it has general applicability.

HEMIBEL THINKING

Philip M. Morse and George E. Kimball, pioneers in the field of military operations research — the analysis and modeling of military operations — wrote that the

successful application of operations research usually results in improvements by factors of 3 or 10 or more. . . . In our first study of any operation we are looking for these large factors of possible improvement. . . .

One might term this type of thinking “hemibel thinking.” A bel is defined as a unit in a logarithmic scale corresponding to a factor of 10. Consequently a hemibel corresponds to a factor of the square root of 10, or approximately 3. (Methods of Operations Research, 1946, p. 38)

This is science-speak for the following proposition: In large, complex, and dynamic systems (e.g., war, economy, climate) there is much uncertainty about the relevant parameters, about how to characterize their interactions mathematically, and about their numerical values.

Hemibel thinking assumes great importance in light of the imprecision inherent in models of large, complex, and dynamic systems. Consider, for example, a simple model with only 10 parameters. Even if such a model doesn’t omit crucial parameters or mischaracterize their interactions, its results must be taken with large doses of salt. Simple mathematics tells the cautionary tale: if the parameters’ errors compound multiplicatively, an error of about 12 percent in the value of each parameter can produce a result that is off by a factor of 3, a hemibel (1.12^10 ≈ 3.1); an error of about 25 percent in each parameter can produce a result that is off by a factor of 10 (1.25^10 ≈ 9.3). (Remember, this is a model of a relatively small system.)
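To make the arithmetic concrete, here is a minimal sketch (in Python) of the hypothetical worst case: a model whose output is simply the product of its 10 parameters, with every parameter off by the same fraction in the same direction. The multiplicative form is an assumption chosen for illustration, not anyone's actual model:

```
# Worst case for illustration: the model's output is the product of its
# 10 parameters, and every parameter errs by the same fraction in the
# same direction. Errors then compound as (1 + error) ** 10.
n_params = 10

for per_param_error in (0.122, 0.259):
    factor = (1 + per_param_error) ** n_params
    print(f"{per_param_error:.1%} error per parameter -> "
          f"result off by a factor of {factor:.1f}")
```

Run it and the two hemibel landmarks appear: a 12.2 percent per-parameter error compounds to a factor of about 3.2, and a 25.9 percent error to a factor of about 10.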

If you think that models and “data” about such things as macroeconomic activity and climatic conditions cannot be as inaccurate as that, you have no idea how such models are devised or how such data are collected and reported. It would be kind to say that such models are incomplete, inaccurate guesswork. It would be fair to say that all too many of them reflect their developers’ policy biases.

Of course, given a (miraculously) complete model, data errors might (miraculously) be offsetting, but don’t bet on it. It’s not that simple: Some errors will be large and some errors will be small (but which are which?), and the errors may lie in either direction (but in which direction?). In any event, no amount of luck can prevent a modeler from constructing a model whose estimates advance a favored agenda (e.g., massive, indiscriminate government spending; massive, futile, and costly efforts to cool the planet).
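A quick Monte Carlo sketch, again using a hypothetical 10-parameter multiplicative model, shows why offsetting is a bad bet even when every error is unbiased and independent:

```
import random

random.seed(1)
n_params, n_trials = 10, 100_000

# Each parameter's true value is 1.0; each "measured" value is off by
# an independent, unbiased random factor of up to 25 percent. The true
# result is therefore 1.0, so each trial's output is its error factor.
error_factors = []
for _ in range(n_trials):
    result = 1.0
    for _ in range(n_params):
        result *= random.uniform(0.75, 1.25)
    error_factors.append(result)

error_factors.sort()
print(f"median result:        {error_factors[n_trials // 2]:.2f}")
print(f"5th/95th percentiles: {error_factors[n_trials // 20]:.2f} / "
      f"{error_factors[-(n_trials // 20)]:.2f}")
```

Even under those charitable assumptions, a sizable minority of runs (on the order of one in seven, with these settings) comes out wrong by a factor of 2 or more, in one direction or the other. Real errors, of course, are neither unbiased nor independent.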

NO MODEL IS EVER PROVEN

The construction of a model is only one part of the scientific method. A model means nothing unless it can be tested repeatedly against facts (facts not already employed in the development of the model) and, through such tests, is found to be more accurate than alternative explanations of the same facts. As Morse and Kimball put it,

[t]o be valuable [operations research] must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. (Op. cit., p. 10)

Even after rigorous testing, a model is never proven. It is, at best, a plausible working hypothesis about relations between the phenomena that it encompasses.

A model is never proven for two reasons. First, new facts may be discovered that do not comport with the model. Second, the facts upon which a model is based may be open to a different interpretation, that is, they may support a new model that yields better predictions than its predecessor.
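The kind of testing that Morse and Kimball demand can be mocked up in a few lines. The sketch below is no one's actual model: it fits a trivial trend extrapolator to the early portion of a synthetic series, then scores it, one step ahead at a time, on observations it never saw:

```
import random

random.seed(3)
# Synthetic "history": a modest trend plus noise. No real data here.
history = [0.5 * t + random.gauss(0, 2.0) for t in range(30)]
train, holdout = history[:20], history[20:]

# "Model": the next value equals the last value plus the average step
# observed during training.
avg_step = (train[-1] - train[0]) / (len(train) - 1)

# One-step-ahead errors on the data used to build the model...
in_sample = [abs((train[t - 1] + avg_step) - train[t])
             for t in range(1, len(train))]

# ...and on data the model has never seen.
last, out_of_sample = train[-1], []
for actual in holdout:
    out_of_sample.append(abs((last + avg_step) - actual))
    last = actual  # the model learns the truth only after predicting

print(f"mean error, data used to fit the model: "
      f"{sum(in_sample) / len(in_sample):.2f}")
print(f"mean error, data the model never saw:   "
      f"{sum(out_of_sample) / len(out_of_sample):.2f}")
```

The particular numbers matter less than the discipline: a model that has been scored only against the data used to build it has not been tested at all.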

The fact that a model cannot be proven can be taken as an excuse for action: “We must act on the best information we have.” That excuse — which justifies an entire industry, namely, government-funded analysis — does not fly, as I discuss below.

MODELS LIE WHEN LIARS MODEL

Any model is dangerous in the hands of a skilled, persuasive advocate. A numerical model is especially dangerous because:

  • There is abroad a naïve belief in the authoritativeness of numbers. A bad guess (even if unverifiable) seems to carry more weight than an honest “I don’t know.”
  • Relatively few people are both qualified and willing to examine the parameters of a numerical model, the interactions among those parameters, and the data underlying the values of the parameters and the magnitudes of their interactions.
  • It is easy to “torture” or “mine” the data underlying a numerical model so as to produce a model that comports with the modeler’s biases (stated or unstated).

There are many ways to torture or mine data. For example:

  • omitting certain variables in favor of others;
  • focusing on data for a selected period of time (and not testing the results against all the data);
  • adjusting data without fully explaining or justifying the basis for the adjustment;
  • using proxies for missing data without examining the biases that result from the use of particular proxies.
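The second item, period selection, is easy to demonstrate. Everything in the sketch below is synthetic (a gentle upward trend plus noise), and the start year is chosen only after looking at the results, which is precisely the abuse at issue:

```
import random

random.seed(42)
years = list(range(1980, 2015))
# Synthetic series: a small upward trend plus noise -- not real data.
series = [0.02 * (y - 1980) + random.gauss(0, 0.25) for y in years]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

print(f"trend, all data: {ols_slope(years, series):+.4f} per year")

# Now shop for the start year that makes the trend look flattest,
# keeping at least 10 points in the window.
flattest = min(range(1980, 2005),
               key=lambda y0: abs(ols_slope(years[y0 - 1980:],
                                            series[y0 - 1980:])))
print(f"trend since {flattest} (start year chosen after the fact): "
      f"{ols_slope(years[flattest - 1980:], series[flattest - 1980:]):+.4f}"
      " per year")
```

Reporting only the second number, without disclosing how the start year was chosen, is data-mining in miniature.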

So, the next time you read about research that purports to “prove” or “predict” such-and-such about a complex phenomenon — be it the future course of economic activity or global temperatures — take a deep breath and ask these questions:

  • Is the “proof” or “prediction” based on an explicit model, one that is or can be written down? (If the answer is “no,” you can confidently reject the “proof” or “prediction” without further ado.)
  • Are the data underlying the model available to the public? If there is some basis for confidentiality (e.g., where the data reveal information about individuals or are derived from proprietary processes), are the data available to researchers upon the execution of confidentiality agreements?
  • Are significant portions of the data reconstructed, adjusted, or represented by proxies? If the answer is “yes,” it is likely that the model was intended to yield “proofs” or “predictions” of a certain type (e.g., global temperatures are rising because of human activity).
  • Are there well-documented objections to the model? (It takes only one well-founded objection to disprove a model, regardless of how many so-called scientists stand behind it.) If there are such objections, have they been answered fully, with factual evidence, or merely dismissed (perhaps with accompanying scorn)?
  • Has the model been tested rigorously by researchers who are unaffiliated with the model’s developers? With what results? Are the results highly sensitive to the data underlying the model; for example, does the omission or addition of another year’s worth of data change the model or its statistical robustness? Does the model comport with observations made after the model was developed?
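As for the last question in the list, a sensitivity check is easy to sketch. The data below are again synthetic; the test is simply whether refitting with one more “year” of data materially changes the fitted trend:

```
import random

random.seed(7)
# Fifteen synthetic "annual" observations: weak trend, sizable noise.
data = [0.01 * t + random.gauss(0, 0.3) for t in range(15)]

def ols_slope(ys):
    """Least-squares slope of ys against 0, 1, 2, ..."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in enumerate(ys))
            / sum((x - mx) ** 2 for x in range(n)))

print(f"trend, first 14 years: {ols_slope(data[:-1]):+.4f} per year")
print(f"trend, all 15 years:   {ols_slope(data):+.4f} per year")
```

With a short, noisy record, a single added observation can change the size, and sometimes the sign, of the fitted trend. A model that passes such checks earns a measure of trust; one that fails them deserves none.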

For two masterful demonstrations of the role of data manipulation and concealment in the debate about climate change, read Steve McIntyre’s presentation and this paper by Syun-Ichi Akasofu. For a masterful demonstration of a model that proves what it was designed to prove by the assumptions built into it, see this.

IMPLICATIONS

Government policies can be dangerous and impoverishing things. Despite that, it is hard (if not impossible) to modify and reverse government policies. Consider, for example, the establishment of public schools more than a century ago, the establishment of Social Security more than 70 years ago, and the establishment of Medicare and Medicaid more than 40 years ago. There is plenty of evidence that all four institutions are monumentally expensive failures. But all four institutions have become so entrenched that to call for their abolition is to be thought of as an eccentric, if not an uncaring anti-government zealot. (For the latest about public schools, see this.)

The principal lesson to be drawn from the history of massive government programs is that those who were skeptical of those programs were entirely justified in their skepticism. Informed, articulate skepticism of the kind I counsel here is the best weapon — perhaps the only effective one — in the fight to defend what remains of liberty and property against the depredations of massive government programs.

Skepticism often is met with the claim that such-and-such a model is the “best available” on a subject. But even the “best available” model may be terrible indeed. Relying on it for the sake of government action is like sending an army into battle — and probably to defeat — on the basis of rumors about the enemy’s position and strength.

With respect to the economy and the climate, there are too many rumor-mongers (“scientists” with an agenda), too many gullible and compliant generals (politicians), and far too many soldiers available as cannon-fodder (the paying public).

CLOSING THOUGHTS

The average person is so mystified and awed by “science” that he has little if any understanding of its limitations and pitfalls, some of which I have addressed here in the context of modeling. The average person’s mystification and awe are unjustified, given that many so-called scientists exploit the public’s mystification and awe in order to advance personal biases, gain the approval of other scientists (whence “consensus”), and garner funding for research that yields results congenial to its sponsors (e.g., global warming is an artifact of human activity).

Isaac Newton, who must be numbered among the greatest scientists in human history, was not a flawless scientist. (Has there ever been one?) But scientists and non-scientists alike should heed Newton on the subject of scientific humility:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me. (Quoted in Horace Freeland Judson, The Search for Solutions, 1980, p. 5.)