Science and Understanding

The Fragility of Knowledge

A recent addition to the collection of essays at “Einstein’s Errors” relies mainly on Christoph von Mettenheim’s Popper versus Einstein. One of Mettenheim’s key witnesses for the prosecution of Einstein’s special theory of relativity (STR) is Alfred Tarski, a Polish-born logician and mathematician. According to Mettenheim, Tarski showed

that all the axioms of geometry [upon which STR is built] are in fact nominalistic definitions, and therefore have nothing to do with truth, but only with expedience. [p. 86]

Later:

Tarski has demonstrated that logical and mathematical inferences can never yield an increase of empirical information because they are based on nominalistic definitions of the most simple terms of our language. We ourselves give them their meaning and cannot, therefore, get out of them anything but what we ourselves have put into them. They are tautological in the sense that any information contained in the conclusion must also have been contained in the premises. This is why logic and mathematics alone can never lead to scientific discoveries. [p. 100]

Mettenheim refers also to Alfred North Whitehead, a great English mathematician and philosopher who preceded Tarski. I am reading Whitehead’s Science and the Modern World thanks to my son, who recently wrote about it. I had heretofore only encountered the book in bits and snatches. I will have more to say about it in future posts. For now, I am content to quote this relevant passage, which presages Tarski’s theme and goes beyond it:

Thought is abstract; and the intolerant use of abstractions is the major vice of the intellect. This vice is not wholly corrected by the recurrence to concrete experience. For after all, you need only attend to those aspects of your concrete experience which lie within some limited scheme. There are two methods for the purification of ideas. One of them is dispassionate observation by means of the bodily senses. But observation is selection. [p. 18]

More to come.

Mettenheim on Einstein’s Relativity

I have added “Mettenheim on Einstein’s Relativity – Part I” to “Einstein’s Errors”. The new material draws on Part I of Christoph von Mettenheim’s Popper versus Einstein: On the Philosophical Foundations of Physics (Tübingen: Mohr Siebeck, 1998). Mettenheim strikes many telling blows against STR. These go to the heart of STR and Einstein’s view of science:

[T]o Einstein the axiomatic method of Euclidean geometry was the method of all science; and the task of the scientist was to find those fundamental truths from which all other statements of science could then be derived by purely logical inference. He explicitly said that the step from geometry to physics was to be achieved by simply adding to the axioms of Euclidean geometry one single further axiom, namely the sentence

Regarding the possibilities of their position solid physical bodies will behave like the bodies of Euclidean geometry.

Popper versus Einstein, p. 30

*     *     *

[T]he theory of relativity as Einstein stated it was a mathematical theory. To him the logical necessity of his theory served as an explanation of its results. He believed that nature itself would observe the rules of logic. His words were that

experience of course remains the sole criterion of the serviceability of a mathematical construction for physics, but the truly creative principle resides in mathematics.

Popper versus Einstein, pp. 61-62

*     *     *

There’s much, much more. Go there and see for yourself.

Bayesian Irrationality

I just came across a strange and revealing statement by Tyler Cowen:

I am frustrated by the lack of Bayesianism in most of the religious belief I observe. I’ve never met a believer who asserted: “I’m really not sure here. But I think Lutheranism is true with p = .018, and the next strongest contender comes in only at .014, so call me Lutheran.” The religious people I’ve known rebel against that manner of framing, even though during times of conversion they may act on such a basis.

I don’t expect all or even most religious believers to present their views this way, but hardly any of them do. That in turn inclines me to think they are using belief for psychological, self-support, and social functions.

I wouldn’t expect anyone to say something like “Lutheranism is true with p = .018”. Lutheranism is either true or false, just as a person on trial is either guilty or innocent. One may have doubts about the truth of Lutheranism or the guilt of a defendant, but those doubts have nothing to do with probability. Neither does Bayesianism.

In defense of probability, I will borrow heavily from myself. According to Wikipedia (as of December 19, 2014):

Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials.

An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.”

Or consider this account:

The Bayesian approach to learning is based on the subjective interpretation of probability. The value of the proportion p is unknown, and a person expresses his or her opinion about the uncertainty in the proportion by means of a probability distribution placed on a set of possible values of p….
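
To make the quoted account concrete (without endorsing it), here is a minimal sketch in Python of the kind of updating it describes. The Beta prior and the observed data are hypothetical choices of mine, not anything from the quoted source:

    # A sketch of the subjective-Bayesian updating described above.
    # The prior and the data are hypothetical.

    # Beta(1, 1) prior over the unknown proportion p -- a uniform
    # "opinion" that treats every value of p as equally plausible.
    a, b = 1, 1

    # Hypothetical observations: 7 successes in 10 trials.
    successes, failures = 7, 3

    # With a Beta prior and binomial data, the posterior is again a Beta
    # distribution (conjugacy): just add the counts to the prior parameters.
    a_post, b_post = a + successes, b + failures

    # Posterior mean: the updated "level of certainty" about p.
    print(a_post / (a_post + b_post))  # 8/12 = 0.666...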

“Level of certainty” and “subjective interpretation” mean “guess.” The guess may be “educated.” It’s well known, for example, that a balanced coin will come up heads about half the time, in the long run. But to say that “I’m 50-percent confident that the coin will come up heads” is to say nothing meaningful about the outcome of a single coin toss. There are as many “probabilities” for the outcome of a single toss as there are bystanders who are willing to make a statement like “I’m x-percent confident that the coin will come up heads.” Which means that a single toss doesn’t have a probability, though it can be the subject of many opinions as to the outcome.

Returning to reality, Richard von Mises eloquently explains frequentism in Probability, Statistics and Truth (second revised English edition, 1957). Here are some excerpts:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. [P. 11]

*     *     *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. [P. 11]

*     *     *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute in a given collective’. [Pp. 11-12]

*     *     *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective. [P. 15, emphasis in the original]

*     *     *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? [P. 33, emphasis added]
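
Von Mises’s limiting relative frequency is easy to exhibit by simulation. A minimal sketch in Python (the number of tosses and the checkpoints are arbitrary choices of mine):

    import random

    random.seed(42)  # make the run reproducible

    # Toss a fair coin 100,000 times; report the relative frequency of
    # heads at several checkpoints. Von Mises's "probability" is the
    # limit this frequency tends to as the sequence is continued.
    heads = 0
    checkpoints = {10, 100, 1_000, 10_000, 100_000}
    for n in range(1, 100_001):
        heads += random.random() < 0.5  # one toss; True counts as 1
        if n in checkpoints:
            print(f"{n:>7} tosses: relative frequency of heads = {heads / n:.4f}")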

Cowen has always struck me as intellectually askew — looking at things from odd angles just for the sake of doing so. In that respect he reminds me of a local news anchor whose suits, shirts, ties, and pocket handkerchiefs almost invariably clash in color and pattern. If there’s a method to his madness, other than attention-getting, it’s lost on me — as is Cowen’s skewed, attention-getting way of thinking.

Modeling Revisited

Arnold Kling comments on a post by John Taylor, who writes about the Macroeconomic Modelling and Model Comparison Network (MMCN), which

is one part of a larger project called the Macroeconomic Model Comparison Initiative (MMCI)…. That initiative includes the Macroeconomic Model Data Base, which already has 82 models that have been developed by researchers at central banks, international institutions, and universities. Key activities of the initiative are comparing solution methods for speed and accuracy, performing robustness studies of policy evaluations, and providing more powerful and user-friendly tools for modelers.

Kling says: “Why limit the comparison to models? Why not compare models with verbal reasoning?” I say: a pox on economic models, whether they are mathematical or verbal.

That said, I do harbor special disdain for mathematical models, including statistical estimates of such models. Reality is nuanced. Verbal descriptions of reality, being more nuanced than mathematics, can represent it more closely than mathematics can.

Mathematical modelers are quick to point out that a mathematical model can express complex relationships which are difficult to express in words. True, but the words must always precede the mathematics. Long usage may enable a person to grasp the meaning of 2 + 2 = 4 without consciously putting it into words, but only because he has already done so and committed the formula to memory.

Do you remember word problems? As I remember them, the words came first:

John is twenty years younger than Amy, and in five years’ time he will be half her age. What is John’s age now?

Then came the math:

Solve for J [John’s age]:

J = A − 20
J + 5 = (A + 5) / 2

[where A = Amy’s age]
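
Solving the system gives J = 15 and A = 35 (in five years John will be 20, half of Amy’s 40). A quick check in Python, assuming the SymPy library is available:

    from sympy import Eq, solve, symbols

    J, A = symbols("J A")  # John's and Amy's ages

    solution = solve(
        [Eq(J, A - 20),             # John is twenty years younger than Amy
         Eq(J + 5, (A + 5) / 2)],   # in five years he will be half her age
        [J, A],
    )
    print(solution)  # John is 15, Amy is 35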

What would be the point of presenting the math, then asking for the words?

Mathematics is a man-made tool. It probably started with counting. Sheep? Goats? Bananas? It doesn’t matter what it was. What matters is that the actual thing, which had a spoken name, came before the numbering convention that enabled people to refer to three sheep without having to draw or produce three actual sheep.

But … when it came to bartering sheep for loaves of bread, or whatever, those wily ancestors of ours knew that sheep come in many sizes, ages, states of fecundity and health, and in two sexes. (Though I suppose that the LGBTQ movement has by now “discovered” homosexual and transgender sheep, and transsexual sheep may be in the offing.) Anyway, there are so many possible combinations of sizes, ages, fecundity, and states of health that it was (and is) impractical to reduce them to numbers. A quick, verbal approximation would have to do in the absence of the real thing. And the real thing would have to be produced before Grog and Grok actually exchanged X sheep for Y loaves of bread, unless they absolutely trusted each other’s honesty and descriptive ability.

Things are somewhat different in this age of mass production and commodification. But even if it’s possible to add sheep that have been bred for near-uniformity or nearly identical loaves of bread or Paper Mate Mirado Woodcase Pencils, HB 2, Yellow Barrel, it’s not possible to add those pencils to the sheep and the loaves of bread. The best that one could do is to list the components of such a conglomeration by name and number, with the caveat that there’s a lot of variability in the sheep, goats, bananas, and bread.

An economist would say that it is possible to add a collection of disparate things: Just take the sales price of each one, multiply it by the quantity sold, and if you do that for every product and service produced in the U.S. during a year you have an estimate of GDP. (I’m being a bit loose with the definition of GDP, but it’s good enough for the point I wish to make.) Further, some economists will tout this or that model which estimates changes in the value of GDP as a function of such things as interest rates, the rate of government spending, and estimates of projected consumer spending.
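
That arithmetic is simple enough to sketch in Python; the goods, prices, and quantities below are hypothetical, chosen only to illustrate the summation:

    # Nominal GDP, loosely: sum of (sales price x quantity sold) over
    # all final goods and services. All figures here are hypothetical.
    transactions = {
        "sheep":   (150.00, 2_000),
        "bread":   (  2.50, 900_000),
        "pencils": (  0.30, 5_000_000),
    }

    gdp = sum(price * qty for price, qty in transactions.values())
    print(f"Estimated GDP: ${gdp:,.2f}")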

I don’t disagree that GDP can be computed or that economic models can be concocted. But I do say that such computations and models, aside from being notoriously inaccurate (even though they deal in dollars, not in quantities of various products and services), are essentially meaningless. Aside from the errors that are inevitable in the use of sampling to estimate the dollar value of billions of transactions, there is the essential meaninglessness of the dollar value. Every transaction represented in an estimate of GDP (or any lesser aggregation) has a different real value to each participant in the transaction. Further, those real values, even if they could be measured and expressed in “utils”, can’t be summed because “utils” are incommensurate — there is no such thing as a social-welfare function.

Quantitative aggregations are not only meaningless, but their existence simply encourages destructive government interference in economic affairs. Mathematical modeling of “aggregate economic activity” (there is no such thing) may serve as an amusing and even lucrative pastime, but it does nothing to advance the lives and fortunes of the vast majority of Americans. In fact, it serves to retard their lives and fortunes.

All of that because pointy-headed academics, power-lusting politicians, and bamboozled bureaucrats believe that economic aggregates and quantitative economic models are meaningful. If they spent more than a few minutes thinking about what those models are supposed to represent — and don’t and can’t represent — they would at least use them with a slight pang of conscience. (I hold little hope that they would abandon them. The allure of power and the urge to “do something” are just too strong.)

Economic aggregates and models gain meaning and precision only as their compass shrinks to discrete markets for closely similar products and services. But even in the quantification of such markets there will always be some kind of misrepresentation by aggregation, if only because tastes, preferences, materials, processes, and relative prices change constantly. Only a fool believes that a quantitative economic model (of any kind) is more than a rough approximation of past reality — an approximation that will fade quickly as time marches on.

Economist Tony Lawson puts it this way:

Given the modern emphasis on mathematical modelling it is important to determine the conditions in which such tools are appropriate or useful. In other words we need to uncover the ontological presuppositions involved in the insistence that mathematical methods of a certain sort be everywhere employed. The first thing to note is that all these mathematical methods that economists use presuppose event regularities or correlations. This makes modern economics a form of deductivism.

A closed system in this context just means any situation in which an event regularity occurs. Deductivism is a form of explanation that requires event regularities. Now event regularities can just be assumed to hold, even if they cannot be theorised, and some econometricians do just that and dedicate their time to trying to uncover them. But most economists want to theorise in economic terms as well. But clearly they must do so in terms that guarantee event regularity results.

The way to do this is to formulate theories in terms of isolated atoms. By an atom I just mean a factor that has the same independent effect whatever the context. Typically human individuals are portrayed as the atoms in question, though there is nothing essential about this. Notice too that most debates about the nature of rationality are beside the point. Mainstream modellers just need to fix the actions of the individual of their analyses to render them atomistic, i.e., to fix their responses to given conditions. It is this implausible fixing of actions that tends to be expressed though, or is the task of, any rationality axiom. But in truth any old specification will do, including fixed rule or algorithm following as in, say, agent based modelling; the precise assumption used to achieve this matters little.

Once some such axiom or assumption-fixing behaviour is made economists can predict/deduce what the factor in question will do if stimulated. Finally the specification in this way of what any such atom does in given conditions allows the prediction activities of economists ONLY if nothing is allowed to counteract the actions of the atoms of analysis. Hence these atoms must additionally be assumed to act in isolation. It is easy to show that this ontology of closed systems of isolated atoms characterises all of the substantive theorising of mainstream economists.

It is also easy enough to show that the real world, the social reality in which we actually live, is of a nature that is anything but a set of closed systems of isolated atoms (see Lawson, [Economics and Reality, London and New York: Routledge] 1997, [Reorienting Economics, London and New York: Routledge] 2003).

Mathematical-statistical descriptions of economic phenomena are either faithful (if selective) depictions of one-off events (which are unlikely to recur) or highly stylized renditions of complex chains of events (which almost certainly won’t recur). As Arnold Kling says in his review of Richard Bookstaber’s The End of Theory,

people are assumed to know, now and for the indefinite future, the entire range of possibilities, and the likelihood of each. The alternative assumption, that the future has aspects that are not foreseeable today, goes by the name of “radical uncertainty.” But we might just call it the human condition. Bookstaber writes that radical uncertainty “leads the world to go in directions we had never imagined…. The world could be changing right now in ways that will blindside you down the road.”

I’m picking on economics because it’s an easy target. But the “hard sciences” have their problems, too. See, for example, my work in progress about Einstein’s special theory of relativity.


Related reading:

John Cochrane, “Mallaby, the Fed, and Technocratic Illusions”, The Grumpy Economist, July 5, 2017

Vincent Randall, “The Uncertainty Monster: Lessons from Non-Orthodox Economics”, Climate Etc., July 5, 2017

Related posts:

Modeling Is Not Science
Microeconomics and Macroeconomics
Why the “Stimulus” Failed to Stimulate
Baseball Statistics and the Consumer Price Index
The Keynesian Multiplier: Phony Math
Further Thoughts about the Keynesian Multiplier
The Wages of Simplistic Economics
The Essence of Economics
Economics and Science
Economists As Scientists
Mathematical Economics
Economic Modeling: A Case of Unrewarded Complexity
Economics from the Bottom Up
Unorthodox Economics: 1. What Is Economics?
Unorthodox Economics: 2. Pitfalls
Unorthodox Economics: 3. What Is Scientific about Economics?
Unorthodox Economics 4: A Parable of Political Economy

“Science” vs. Science: The Case of Evolution, Race, and Intelligence

If you were to ask those people who marched for science whether they believe in evolution, they would answer with a resounding “yes”. Ask them if they believe that all branches of the human race evolved identically and you will be met with hostility. The problem, for them, is that an admission of the obvious — differential evolution, resulting in broad racial differences — leads to a fact that they don’t want to admit: there are broad racial differences in intelligence, differences that must have evolutionary origins.

“Science” — the cherished totem of left-wing ideologues — isn’t the same thing as science. The totemized version consists of whatever set of facts and hypotheses suit the left’s agenda. In the case of “climate change”, for example, the observation that in the late 1900s temperatures rose for a period of about 25 years coincident with a reported rise in the level of atmospheric CO2 occasioned the hypothesis that the generation of CO2 by humans causes temperatures to rise. This is a reasonable hypothesis, given the long-understood, positive relationship between temperature and so-called greenhouse gases. But it comes nowhere close to confirming what leftists seem bent on believing and “proving” with hand-tweaked models, which is that if humans continue to emit CO2, and do so at a higher rate than in the past, temperatures will rise to the point that life on Earth will become difficult if not impossible to sustain. There is ample evidence to support the null hypothesis (that “climate change” isn’t catastrophic) and the alternative view (that recent warming is natural and caused mainly by things other than human activity).

Leftists want to believe in catastrophic anthropogenic global warming because it suits the left’s puritanical agenda, as did Paul Ehrlich’s discredited thesis that population growth would outstrip the availability of food and resources, leading to mass starvation and greater poverty. Population control therefore became a leftist mantra, and remains one despite the generally rising prosperity of the human race and the diminution of scarcity (except where leftist governments, like Venezuela’s, create misery).

Why are leftists so eager to believe in problems that portend catastrophic consequences which “must” be averted through draconian measures, such as enforced population control, taxes on soft drinks above a certain size, the prohibition of smoking not only in government buildings but in all buildings, and decreed reductions in CO2-emitting activities (which would, in fact, help to impoverish humans)? The common denominator of such measures is control. And yet, by the process of psychological projection, leftists are always screaming “fascist” at libertarians and conservatives who resist control.

Returning to evolution, why are leftists so eager to embrace it or, rather, what they choose to believe about it? My answers are that (a) it’s “science” (it’s only science when it’s spelled out in detail, uncertainties and all) and (b) it gives leftists (who usually are atheists) a stick with which to beat “creationists”.

But when it comes to race, leftists insist on denying what’s in front of their eyes: evolutionary disparities in such phenomena as skin color, hair texture, facial structure, running and jumping ability, cranial capacity, and intelligence.

Why? Because the urge to control others is of a piece with the superiority with which leftists believe they’re endowed because they are mainly white persons of European descent and above-average intelligence (just smart enough to be dangerous). Blacks and Hispanics who vote left do so mainly for the privileges it brings them. White leftists are their useful idiots.

Leftism, in other words, is a manifestation of “white privilege”, which white leftists feel compelled to overcome through paternalistic condescension toward blacks and other persons of color. (But not East Asians or the South Asians who have emigrated to the U.S., because the high intelligence of those groups is threatening to white leftists’ feelings of superiority.) What could be more condescending, and less scientific, than to deny what evolution has wrought in order to advance a political agenda?

Leftist race-denial, which has found its way into government policy, is akin to Stalin’s support of Lysenkoism, which its author cleverly aligned with Marxism. Lysenkoism

rejected Mendelian inheritance and the concept of the “gene”; it departed from Darwinian evolutionary theory by rejecting natural selection.

This brings me to Stephen Jay Gould, a leading neo-Lysenkoist and a fraudster of “science” who did much to deflect science from the question of race and intelligence:

[In The Mismeasure of Man] Gould took the work of a 19th century physical anthropologist named Samuel George Morton and made it ridiculous. In his telling, Morton was a fool and an unconscious racist — his project of measuring skull sizes of different ethnic groups conceived in racism and executed in same. Why, Morton clearly must have thought Caucasians had bigger brains than Africans, Indians, and Asians, and then subconsciously mismeasured the skulls to prove they were smarter.

The book then casts the entire project of measuring brain function — psychometrics — in the same light of primitivism.

Gould’s antiracist book was a hit with reviewers in the popular press, and many of its ideas about the morality and validity of testing intelligence became conventional wisdom, persisting today among the educated folks. If you’ve got some notion that IQ doesn’t measure anything but the ability to take IQ tests, that intelligence can’t be defined or may not be real at all, that multiple intelligences exist rather than a general intelligence, you can thank Gould….

Then, in 2011, a funny thing happened. Researchers at the University of Pennsylvania went and measured old Morton’s skulls, which turned out to be just the size he had recorded. Gould, according to one of the co-authors, was nothing but a “charlatan.”

The study itself couldn’t matter, though, could it? Well, recent work using MRI technology has established that descendants of East Asia have slightly more cranial capacity than descendants of Europe, who in turn have a little more than descendants of Africa. Another meta-analysis finds a mild correlation between brain size and IQ performance.

You see where this is going, especially if you already know about the racial disparities in IQ testing, and you’d probably like to hit the brakes before anybody says… what, exactly? It sounds like we’re perilously close to invoking science to argue for genetic racial superiority.

Am I serious? Is this a joke?…

… The reason the joke feels dangerous is that it incorporates a fact that is rarely mentioned in public life. In America, white people on average score higher than black people on IQ tests, by a margin of 12-15 points. And there’s one man who has been made to pay the price for that fact — the scholar Charles Murray.

Murray didn’t come up with a hypothesis of racial disparity in intelligence testing. He simply co-wrote a book, The Bell Curve, that publicized a fact well known within the field of psychometrics, a fact that makes the rest of us feel tremendously uncomfortable.

Nobody bears more responsibility for the misunderstanding of Murray’s work than Gould, who reviewed The Bell Curve savagely in the New Yorker. The IQ tests couldn’t be explained away — here he is acknowledging the IQ gap in 1995 — but the validity of IQ testing could be challenged. That was no trouble for the old Marxist.

Gould should have known that he was dead wrong about his central claim — that general intelligence, or g, as psychologists call it, was unreal. In fact, “Psychologists generally agree that the greatest success of their field has been in intelligence testing,” biologist Bernard D. Davis wrote in the Public Interest in 1983, in a long excoriation of Gould’s strange ideas.

Psychologists have found that performance on almost any test of cognition will have some correlation to other tests of cognition, even in areas that might seem distant from pure logic, such as recognizing musical notes. The more demanding tests have a higher correlation, or a high g load, as they term it.

IQ is very closely related to this measure, and turns out to be extraordinarily predictive not just for how well one does on tests, but on all sorts of real-life outcomes.

Since the publication of The Bell Curve, the data have demonstrated not just those points, but that intelligence is highly heritable (around 50 to 80 percent, Murray says), and that there’s little that can be done to permanently change the part that’s dependent on the environment….

The liberal explainer website Vox took a swing at Murray earlier this year, publishing a rambling 3,300-word hit job on Murray that made zero references to the scientific literature….

Vox might have gotten the last word, but a new outlet called Quillette published a first-rate rebuttal this week, which sent me down a three-day rabbit hole. I came across some of the most troubling facts I’ve ever encountered — IQ scores by country — and then came across some more reassuring ones from Thomas Sowell, suggesting that environment could be the main or exclusive factor after all.

The classic analogy from the environment-only crowd is of two handfuls of genetically identical seed corn, one planted in Iowa and the other in the Mojave Desert. One group flourishes; the other is stunted. While all of the variation within one group will be due to genetics, its flourishing relative to the other group will be strictly due to environment.

Nobody doubts that the United States is richer soil than Equatorial Guinea, but the analogy doesn’t prove the case. The idea that there exists a mean for human intelligence and that all racial subgroups would share it given identical environments remains a metaphysical proposition. We may want this to be true quite desperately, but it’s not something we know to be true.

For all the lines of attack, all the brutal slander thrown Murray’s way, his real crime is having an opinion on this one key issue that’s open to debate. Is there a genetic influence on the IQ testing gap? Murray has written that it’s “likely” genetics explains “some” of the difference. For this, he’s been crucified….

Murray said [in a recent interview] that the assumption “that everyone is equal above the neck” is written into social policy, employment policy, academic policy and more.

He’s right, of course, especially as ideas like “disparate impact” come to be taken as proof of discrimination. There’s no scientifically valid reason to expect different ethnic groups to have a particular representation in this area or that. That much is utterly clear.

The universities, however, are going to keep hollering about institutional racism. They are not going to accept Murray’s views, no matter what develops. [Jon Cassidy, “Mau Mau Redux: Charles Murray Comes in for Abuse, Again”, The American Spectator, June 9, 2017]

And so it goes in the brave new world of alternative facts, most of which seem to come from the left. But the left, with its penchant for pseudo-intellectualism (“science” vs. science), calls it postmodernism:

Postmodernists … eschew any notion of objectivity, perceiving knowledge as a construct of power differentials rather than anything that could possibly be mutually agreed upon…. [S]cience therefore becomes an instrument of Western oppression; indeed, all discourse is a power struggle between oppressors and oppressed. In this scheme, there is no Western civilization to preserve—as the more powerful force in the world, it automatically takes on the role of oppressor and therefore any form of equity must consequently then involve the overthrow of Western “hegemony.” These folks form the current Far Left, including those who would be described as communists, socialists, anarchists, Antifa, as well as social justice warriors (SJWs). These are all very different groups, but they all share a postmodernist ethos. [Michael Aaron, “Evergreen State and the Battle for Modernity”, Quillette, June 8, 2017]


Other related reading (listed chronologically):

Molly Hensley-Clancy, “Asians With ‘Very Familiar Profiles’: How Princeton’s Admissions Officers Talk About Race”, BuzzFeed News, May 19, 2017

Warren Meyer, “Princeton Appears To Penalize Minority Candidates for Not Obsessing About Their Race”, Coyote Blog, May 24, 2017

B. Wineguard et al., “Getting Voxed: Charles Murray, Ideology, and the Science of IQ”, Quillette, June 2, 2017

James Thompson, “Genetics of Racial Differences in Intelligence: Updated”, The Unz Review: James Thompson Archive, June 5, 2017

Raymond Wolters, “We Are Living in a New Dark Age”, American Renaissance, June 5, 2017

F. Roger Devlin, “A Tactical Retreat for Race Denial”, American Renaissance, June 9, 2017

Scott Johnson, “Mugging Mr. Murray: Mr. Murray Speaks”, Power Line, June 9, 2017


Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Round Up the Usual Suspects
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race
Affirmative Action Comes Home to Roost
The IQ of Nations
Race and Social Engineering
Some Notes about Psychology and Intelligence

Quantum Mechanics and Free Will

Physicist Adam Frank, in “Minding Matter” (Aeon, March 13, 2017), visits subjects that I have approached from several angles in various posts. Frank addresses the manifestation of brain activity — more properly, the activity of the central nervous system (CNS) — which is known as consciousness. But there’s a lot more to CNS activity than that. What it all adds up to is generally called “mind”, which has conscious components (things we are aware of, including being aware of being aware) and subconscious components (things that go on in the background that we might or might not become aware of).

In the traditional (non-mystical) view, each person’s mind is separate from the minds of other persons. Mind (or the concepts, perceptions, feelings, memories, etc. that comprise it) therefore defines self. I am my self (i.e., not you) because my mind is a manifestation of my body’s CNS, which isn’t physically linked to yours.

With those definitional matters in hand, Frank’s essay can be summarized and interpreted as follows:

According to materialists, mind is nothing more than a manifestation of CNS activity.

The underlying physical properties of the CNS are unknown because the nature of matter is unknown.

Matter, whatever it is, doesn’t behave in billiard-ball fashion, where cause and effect are tightly linked.

Instead, according to quantum mechanics, matter has probabilistic properties that supposedly rule out strict cause-and-effect relationships. The act of measuring matter resolves the uncertainty, but in an unpredictable way.

Mind is therefore a mysterious manifestation of quantum-mechanical processes. One’s state of mind is affected by how one “samples” those processes, that is, by one’s deliberate, conscious attempt to use one’s CNS in formulating the mind’s output (e.g., thoughts and interpretations of the world around us).

Because of the ability of mind to affect mind (“mind over matter”), it is more than merely a passive manifestation of the physical state of one’s CNS. It is, rather, a meta-state — a physical state that is created by “mental” processes that are themselves physical.

In sum, mind really isn’t immaterial. It’s just a manifestation of poorly understood material processes that can be influenced by the possessor of a mind. It’s the ultimate self-referential system, a system that can monitor and change itself to some degree.

None of this means that human beings lack free will. In fact, the complexity of mind argues for free will. This is from a 12-year-old post of mine:

Suppose I think that I might want to eat some ice cream. I go to the freezer compartment and pull out an unopened half-gallon of vanilla ice cream and an unopened half-gallon of chocolate ice cream. I can’t decide between vanilla, chocolate, some of each, or none. I ask a friend to decide for me by using his random-number generator, according to rules of his creation. He chooses the following rules:

  • If the random number begins in an odd digit and ends in an odd digit, I will eat vanilla.
  • If the random number begins in an even digit and ends in an even digit, I will eat chocolate.
  • If the random number begins in an odd digit and ends in an even digit, I will eat some of each flavor.
  • If the random number begins in an even digit and ends in an odd digit, I will not eat ice cream.

Suppose that the number generated by my friend begins in an even digit and ends in an even digit: the choice is chocolate. I act accordingly.

I didn’t inevitably choose chocolate because of events that led to the present state of my body’s chemistry, which might otherwise have dictated my choice. That is, I broke any link between my past and my choice about a future action. I call that free will.

I suspect that our brains are constructed in such a way as to produce the same kind of result in many situations, though certainly not in all situations. That is, we have within us the equivalent of an impartial friend and an (informed) decision-making routine, which together enable us to exercise something we can call free will.
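
The quoted decision rule is mechanical enough to put into code. A minimal sketch in Python (the range of the random number is an arbitrary assumption of mine):

    import random

    def choose(number: int) -> str:
        """Apply the quoted rules to a number's first and last digits."""
        digits = str(number)
        first_odd = int(digits[0]) % 2 == 1
        last_odd = int(digits[-1]) % 2 == 1
        if first_odd and last_odd:
            return "vanilla"
        if not first_odd and not last_odd:
            return "chocolate"
        if first_odd:                # odd first digit, even last digit
            return "some of each"
        return "no ice cream"        # even first digit, odd last digit

    # The "impartial friend": a random number whose digits decide.
    n = random.randint(10, 99_999)
    print(n, "->", choose(n))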

This rudimentary metaphor is consistent with the quantum nature of the material that underlies mind. But I don’t believe that free will depends on quantum mechanics. I believe that there is a part of mind — a part with a physical location — which makes independent judgments and arrives at decisions based on those judgments.

To extend the ice-cream metaphor, I would say that my brain’s executive function, having become aware of my craving for ice cream, taps my knowledge (memory) of snacks on hand, or directs the part of my brain that controls my movements to look in the cupboard and freezer. My executive function, having determined that my craving isn’t so urgent that I will drive to a grocery store, then compiles the available options and chooses the one that seems best suited to the satisfaction of my craving at that moment. It may be ice cream, or it may be something else. If it is ice cream, it will consult my “taste preferences” and choose between the flavors then available to me.

Given the ways in which people are seen to behave, it seems obvious that the executive function, like consciousness, is on a “different circuit” from other functions (memory, motor control, autonomic responses, etc.), just as the software programs that drive my computer’s operations are functionally separate from the data stored on the hard drive and in memory. The software programs would still be on my computer even if I erased all the data on my hard drive and in memory. So, too, would my executive function (and consciousness) remain even if I lost all memory of everything that happened to me before I awoke this morning.

Given this separateness, there should be no question that a person has free will. That is why I can sometimes resist a craving for ice cream. That is why most people are often willing and able to overcome urges, from eating candy to smoking a cigarette to punching a jerk.

Conditioning, which leads to addiction, makes it hard to resist urges — sometimes nigh unto impossible. But the ability of human beings to overcome conditioning, even severe addictions, argues for the separateness of the executive function from other functions. In short, it argues for free will.


Related posts:
Free Will: A Proof by Example?
Free Will, Crime, and Punishment
Mind, Cosmos, and Consciousness
“Feelings, Nothing More than Feelings”
Hayek’s Anticipatory Account of Consciousness
Is Consciousness an Illusion?

Special Relativity

I have removed my four posts about special relativity and incorporated them in a new page, “Einstein’s Errors.” I will update that page occasionally rather than post about special relativity, which is somewhat “off the subject” for this blog.

A True Scientist Speaks

I am reading, with great delight, Old Physics for New: A Worldview Alternative to Einstein’s Relativity Theory, by Thomas E. Phipps Jr. (1925-2016). Dr. Phipps was a physicist who happened to have been a member of a World War II operations research unit that evolved into the think-tank where I worked for 30 years.

Phipps challenged the basic tenets of Einstein’s special theory of relativity (STR) in Old Physics for New, an earlier book (Heretical Verities: Mathematical Themes in Physical Description), and many of his scholarly articles. I have drawn on Old Physics for New in two of my posts about STR (this and this), and will do so in future posts on the subject. But aside from STR, about which Phipps is refreshingly skeptical, I admire his honesty and clear-minded view of science.

Regarding Phipps’s honesty, I turn to his preface to the second edition of Old Physics for New:

[I]n the first edition I wrongly claimed awareness of two “crucial” experiments that would decide between Einstein’s special relativity theory and my proposed alternative. These two were (1) an accurate assessment of stellar aberration and (2) a measurement of light speed in orbit. Only the first of these is valid. The other was an error on my part, which I am obligated and privileged to correct here. [pp. xi-xii]

Phipps’s clear-minded view of science is evident throughout the book. In the preface, he scores a direct hit on pseudo-scientific faddism:

The attitude of the traditional scientist toward lies and errors has always been that it is his job to tell the truth and to eradicate mistakes. Lately, scientists, with climate science in the van, have begun openly to espouse an opposite view, a different paradigm, which marches under the black banner of “post-normal science.”

According to this new perception, before the scientist goes into his laboratory it is his duty, for the sake of mankind, to study the worldwide political situation and to decide what errors need promulgating and what lies need telling. Then he goes into his laboratory, interrogates his computer, fiddles his theory, fabricates or massages his data, etc., and produces the results required to support those predetermined lies and errors. Finally he emerges into the light of publicity and writes reports acceptable to like-minded bureaucrats in such government agencies as the National Science Foundation, offers interviews to reporters working for like-minded bosses in the media, testifies before Congress, etc., all in such a way as to suppress traditional science and ultimately to make it impossible….

In this way post-normal science wages pre-emptive war on what Thomas Kuhn famously called “normal science,” because the latter fails to promote with adequate zeal those political and social goals that the post-normal scientist happens to recognize as deserving promotion…. Post-normal behavior seamlessly blends the implacable arrogance of the up-to-date terrorist with the technique of The Big Lie, pioneered by Hitler and Goebbels…. [pp. xii-xiii]

I regret deeply that I never met or corresponded with Dr. Phipps.

Nature, Nurture, and Leniency

I recently came across an article by Brian Boutwell, “Why Parenting May not Matter and Why Most Social Science Research Is Probably Wrong” (Quillette, December 1, 2015). Boutwell is an associate professor of criminology and criminal justice at Saint Louis University. Here’s some of what he has to say about nature, nurture, and behavior:

Despite how it feels, your mother and father (or whoever raised you) likely imprinted almost nothing on your personality that has persisted into adulthood…. I do have evidence, though, and by the time we’ve strolled through the menagerie of reasons to doubt parenting effects, I think another point will also become evident: the problems with parenting research are just a symptom of a larger malady plaguing the social and health sciences. A malady that needs to be dealt with….

[L]et’s start with a study published recently in the prestigious journal Nature Genetics. Tinca Polderman and colleagues just completed the Herculean task of reviewing nearly all twin studies published by behavior geneticists over the past 50 years….

Genetic factors were consistently relevant, differentiating humans on a range of health and psychological outcomes (in technical parlance, human differences are heritable). The environment, not surprisingly, was also clearly and convincingly implicated….

[B]ehavioral geneticists make a finer grain distinction than most about the environment, subdividing it into shared and non-shared components. Not much is really complicated about this. The shared environment makes children raised together similar to each other. The term encompasses the typical parenting effects that we normally envision when we think about environmental variables. Non-shared influences capture the unique experiences of siblings raised in the same home; they make siblings different from one another….

Based on the results of classical twin studies, it just doesn’t appear that parenting—whether mom and dad are permissive or not, read to their kid or not, or whatever else—impacts development as much as we might like to think. Regarding the cross-validation that I mentioned, studies examining identical twins separated at birth and reared apart have repeatedly revealed (in shocking ways) the same thing: these individuals are remarkably similar when in fact they should be utterly different (they have completely different environments, but the same genes). Alternatively, non-biologically related adopted children (who have no genetic commonalities) raised together are utterly dissimilar to each other—despite in many cases having decades of exposure to the same parents and home environments.

One logical explanation for this is a lack of parenting influence for psychological development. Judith Rich Harris made this point forcefully in her book The Nurture Assumption…. As Harris notes, parents are not to blame for their children’s neuroses (beyond the genes they contribute to the manufacturing of that child), nor can they take much credit for their successful psychological adjustment. To put a finer point on what Harris argued, children do not transport the effects of parenting (whatever they might be) outside the home. The socialization of children certainly matters (remember, neither personality nor temperament is 100 percent heritable), but it is not the parents who are the primary “socializers”, that honor goes to the child’s peer group….

Is it possible that parents really do shape children in deep and meaningful ways? Sure it is…. The trouble is that most research on parenting will not help you in the slightest because it doesn’t control for genetic factors….

Natural selection has wired into us a sense of attachment for our offspring. There is no need to graft on beliefs about “the power of parenting” in order to justify our instinct that being a good parent is important. Consider this: what if parenting really doesn’t matter? Then what? The evidence for pervasive parenting effects, after all, looks like a foundation of sand likely to slide out from under us at any second. If your moral constitution requires that you exert god-like control over your kid’s psychological development in order to treat them with the dignity afforded any other human being, then perhaps it is time to recalibrate your moral compass…. If you want happy children, and you desire a relationship with them that lasts beyond when they’re old enough to fly the nest, then be good to your kids. Just know that it probably will have little effect on the person they will grow into.

Color me unconvinced. There’s a lot of hand-waving in Boutwell’s piece, but little in the way of crucial facts, such as:

  • How is behavior quantified?
  • Does the quantification account for all aspects of behavior (unlikely), or only those aspects that are routinely quantified (e.g., criminal convictions)?
  • Is it meaningful to say that about 50 percent of behavior is genetically determined, 45 percent is peer-driven, and 0-5 percent is due to “parenting” (as Judith Rich Harris does)? Which 50 percent, 45 percent, and 0-5 percent? And how does one add various types of behavior?
  • How does one determine (outside an unrealistic experiment) the extent to which “children do not transport the effects of parenting (whatever they might be) outside the home”?

The measurement of behavior can’t possibly be as rigorous and comprehensive as the measurement of intelligence. And even those researchers who are willing to countenance and estimate the heritability of intelligence give varying estimates of its magnitude, ranging from 50 to 80 percent.

I wonder if Boutwell, Harris, et al. would like to live in a world in which parents quit teaching their children to obey the law; refrain from lying, stealing, and hurting others; honor their obligations; respect old people; treat babies with care; and work for a living (“money doesn’t grow on trees”).

Unfortunately, the world in which we live — even in the United States — seems more and more to resemble the kind of world in which parents have failed in their duty to inculcate in their children the values of honesty, respect, and hard work. This is from a post at Dyspepsia Generation, “The Spoiled Children of Capitalism” (no longer online):

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it. And, sadly, they passed their principles, if one may use the term so loosely, down the generations to the point where young people today are scarcely worth using for fertilizer.

In 1919, or 1929, or especially 1939, the adolescents of 1969 would have had neither the leisure nor the money to create the Woodstock Nation. But mommy and daddy shelled out because they didn’t want their little darlings to be caught short, and consequently their little darlings became the worthless whiners who voted for people like Bill Clinton and Barack Obama [and who were people like Bill Clinton and Barack Obama: ED.], with results as you see them. Now that history is catching up to them, a third generation of losers can think of nothing better to do than camp out on Wall Street in hopes that the Cargo will suddenly begin to arrive again.

Good luck with that.

I subscribe to the view that the rot set in after World War II. That rot, in the form of slackerism, is more prevalent now than it ever was. It is not for nothing that Gen Y is also known as the Boomerang Generation.

Nor is it surprising that campuses have become hotbeds of petulant and violent behavior. And it’s not just students, but also faculty and administrators — many of whom are boomers. Where were these people before the 1960s, when the boomers came of age? Do you suppose that their sudden emergence was the result of a massive genetic mutation that swept across the nation in the late 1940s? I doubt it very much.

Their sudden emergence was due to the failure of too many members of the so-called Greatest Generation to inculcate in their children the values of honesty, respect, and hard work. How does one do that? By being clear about expectations and by setting limits on behavior — limits that are enforced swiftly, unequivocally, and sometimes with the palm of a hand. When children learn that they can “get away” with dishonesty, disrespect, and sloth, guess what? They become dishonest, disrespectful, and slothful. They give vent to their disrespect through whining, tantrum-like behavior, and even violence.

The leniency that’s being shown toward campus jerks — students, faculty, and administrators — is especially disgusting to this pre-boomer. University presidents need to grow backbones. Campus and municipal police should be out in force, maintaining order and arresting whoever fails to provide a “safe space” for a speaker who might offend their delicate sensibilities. Disruptive and violent behavior should be met with expulsions, firings, and criminal charges.

“My genes made me do it” is neither a valid explanation nor an acceptable excuse.


Related reading: There is a page on Judith Rich Harris’s website with a long list of links to reviews, broadcast commentary, and other discussions of The Nurture Assumption. It is to Harris’s credit that she links to negative as well as positive views of her work.

Institutional Bias

Arnold Kling:

On the question of whether Federal workers are overpaid relative to private sector workers, [Justin Fox] writes,

The Federal Salary Council, a government advisory body composed of labor experts and government-employee representatives, regularly finds that federal employees make about a third less than people doing similar work in the private sector. The conservative American Enterprise Institute and Heritage Foundation, on the other hand, have estimated that federal employees make 14 percent and 22 percent more, respectively, than comparable private-sector workers….

… Could you have predicted ahead of time which organization’s “research” would find a result favorable to Federal workers and which organization would find unfavorable results? Of course you could. So how do you sustain the belief that normative economics and positive economics are distinct from one another, that economic research cleanly separates facts from values?

I saw institutional bias at work many times in my career as an analyst at a tax-funded think-tank. My first experience with it came in the first project to which I was assigned. The issue at hand was a hot one in those days: whether the defense budget should be altered to increase the size of the Air Force’s land-based tactical air (tacair) forces while reducing the size of the Navy’s carrier-based counterpart. The Air Force’s think-tank had issued a report favorable to land-based tacair (surprise!), so the Navy turned to its think-tank (where I worked). Our report favored carrier-based tacair (surprise!).

How could two supposedly objective institutions study the same issue and come to opposite conclusions? Analytical fraud abetted by overt bias? No, that would be too obvious to the “neutral” referees in the Office of the Secretary of Defense. (Why “neutral”? Read this.)

Subtle bias is easily introduced when the issue is complex, as the tacair issue was. Where would tacair forces be required? What payloads would fighters and bombers carry? How easy would it be to set up land bases? How vulnerable would they be to an enemy’s land and air forces? How vulnerable would carriers be to enemy submarines and long-range bombers? How close to shore could carriers approach? How much would new aircraft, bases, and carriers cost to buy and maintain? What kinds of logistical support would they need, and how much would it cost? And on and on.

Hundreds, if not thousands, of assumptions underlay the results of the studies. Analysts at the Air Force’s think-tank chose those assumptions that favored the Air Force; analysts at the Navy’s think-tank chose those assumptions that favored the Navy.

Why? Not because analysts’ jobs were at stake; they weren’t. Not because the Air Force and Navy directed the outcomes of the studies; they didn’t. They didn’t have to because “objective” analysts are human beings who want “their side” to win. When you work for an institution you tend to identify with it; its success becomes your success, and its failure becomes your failure.

The same was true of the “neutral” analysts in the Office of the Secretary of Defense. They knew which way Mr. McNamara leaned on any issue, and they found themselves drawn to the assumptions that would justify his biases.

And so it goes. Bias is a rampant and ineradicable aspect of human striving. It’s ever-present in the political arena. The current state of affairs in Washington, D.C., is just the tip of the proverbial iceberg.

The prevalence and influence of bias in matters that affect hundreds of millions of Americans is yet another good reason to limit the power of government.

Not-So-Random Thoughts (XX)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

In “The Capitalist Paradox Meets the Interest-Group Paradox,” I quote from Frédéric Bastiat’s “What Is Seen and What Is Not Seen”:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

This might also be called the law of unintended consequences. It explains why so much “liberal” legislation is passed: the benefits are focused on a particular group and obvious (if overestimated); the costs are borne by taxpayers in general, many of whom fail to see that the sum of “liberal” legislation is a huge tax bill.

Ross Douthat understands:

[A] new paper, just released through the National Bureau of Economic Research, that tries to look at the Affordable Care Act in full. Its authors find, as you would expect, a substantial increase in insurance coverage across the country. What they don’t find is a clear relationship between that expansion and, again, public health. The paper shows no change in unhealthy behaviors (in terms of obesity, drinking and smoking) under Obamacare, and no statistically significant improvement in self-reported health since the law went into effect….

[T]he health and mortality data [are] still important information for policy makers, because [they] indicate[] that subsidies for health insurance are not a uniquely death-defying and therefore sacrosanct form of social spending. Instead, they’re more like other forms of redistribution, with costs and benefits that have to be weighed against one another, and against other ways to design a safety net. Subsidies for employer-provided coverage crowd out wages, Medicaid coverage creates benefit cliffs and work disincentives…. [“Is Obamacare a Lifesaver?” The New York Times, March 29, 2017]

So does Roy Spencer:

In a theoretical sense, we can always work to make the environment “cleaner”, that is, reduce human pollution. So, any attempts to reduce the EPA’s efforts will be viewed by some as just cozying up to big, polluting corporate interests. As I heard one EPA official state at a conference years ago, “We can’t stop making the environment ever cleaner”.

The question no one is asking, though, is “But at what cost?”

It was relatively inexpensive to design and install scrubbers on smokestacks at coal-fired power plants to greatly reduce sulfur emissions. The cost was easily absorbed, and electricity rates were not increased that much.

The same is not true of carbon dioxide emissions. Efforts to remove CO2 from combustion byproducts have been extremely difficult, expensive, and with little hope of large-scale success.

There is a saying: don’t let perfect be the enemy of good enough.

In the case of reducing CO2 emissions to fight global warming, I could discuss the science which says it’s not the huge problem it’s portrayed to be — how warming is only progressing at half the rate forecast by those computerized climate models which are guiding our energy policy; how there have been no obvious long-term changes in severe weather; and how nature actually enjoys the extra CO2, with satellites now showing a “global greening” phenomenon with its contribution to increases in agricultural yields.

But it’s the economics which should kill the Clean Power Plan and the alleged Social “Cost” of Carbon. Not the science.

There is no reasonable pathway by which we can meet more than about 20% of global energy demand with renewable energy…the rest must come mostly from fossil fuels. Yes, renewable energy sources are increasing each year, usually because rate payers or taxpayers are forced to subsidize them by the government or by public service commissions. But global energy demand is rising much faster than renewable energy sources can supply. So, for decades to come, we are stuck with fossil fuels as our main energy source.

The fact is, the more we impose high-priced energy on the masses, the more it will hurt the poor. And poverty is arguably the biggest threat to human health and welfare on the planet. [“Trump’s Rollback of EPA Overreach: What No One Is Talking About,” Roy Spencer, Ph.D. [blog], March 29, 2017]

*     *     *

I mentioned the Benedict Option in “Independence Day 2016: The Way Ahead,” quoting Bruce Frohnen in tacit agreement:

[Rod] Dreher has been writing a good deal, of late, about what he calls the Benedict Option, by which he means a tactical withdrawal by people of faith from the mainstream culture into religious communities where they will seek to nurture and strengthen the faithful for reemergence and reengagement at a later date….

The problem with this view is that it underestimates the hostility of the new, non-Christian society [e.g., this and this]….

Leaders of this [new, non-Christian] society will not leave Christians alone if we simply surrender the public square to them. And they will deny they are persecuting anyone for simply applying the law to revoke tax exemptions, force the hiring of nonbelievers, and even jail those who fail to abide by laws they consider eminently reasonable, fair, and just.

Exactly. John Horvat II makes the same point:

For [Dreher], the only response that still remains is to form intentional communities amid the neo-barbarians to “provide an unintentional political witness to secular culture,” which will overwhelm the barbarian by the “sheer humanity of Christian compassion, and the image of human dignity it honors.” He believes that setting up parallel structures inside society will serve to protect and preserve Christian communities under the new neo-barbarian dispensation. We are told we should work with the political establishment to “secure and expand the space within which we can be ourselves and our own institutions” inside an umbrella of religious liberty.

However, barbarians don’t like parallel structures; they don’t like structures at all. They don’t co-exist well with anyone. They don’t keep their agreements or respect religious liberty. They are not impressed by the holy lives of the monks whose monastery they are plundering. You can trust barbarians to always be barbarians. [“Is the Benedict Option the Answer to Neo-Barbarianism?” Crisis Magazine, March 29, 2017]

As I say in “The Authoritarianism of Modern Liberalism, and the Conservative Antidote,”

Modern liberalism attracts persons who wish to exert control over others. The stated reasons for exerting control amount to “because I know better” or “because it’s good for you (the person being controlled)” or “because ‘social justice’ demands it.”

Leftists will not countenance a political arrangement that allows anyone to escape the state’s grasp — unless, of course, the state is controlled by the “wrong” party, in which case leftists (or many of them) would like to exercise their own version of the Benedict Option. See “Polarization and De Facto Partition.”

*     *     *

Theodore Dalrymple understands the difference between terrorism and accidents:

Statistically speaking, I am much more at risk of being killed when I get into my car than when I walk in the streets of the capital cities that I visit. Yet this fact, no matter how often I repeat it, does not reassure me much; the truth is that one terrorist attack affects a society more deeply than a thousand road accidents….

Statistics tell me that I am still safe from it, as are all my fellow citizens, individually considered. But it is precisely the object of terrorism to create fear, dismay, and reaction out of all proportion to its volume and frequency, to change everyone’s way of thinking and behavior. Little by little, it is succeeding. [“How Serious Is the Terrorist Threat?” City Journal, March 26, 2017]

Which reminds me of several things I’ve written, beginning with this entry from “Not-So-Random Thoughts (VI)”:

Cato’s loony libertarians (on matters of defense) once again trot out Herr Doktor Professor John Mueller. He writes:

We have calculated that, for the 12-year period from 1999 through 2010 (which includes 9/11, of course), there was one chance in 22 million that an airplane flight would be hijacked or otherwise attacked by terrorists. (“Serial Innumeracy on Homeland Security,” Cato@Liberty, July 24, 2012)

Mueller’s “calculation” consists of a recitation of known terrorist attacks pre-Benghazi and speculation about the status of Al-Qaeda. Note to Mueller: It is the unknown unknowns that kill you. I refer Herr Doktor Professor to “Riots, Culture, and the Final Showdown” and “Mission Not Accomplished.”

See also my posts “Getting It All Wrong about the Risk of Terrorism” and “A Skewed Perspective on Terrorism.”

*     *     *

This is from my post, “A Reflection on the Greatest Generation”:

The Greatest tried to compensate for their own privations by giving their children what they, the parents, had never had in the way of material possessions and “fun”. And that is where the Greatest Generation failed its children — especially the Baby Boomers — in large degree. A large proportion of Boomers grew up believing that they should have whatever they want, when they want it, with no strings attached. Thus many of them divorced, drank, and used drugs almost wantonly….

The Greatest Generation — having grown up believing that FDR was a secular messiah, and having learned comradeship in World War II — also bequeathed us governmental self-indulgence in the form of the welfare-regulatory state. Meddling in others’ affairs seems to be a predilection of the Greatest Generation, a predilection that the Millennials may be shrugging off.

We owe the Greatest Generation a great debt for its service during World War II. We also owe the Greatest Generation a reprimand for the way it raised its children and kowtowed to government. Respect forbids me from delivering the reprimand, but I record it here, for the benefit of anyone who has unduly romanticized the Greatest Generation.

There’s more in “The Spoiled Children of Capitalism”:

This is from Tim [of Angle’s] “The Spoiled Children of Capitalism”:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it….

I have long shared Tim’s assessment of the Boomer generation. Among the corroborating data are my sister and my wife’s sister and brother — Boomers all….

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs.”…

Now comes this:

According to writer and venture capitalist Bruce Gibney, baby boomers are a “generation of sociopaths.”

In his new book, he argues that their “reckless self-indulgence” is in fact what set the example for millennials.

Gibney describes boomers as “acting without empathy, prudence, or respect for facts – acting, in other words, as sociopaths.”

And he’s not the first person to suggest this.

Back in 1976, journalist Tom Wolfe dubbed the young adults then coming of age the “Me Generation” in the New York Times, which is a term now widely used to describe millennials.

But the baby boomers grew up in a very different climate to today’s young adults.

When the generation born after World War Two were starting to make their way in the world, it was a time of economic prosperity.

“For the first half of the boomers particularly, they came of age in a time of fairly effortless prosperity, and they were conditioned to think that everything gets better each year without any real effort,” Gibney explained to The Huffington Post.

“So they really just assume that things are going to work out, no matter what. That’s unhelpful conditioning.

“You have 25 years where everything just seems to be getting better, so you tend not to try as hard, and you have much greater expectations about what society can do for you, and what it owes you.”…

Gibney puts forward the argument that boomers – specifically white, middle-class ones – tend to have genuine sociopathic traits.

He backs up his argument with mental health data which appears to show that this generation have more anti-social characteristics than others – lack of empathy, disregard for others, egotism and impulsivity, for example. [Rachel Hosie, “Baby Boomers Are a Generation of Sociopaths,” Independent, March 23, 2017]

That’s what I said.

More about Intelligence

Do genes matter? You betcha! See geneticist Gregory Cochran’s “Everything Is Different but the Same” and “Missing Heritability — Found?” (Useful Wikipedia articles for explanations of terms used by Cochran: “Genome-wide association study,” “Genetic load,” and “Allele.”) Snippets:

Another new paper finds that the GWAS hits for IQ – largely determined in Europeans – don’t work in people of African descent.

*     *     *

There is an interesting new paper out on genetics and IQ. The claim is that they have found the missing heritability – in rare variants, generally different in each family.

Cochran, in typical fashion, ends the second item with a bombastic put-down of the purported dysgenic trend, about which I’ve written here.

Psychologist James Thompson seems to put stock in the dysgenic trend. See, for example, his post “The Woodley Effect”:

[W]e could say that the Flynn Effect is about adding fertilizer to the soil, whereas the Woodley Effect is about noting the genetic quality of the plants. In my last post I described the current situation thus: The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

But Thompson joins Cochran in his willingness to accept what the data show, namely, that there are strong linkages between race and intelligence. See, for example, “County IQs and Their Consequences” (and my related post). Thompson writes:

[I]n social interaction it is not always either possible or desirable to make intelligence estimates. More relevant is to look at technical innovation rates, patents, science publications and the like…. If there were no differences [in such] measures, then the associations between mental ability and social outcomes would be weakened, and eventually disconfirmed. However, the general link between national IQs and economic outcomes holds up pretty well….

… Smart fraction research suggests that the impact of the brightest persons in a national economy has a disproportionately positive effect on GDP. Rindermann and I have argued, following others, that the brightest 5% of every country make the greatest contribution by far, though of course many others of lower ability are required to implement the discoveries and strategies of the brightest.

Though Thompson doesn’t directly address race and intelligence in “10 Replicants in Search of Fame,” he leaves no doubt about the dominance of genes over environment in the determination of traits; for example:

[A] review of the world’s literature on intelligence that included 10,000 pairs of twins showed identical twins to be significantly more similar than fraternal twins (twin correlations of about .85 and .60, respectively), with corroborating results from family and adoption studies, implying significant genetic influence….

Some traits, such as individual differences in height, yield heritability as high as 90%. Behavioural traits are less reliably measured than physical traits such as height, and error of measurement contributes to nonheritable variance….

[A] review of 23 twin studies and 12 family studies confirmed that anxiety and depression are correlated entirely for genetic reasons. In other words, the same genes affect both disorders, meaning that from a genetic perspective they are the same disorder. [I have personally witnessed this effect: TEA.]…

The heritability of intelligence increases throughout development. This is a strange and counter-intuitive finding: one would expect the effects of learning to accumulate with experience, increasing the strength of the environmental factor, but the opposite is true….

[M]easures of the environment widely used in psychological science—such as parenting, social support, and life events—can be treated as dependent measures in genetic analyses….

In sum, environments are partly genetically-influenced niches….

People to some extent make their own environments….

[F]or most behavioral dimensions and disorders, it is genetics that accounts for similarity among siblings.
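Those twin correlations imply substantial heritability on their face. A rough check (my arithmetic, using the classical Falconer decomposition of the twin design, which Thompson doesn’t spell out here) goes like this:

$$h^2 = 2(r_{MZ} - r_{DZ}) = 2(0.85 - 0.60) = 0.50$$

$$c^2 = 2r_{DZ} - r_{MZ} = 0.35, \qquad e^2 = 1 - r_{MZ} = 0.15$$

That is, roughly half of the variance attributable to genes, about a third to shared environment, and the rest to unique environment and measurement error, consistent with the “significant genetic influence” that Thompson reports.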

In several of the snippets quoted above, Thompson is referring to a phenomenon known as genetic confounding, which is to say that genetic effects are often mistaken for environmental effects. Brian Boutwell and JC Barnes address an aspect of genetic confounding in “Is Crime Genetic? Scientists Don’t Know Because They’re Afraid to Ask.” A small sample:

The effects of genetic differences make some people more impulsive and shortsighted than others, some people more healthy or infirm than others, and, despite how uncomfortable it might be to admit, genes also make some folks more likely to break the law than others.

John Ray addresses another aspect of genetic confounding in “Blacks, Whites, Genes, and Disease,” where he comments about a recent article in the Journal of the American Medical Association:

It says things that the Left do not want to hear. But it says those things in verbose academic language that hides the point. So let me translate into plain English:

* The poor get more illness and die younger
* Blacks get more illness than whites and die younger
* Part of that difference is traceable to genetic differences between blacks and whites.
* But environmental differences — such as education — explain more than genetic differences do
* Researchers often ignore genetics for ideological reasons
* You don’t fully understand what is going on in an illness unless you know about any genetic factors that may be at work.
* Genetics research should pay more attention to blacks

Most of those things I have been saying for years — with one exception:

They find that environmental factors have greater effect than genetics. But they do that by making one huge and false assumption. They assume that education is an environmental factor. It is not. Educational success is hugely correlated with IQ, which is about two thirds genetic. High IQ people stay in the educational system for longer because they are better at it, whereas low IQ people (many of whom are blacks) just can’t do it at all. So if we treated education as a genetic factor, environmental differences would fade away as causes of disease. As Hans Eysenck once said to me in a casual comment: “It’s ALL genetic”. That’s not wholly true but it comes close.

So the recommendation of the study — that we work on improving environmental factors that affect disease — is unlikely to achieve much. They are aiming their gun towards where the rabbit is not. If it were an actual rabbit, it would probably say: “What’s up Doc?”

Some problems are unfixable but knowing which problems they are can help us to avoid wasting resources on them. The black/white gap probably has no medical solution.

I return to James Thompson for a pair of less incendiary items. “The Secret in Your Eyes” points to a link between intelligence and pupil size. In “Group IQ Doesn’t Exist,” Thompson points out the fatuousness of the belief that a group is somehow more intelligent than the smartest member of the group. As Thompson puts it:

So, if you want a problem solved, don’t form a team. Find the brightest person and let [him] work on it. Placing [him] in a team will, on average, reduce [his] productivity. My advice would be: never form a team if there is one person who can sort out the problem.

Forcing the brightest person to act as a member of a team often results in the suppression of that person’s ideas by the (usually) more extroverted and therefore less-intelligent members of the team.

Added 04/05/17: James Thompson issues a challenge to IQ-deniers in “IQ Does Not Exist (Lead Poisoning Aside)”:

[T]his study shows how a neuro-toxin can have an effect on intelligence, of similar magnitude to low birth weight….

[I]f someone tells you they do not believe in intelligence reply that you wish them well, but that if they have children they should keep them well away from neuro-toxins because, among other things, they reduce social mobility.

*     *     *

Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Round Up the Usual Suspects
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race
Affirmative Action Comes Home to Roost
The IQ of Nations
Race and Social Engineering

Mugged by Non-Reality

A wise man said that a conservative is a liberal who has been mugged by reality. Thanks to Malcolm Pollack, I’ve just learned that a liberal is a conservative whose grasp of reality has been erased, literally.

Actually, this is unsurprising news (to me). I have pointed out many times that the various manifestations of liberalism — from stifling regulation to untrammeled immigration — arise from the cosseted beneficiaries of capitalism (e.g., pundits, politicians, academicians, students) who are far removed from the actualities of producing real things for real people. This has turned their brains into a kind of mush that is fit only for hatching unrealistic but costly schemes which rest upon a skewed vision of human nature.

Daylight Saving Time Doesn’t Kill…

…it’s “springing forward” in March that kills.

There’s a hue and cry about daylight saving time (that’s “saving” not “savings”). The main complaint seems to be the stress that results from moving clocks ahead in March:

Springing forward may be hazardous to your health. The Monday following the start of daylight saving time (DST) is a particularly bad one for heart attacks, traffic accidents, workplace injuries and accidental deaths. Now that most Americans have switched their clocks an hour ahead, studies show many will suffer for it.

Most Americans slept about 40 minutes less than normal on Sunday night, according to a 2009 study published in the Journal of Applied Psychology…. Since sleep is important for maintaining the body’s daily performance levels, much of society is broadly feeling the impact of less rest, which can include forgetfulness, impaired memory and a lower sex drive, according to WebMD.

One of the most striking effects of this annual shift: Last year, Colorado researchers reported finding a 25 percent increase in the number of heart attacks that occur on the Monday after DST starts, as compared with a normal Monday…. A cardiologist in Croatia recorded about twice as many heart attacks as expected during that same day, and researchers in Sweden have also witnessed a spike in heart attacks in the week following the time adjustment, particularly among those who were already at risk.

Workplace injuries are more likely to occur on that Monday, too, possibly because workers are more susceptible to a loss of focus due to too little sleep. Researchers at Michigan State University used over 20 years of data from the Mine Safety and Health Administration to determine that three to four more miners than average sustain a work-related injury on the Monday following the start of DST. Those injuries resulted in 2,649 lost days of work, which is a 68 percent increase over the hours lost from injuries on an average day. The team found no effects following the nation’s one-hour shift back to standard time in the fall….

There’s even more bad news: Drivers are more likely to be in a fatal traffic accident on DST’s first Monday, according to a 2001 study in Sleep Medicine. The authors analyzed 21 years of data on fatal traffic accidents in the U.S. and found that, following the start of DST, drivers are in 83.5 accidents as compared with 78.2 on the average Monday. This phenomenon has also been recorded in Canadian drivers and British motorists.

If all that wasn’t enough, a researcher from the University of British Columbia who analyzed three years of data on U.S. fatalities reported that accidental deaths of any kind are more likely in the days following a spring forward. Their 1996 analysis showed a 6.5 percent increase, which meant that about 200 more accidental deaths occurred immediately after the start of DST than would typically occur in a given period of the same length.

I’m convinced. But the solution to the problem isn’t to get rid of DST. No, the solution is to get rid of standard time and use DST year-round.

I’m not arguing for year-round DST from an economic standpoint. The evidence about the economic advantages of DST is inconclusive.

I’m arguing for year-round DST as a way to eliminate “spring forward” distress and to enjoy an extra hour of daylight in the winter.

Don’t you enjoy those late summer sunsets? I sure do, and a lot of other people seem to enjoy them, too. That’s why daylight saving time won’t be abolished.

But if you love those late summer sunsets, you should also enjoy an extra hour of daylight at the end of a drab winter day. I know that I would. And it’s not as if you’d miss anything if the sun rises an hour later in the winter. Even with standard time, most working people and students have to be up and about before sunrise in winter, even though sunrise comes an hour earlier than it would with DST.

How would year-round DST affect you? The table that originally appeared here gave the times of sunrise and sunset on the longest and shortest days of 2017 for nine major cities, north to south and west to east.
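The entries are easy to reproduce for any city. Here is a minimal sketch; the third-party astral package and the two example cities are my choices, not features of the original table:

```python
# Sunrise and sunset on the longest and shortest days of 2017.
# Assumes the third-party "astral" package (pip install astral) and Python 3.9+.
import datetime
from zoneinfo import ZoneInfo
from astral import LocationInfo
from astral.sun import sun

cities = [
    LocationInfo("Seattle", "USA", "America/Los_Angeles", 47.61, -122.33),
    LocationInfo("Austin", "USA", "America/Chicago", 30.27, -97.74),
]

for city in cities:
    tz = ZoneInfo(city.timezone)
    for d in (datetime.date(2017, 6, 21), datetime.date(2017, 12, 21)):
        s = sun(city.observer, date=d, tzinfo=tz)  # dict: dawn, sunrise, noon, sunset, dusk
        print(f'{city.name} {d}: sunrise {s["sunrise"]:%H:%M}, sunset {s["sunset"]:%H:%M}')
# Under year-round DST, the winter times would simply shift an hour later.
```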

I report, you decide. If it were up to me, the decision would be year-round DST.

Thoughts for the Day

Excerpts of recent correspondence.

Robots, and their functional equivalents in specialized AI systems, can either replace people or make people more productive. I suspect that the latter has been true in the realm of medicine — so far, at least. But I have seen reportage of robotic units that are beginning to perform routine, low-level work in hospitals. So, as usual, the first people to be replaced will be those with rudimentary skills, not highly specialized training. Will it go on from there? Maybe, but the crystal ball is as cloudy as an old-time London fog.

In any event, I don’t believe that automation is inherently a job-killer. The real job-killer consists of government programs that subsidize non-work — early retirement under Social Security, food stamps and other forms of welfare, etc. Automation has been in progress for eons, and with a vengeance since the second industrial revolution. But, on balance, it hasn’t killed jobs. It just pushes people toward new and different jobs that fit the skills they have to offer. I expect nothing different in the future, barring government programs aimed at subsidizing the “victims” of technological displacement.

*      *      *

It’s civil war by other means (so far): David Wasserman, “Purple America Has All but Disappeared” (The New York Times, March 8, 2017).

*      *      *

I know that most of what I write (even the non-political stuff) has a combative edge, and that I’m therefore unlikely to persuade people who disagree with me. I do it my way for two reasons. First, I’m too old to change my ways, and I’m not going to try. Second, in a world that’s seemingly dominated by left-wing ideas, it’s just plain fun to attack them. If what I write happens to help someone else fight the war on leftism — or if it happens to make a young person re-think a mindless commitment to leftism — that’s a plus.

*     *     *

I am pessimistic about the likelihood of cultural renewal in America. The populace is too deeply saturated with left-wing propaganda, which is injected from kindergarten through graduate school, with constant reinforcement via the media and popular culture. There are broad swaths of people — especially in low-income brackets — whose lives revolve around mindless escape from the mundane via drugs, alcohol, promiscuous sex, etc. Broad swaths of the educated classes have abandoned erudition and contemplation and taken up gadgets and entertainment.

The only hope for conservatives is to build their own “bubbles,” like those of effete liberals, and live within them. Even that will prove difficult as long as government (especially the Supreme Court) persists in storming the ramparts in the name of “equality” and “self-creation.”

*     *     *

I correlated Austin’s average temperatures in February and August. Here are the correlation coefficients for the following periods:

1854-2016 = 0.001
1875-2016 = -0.007
1900-2016 = 0.178
1925-2016 = 0.161
1950-2016 = 0.191
1975-2016 = 0.126

Of these correlations, only the one for 1900-2016 is statistically significant at the 0.05 level (less than a 5-percent chance of a random relationship). The correlations for 1925-2016 and 1950-2016 are fairly robust, and almost significant at the 0.05 level. The relationship for 1975-2016 is statistically insignificant. I conclude that there’s a positive relationship between February and August temperatures, but a weak one. A warm winter doesn’t necessarily presage an extra-hot summer in Austin.
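For anyone who wants to replicate the test, here is a minimal Python sketch. The file name and column layout are my assumptions; any export of Austin’s monthly average temperatures would do:

```python
# Correlate February and August average temperatures over several periods.
# Assumes a CSV with one row per year and columns: year, feb_avg, aug_avg.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("austin_monthly_temps.csv")  # hypothetical file name

for start in (1854, 1875, 1900, 1925, 1950, 1975):
    sub = df[df["year"].between(start, 2016)]
    r, p = pearsonr(sub["feb_avg"], sub["aug_avg"])
    # p < 0.05 means less than a 5-percent chance of a correlation
    # this large arising if the true relationship were zero.
    print(f"{start}-2016: r = {r:.3f}, p = {p:.3f}")
```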

Is Consciousness an Illusion?

Scientists seem to have pinpointed the physical source of consciousness. But the execrable Daniel C. Dennett, for whom science is God, hasn’t read the memo. Dennett argues in his latest book, From Bacteria to Bach and Back: The Evolution of Minds, that consciousness is an illusion.

Another philosopher, Thomas Nagel, weighs in with a dissenting review of Dennett’s book. (Nagel is better than Dennett, but that’s faint praise.) Nagel’s review, “Is Consciousness an Illusion?,” appears in The New York Review of Books (March 9, 2017). Here are some excerpts:

According to the manifest image, Dennett writes, the world is

full of other people, plants, and animals, furniture and houses and cars…and colors and rainbows and sunsets, and voices and haircuts, and home runs and dollars, and problems and opportunities and mistakes, among many other such things. These are the myriad “things” that are easy for us to recognize, point to, love or hate, and, in many cases, manipulate or even create…. It’s the world according to us.

According to the scientific image, on the other hand, the world

is populated with molecules, atoms, electrons, gravity, quarks, and who knows what else (dark energy, strings? branes?)….

In an illuminating metaphor, Dennett asserts that the manifest image that depicts the world in which we live our everyday lives is composed of a set of user-illusions,

like the ingenious user-illusion of click-and-drag icons, little tan folders into which files may be dropped, and the rest of the ever more familiar items on your computer’s desktop. What is actually going on behind the desktop is mind-numbingly complicated, but users don’t need to know about it, so intelligent interface designers have simplified the affordances, making them particularly salient for human eyes, and adding sound effects to help direct attention. Nothing compact and salient inside the computer corresponds to that little tan file-folder on the desktop screen.

He says that the manifest image of each species is “a user-illusion brilliantly designed by evolution to fit the needs of its users.” In spite of the word “illusion” he doesn’t wish simply to deny the reality of the things that compose the manifest image; the things we see and hear and interact with are “not mere fictions but different versions of what actually exists: real patterns.” The underlying reality, however, what exists in itself and not just for us or for other creatures, is accurately represented only by the scientific image—ultimately in the language of physics, chemistry, molecular biology, and neurophysiology….

You may well ask how consciousness can be an illusion, since every illusion is itself a conscious experience—an appearance that doesn’t correspond to reality. So it cannot appear to me that I am conscious though I am not: as Descartes famously observed, the reality of my own consciousness is the one thing I cannot be deluded about….

According to Dennett, however, the reality is that the representations that underlie human behavior are found in neural structures of which we know very little. And the same is true of the similar conception we have of our own minds. That conception does not capture an inner reality, but has arisen as a consequence of our need to communicate to others in rough and graspable fashion our various competencies and dispositions (and also, sometimes, to conceal them)….

The trouble is that Dennett concludes not only that there is much more behind our behavioral competencies than is revealed to the first-person point of view—which is certainly true—but that nothing whatever is revealed to the first-person point of view but a “version” of the neural machinery….

I am reminded of the Marx Brothers line: “Who are you going to believe, me or your lying eyes?” Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”

Nagel’s counterargument would have been more compelling if he had relied on a simple metaphor like this one: Most drivers can’t describe in any detail the process by which an automobile converts the potential energy of gasoline to the kinetic energy that’s produced by the engine and then transmitted eventually to the automobile’s drive wheels. Instead, most drivers simply rely on the knowledge that pushing the start button will start the car. That knowledge may be shallow, but it isn’t illusory. If it were, an automobile would be a useless hulk sitting in the driver’s garage.

Some tough questions are in order, too. If consciousness is an illusion, where does it come from? Dennett is an out-and-out physicalist and strident atheist. It therefore follows that Dennett can’t believe in consciousness (the manifest image) as a free-floating spiritual entity that’s disconnected from physical reality (the scientific image). It must, in fact, be a representation of physical reality, even if a weak and flawed one.

Looked at another way, consciousness is the gateway to the scientific image. It is only through the deliberate, reasoned, fact-based application of consciousness that scientists have been able to roll back the mysteries of the physical world and improve the manifest image so that it more nearly resembles the scientific image. The gap will never be closed, of course. Even the most learned of human beings have only a tenuous grasp of physical reality in all of its myriad aspects. Nor will anyone ever understand what physical reality “really is” — it’s beyond apprehension and description. But that doesn’t negate the symbiosis of physical reality and consciousness.

*     *     *

Related posts:
Debunking “Scientific Objectivity”
A Non-Believer Defends Religion
Evolution as God?
The Greatest Mystery
What Is Truth?
The Improbability of Us
The Atheism of the Gaps
Demystifying Science
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
The Glory of the Human Mind
Mind, Cosmos, and Consciousness
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Words Fail Us
Hayek’s Anticipatory Account of Consciousness

Fine-Tuning in a Wacky Wrapper

The Unz Review hosts columnists who hold a wide range of views, including whacko-bizarro-conspiracy-theory-nut-job ones. Case in point: Kevin Barrett, who recently posted a review of David Ray Griffin’s God Exists But Gawd Does Not: From Evil to the New Atheism to Fine Tuning. Some things said by Barrett in the course of his review suggest that Griffin, too, holds whacko-bizarro-conspiracy-theory-nut-job views; for example:

In 2004 he published The New Pearl Harbor — which still stands as the single most important work on 9/11 — and followed it up with more than ten books expanding on his analysis of the false flag obscenity that shaped the 21st century.

Further investigation — a trip to Wikipedia — tells me that Griffin believes there is

a prima facie case for the contention that there must have been complicity from individuals within the United States, and [that he] joined the 9/11 Truth Movement in calling for an extensive investigation from the United States media, Congress and the 9/11 Commission. At this time, he set about writing his first book on the subject, which he called The New Pearl Harbor: Disturbing Questions About the Bush Administration and 9/11 (2004).

Part One of the book looks at the events of 9/11, discussing each flight in turn and also the behaviour of President George W. Bush and his Secret Service protection. Part Two examines 9/11 in a wider context, in the form of four “disturbing questions.” David Ray Griffin discussed this book and the claims within it in an interview with Nick Welsh, reported under the headline Thinking Unthinkable Thoughts: Theologian Charges White House Complicity in 9/11 Attack….

Griffin’s second book on the subject was a direct critique of the 9/11 Commission Report, called The 9/11 Commission Report: Omissions And Distortions (2005). Griffin’s article The 9/11 Commission Report: A 571-page Lie summarizes this book, presenting 115 instances of either omissions or distortions of evidence he claims are in the report, stating that “the entire Report is constructed in support of one big lie: that the official story about 9/11 is true.”

In his next book, Christian Faith and the Truth Behind 9/11: A Call to Reflection and Action (2006), he summarizes some of what he believes is evidence for government complicity and reflects on its implications for Christians. The Presbyterian Publishing Corporation, publishers of the book, noted that Griffin is a distinguished theologian and praised the book’s religious content, but said, “The board believes the conspiracy theory is spurious and based on questionable research.”

And on and on and on. The moral of which is this: If you already “know” the “truth,” it’s easy to weave together factual tidbits that seem to corroborate it. It’s an old game that any number of persons can play; for example: Mrs. Lincoln hired John Wilkes Booth to kill Abe; Woodrow Wilson was behind the sinking of the Lusitania, which “forced” him to ask for a declaration of war against Germany; FDR knew about Japan’s plans to bomb Pearl Harbor but did nothing so that he could then have a roundly applauded excuse to ask for a declaration of war on Japan; LBJ ordered the assassination of JFK; etc. Some of those bizarre plots have been “proved” by recourse to factual tidbits. I’ve no doubt that all of them could be “proved” in that way.

If that is so, why am I writing about Barrett’s review of Griffin’s book? Because in the midst of Barrett’s off-kilter observations (e.g., “the Nazi holocaust, while terrible, wasn’t as incomparably horrible as it has been made out to be”) there’s a tantalizing passage:

Griffin’s Chapter 14, “Teleological Order,” provides the strongest stand-alone rational-empirical argument for God’s existence, one that should convince any open-minded person who is willing to invest some time in thinking about it and investigating the cited sources. This argument rests on the observation that at least 26 of the fundamental constants discovered by physicists appear to have been “fine tuned” to produce a universe in which complex, intelligent life forms could exist. A very slight variation in any one of these 26 numbers (including the strong force, electromagnetism, gravity, the mass difference between protons and neutrons, and many others) would produce a vastly less complex, rich, interesting universe, and destroy any possibility of complex life forms or intelligent observers. In short, the universe is indeed a miracle, in the sense of something indescribably wonderful and almost infinitely improbable. The claim that it could arise by chance (as opposed to intelligent design) is ludicrous.

Even the most dogmatic atheists who are familiar with the scientific facts admit this. Their only recourse is to embrace the multiple-universes interpretation of quantum physics, claim that there are almost infinitely many actual universes (virtually all of them uninteresting and unfit for life), and assert that we just happen to have gotten unbelievably lucky by finding ourselves in the one-universe-out-of-infinity-minus-one with all of the constants perfectly fine-tuned for our existence. But, they argue, we should not be grateful for this almost unbelievable luck — which is far more improbable than winning hundreds of multi-million-dollar lottery jackpots in a row. For our existence in an amazingly, improbably-wonderful-for-us universe is just a tautology, since we couldn’t possibly be in any of the vast, vast, vast majority of universes that we couldn’t possibly be in.

Griffin gently and persuasively points out that the multiple-universes defense of atheism is riddled with absurdities and inconsistencies. Occam’s razor definitively indicates that by far the best explanation of the facts is that the universe was created not just by an intelligent designer, but by one that must be considered almost supremely intelligent as well as almost supremely creative: a creative intelligence as far beyond Einstein-times-Leonardo-to-the-Nth-power as those great minds were beyond that of a common slug.

Fine-tuning is not a good argument for God’s existence. Here is a good one:

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

Now return to Barrett’s (Griffin’s?) central claim, quoted above:

Occam’s razor definitively indicates that by far the best explanation of the facts is that the universe was created not just by an intelligent designer, but by one that must be considered almost supremely intelligent as well as almost supremely creative: a creative intelligence as far beyond Einstein-times-Leonardo-to-the-Nth-power as those great minds were beyond that of a common slug.

Whoa! Occam’s razor indicates nothing of the kind:

Occam’s razor is used as a heuristic technique (discovery tool) to guide scientists in the development of theoretical models, rather than as an arbiter between published models. In the scientific method, Occam’s razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.

Barrett’s (Griffin’s?) hypothesis about the nature of the supremely intelligent being is unduly complicated. Not that the existence of God is a testable (falsifiable) hypothesis. It’s just a logical necessity, and should be left at that.

Scott Adams Understands Probability

A probability expresses the observed frequency of the occurrence of a well-defined event for a large number of repetitions of the event, where each repetition is independent of the others (i.e., random). Thus the probability that a fair coin will come up heads in, say, 100 tosses is approximately 0.5; that is, it will come up heads approximately 50 percent of the time. (In the penultimate paragraph of this post, I explain why I emphasize approximately.)
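In symbols (my notation, not anything in Adams’s post or the original), the relative-frequency idea is simply

$$\Pr(\text{heads}) \approx \frac{n_H}{N} \quad \text{for large } N,$$

where n_H is the number of heads observed in N independent tosses of the same coin.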

If a coin is tossed 100 times, what is the probability that it will come up heads on the 101st toss? There is no probability for that event because it hasn’t occurred yet. The coin will come up heads or tails, and that’s all that can be said about it.

Scott Adams, writing about the probability of being killed by an immigrant, puts it this way:

The idea that we can predict the future based on the past is one of our most persistent illusions. It isn’t rational (for the vast majority of situations) and it doesn’t match our observations. But we think it does.

The big problem is that we have lots of history from which to cherry-pick our predictions about the future. The only reason history repeats is because there is so much of it. Everything that happens today is bound to remind us of something that happened before, simply because lots of stuff happened before, and our minds are drawn to analogies.

…If you can rigorously control the variables of your experiment, you can expect the same outcomes almost every time [emphasis added].

You can expect a given outcome (e.g., heads) to occur approximately 50 percent of the time if you toss a coin a lot of times. But you won’t know the actual frequency (probability) until you measure it; that is, after the fact.

Here’s why. The statement that heads has a probability of 50 percent is a mathematical approximation, given that there are only two possible outcomes of a coin toss: heads or tails. While writing this post I used the RANDBETWEEN function of Excel 2016 to simulate ten 100-toss games of heads or tails, with the following results (number of heads per game): 55, 49, 49, 43, 43, 54, 47, 47, 53, 52. Not a single game yielded exactly 50 heads, and heads came up 492 times (not 500) in 1,000 tosses.
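Here is a minimal Python analogue of that Excel exercise (RANDBETWEEN(0,1) becomes randint(0, 1); the seed is arbitrary, so the counts will differ from mine):

```python
# Simulate ten 100-toss games of heads or tails, as in the Excel exercise.
import random

random.seed(2017)  # arbitrary; remove for a fresh run each time

games = [sum(random.randint(0, 1) for _ in range(100)) for _ in range(10)]
print("Heads per 100-toss game:", games)
print("Total heads in 1,000 tosses:", sum(games))
# Expect every game to hover near 50 heads and the total near 500,
# but almost never exactly, which is the point of the post.
```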

What is the point of a probability statement? What is it good for? It lets you know what to expect over the long run, for a large number of repetitions of a strictly defined event. Change the definition of the event, even slightly, and you can “probably” kiss its probability goodbye.

*     *     *

Related posts:
Fooled by Non-Randomness
Randomness Is Over-Rated
Beware the Rare Event
Some Thoughts about Probability
My War on the Misuse of Probability
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming

Not Just for Baseball Fans

I have substantially revised “Bigger, Stronger, and Faster — But Not Quicker?” I set out to test Dr. Michael Woodley’s hypothesis that reaction times have slowed since the Victorian era:

It seems to me that if Woodley’s hypothesis has merit, it ought to be confirmed by the course of major-league batting averages over the decades. Other things being equal, quicker reaction times ought to produce higher batting averages. Of course, there’s a lot to hold equal, given the many changes in equipment, playing conditions, player conditioning, “style” of the game (e.g., greater emphasis on home runs), and other key variables over the course of more than a century.

I conclude that my analysis

says nothing definitive about reaction times, even though it sheds a lot of light on the relative hitting prowess of American League batters over the past 116 years. (I’ll have more to say about that in a future post.)

It’s been great fun but it was just one of those things.

Sandwiched between those statements you’ll find much statistical meat (about baseball) to chew on.
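For the curious, the raw tabulation behind such an analysis is straightforward. A minimal sketch, assuming the Lahman database’s Batting.csv as the data source (my choice; the revised post describes the analysis, not its code):

```python
# American League batting average by season: total hits / total at-bats.
import pandas as pd

bat = pd.read_csv("Batting.csv")  # Lahman database export (assumed)
al = bat[(bat["lgID"] == "AL") & (bat["yearID"] >= 1901)]
league_ba = al.groupby("yearID").apply(lambda g: g["H"].sum() / g["AB"].sum())
print(league_ba.round(3))
# Testing the reaction-time hypothesis would then require adjusting these
# raw averages for equipment, rules, conditioning, and style of play.
```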