Science and Understanding

Altruism, One More Time

I am reading and generally enjoying Darwinian Fairytales: Selfish Genes, Errors of Heredity and Other Fables of Evolution by the late Australian philosopher, David Stove. I say generally enjoying because in Essay 6, which I just finished reading, Stove goes off the rails.

The title of Essay 6 is “Tax and the Selfish Girl, Or Does ‘Altruism’ Need Inverted Commas?”. Stove expends many words in defense of altruism as it is commonly thought of: putting others before oneself. He also expends some words (though not many) in defense of taxation as an altruistic act.

Stove, whose writing is refreshingly informal instead of academically stilted, is fond of calling things “ridiculous” and “absurd”. Well, Essay 6 is both of those things. Stove’s analysis of altruism is circular: He parades examples of what he considers altruistic conduct, and says that because there is such conduct there must be altruism.

His target is a position that I have taken, and still hold despite Essay 6. My first two essays about altruism are here and here. I will quote a third essay, in which I address philosopher Jason Brennan’s defense of altruism:

What about Brennan’s assertion that he is genuinely altruistic because he doesn’t merely want to avoid bad feelings, but wants to help his son for his son’s sake? That’s called empathy. But empathy is egoistic. Even strong empathy — the ability to “feel” another person’s pain or anguish — is “felt” by the empathizer. It is the empathizer’s response to the other person’s pain or anguish.

Brennan inadvertently makes that point when he invokes sociopathy:

Sociopaths don’t care about other people for their own sake — they view them merely as instruments. Sociopaths don’t feel guilt for failing to help others.

The difference between a sociopath and a “normal” person is found in caring (feeling). But caring (feeling) is something that the I does — or fails to do, if the I is a sociopath. I = ego:

the “I” or self of any person; a thinking, feeling, and conscious being, able to distinguish itself from other selves.

I am not deprecating the kind of laudable act that is called altruistic. I am simply trying to point out what should be an obvious fact: Human beings necessarily act in their own interests, though their own interests often coincide with the interests of others for emotional reasons (e.g., love, empathy), as well as practical ones (e.g., loss of income or status because of the death of a patron).

It should go without saying that the world would be a better place if it had fewer sociopaths in it. Voluntary, mutually beneficial relationships are more than merely transactional; they thrive on the mutual trust and respect that arise from social bonds, including the bonds of love and affection.

Where Stove goes off the rails is with his claim that the existence of classes of people like soldiers, priests, and doctors is evidence of altruism. (NB: Stove was an atheist, so his inclusion of priests isn’t any kind of defense of religion.)

People become soldiers, priests, and doctors for various reasons, including (among many non-altruistic things) a love of danger (soldiers), a desire to control the lives of others (soldiers, priests, and doctors), an intellectual challenge that has nothing to do with caring for others (doctors), earning a lot of money (doctors), prestige (high-ranking soldiers, priests, and doctors), and job security (priests and doctors). Where’s the altruism in any of that?

Where Stove really goes off the rails is with his claim that redistributive taxation is evidence of altruism. As if human beings live in monolithic societies (like ant colonies), where the will of one was the will of all. And as if government represents the “will of the people”, when all it represents is the will of a small number of people who have been granted the power to govern by garnering a bare minority of votes cast by a minority of the populace, by their non-elected bureaucratic agents, and by (mostly) non-elected judges.

 

Hurricane Hysteria

UPDATED 09/15/17 AND 09/16/17

Yes, hurricanes are bad things when they kill and injure people, destroy property, and saturate the soil with seawater. But hurricanes are in the category of “stuff happens”.

Contrary to the true believers in catastrophic anthropogenic global warming (CAGW), hurricanes are not the fault of human beings. Hurricanes are not nature’s “retribution” for mankind’s “sinful” ways, such as the use of fossil fuels.

How do I know? Because there are people who actually look at the numbers. See, for example, “Hate on Display: Climate Activists Go Bonkers Over #Irma and Nonexistent Climate Connection” by Anthony Watts (Watts Up With That?, September 11, 2017). See also Michel de Rougemont’s “Correlation of Accumulated Cyclone Energy and Atlantic Multidecadal Oscillations” (Watts Up With That?, September 4, 2017).

M. de Rougemont’s post addresses accumulated cyclone energy (ACE):

The total energy accumulated each year by tropical storms and hurricanes (ACE) is also showing such a cyclic pattern.

NOAA’s Hurricane Research Division explanation of ACE: “the ACE is calculated by squaring the maximum sustained surface wind in the system every six hours (knots) and summing it up for the season. It is expressed in 10⁴ kt².” Direct instrumental observations are available as monthly series since 1848. A historic reconstruction since 1851 was done by NOAA (yearly means).


Figure 2: Yearly accumulated cyclone energy (ACE). ACE_7y: centered running average over 7 years.

A correlation between ACE and AMO [Atlantic Multidecadal Oscillation] is confirmed by regression analysis.


Figure 3: Correlation ACE = f(AMO), using the running averages over 7 years. AMO: yearly means of the Atlantic Multidecadal Oscillation. ACE_7y: yearly observed accumulated cyclone energy. ACE_calc: calculated ACE using the indicated formula.


Thus, a simple, linear relation ties ACE to AMO, in part directly, and in part with an 18-year delay. The correlation coefficient is astonishingly good.
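NOAA’s definition of ACE, quoted above, is straightforward to compute. Here is a minimal sketch; the 35-knot cutoff (readings below tropical-storm strength are excluded) is an assumption not stated in the quoted definition, and the wind values in the example are made up:

```python
def ace(six_hourly_winds_kt):
    # Square each 6-hourly maximum sustained wind (knots), keep only
    # readings at tropical-storm strength or above (>= 35 kt, assumed),
    # sum them, and express the result in units of 10^4 kt^2.
    return sum(v * v for v in six_hourly_winds_kt if v >= 35) / 10_000

# e.g. a storm holding 100-kt winds for one day (four 6-hour readings)
print(ace([100, 100, 100, 100]))  # 4.0
```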

Anthony Watts adds fuel to this fire (or ice to this cocktail) in “Report: Ocean Cycles, Not Humans, May Be Behind Most Observed Climate Change” (Watts Up With That?, September 15, 2017). There, he discusses a report by Anastasios Tsonis, which I have added to the list of related readings, below:

… Anastasios Tsonis, emeritus distinguished professor of atmospheric sciences at the University of Wisconsin-Milwaukee, describes new and cutting-edge research into natural climatic cycles, including the well-known El Niño cycle and the less familiar North Atlantic Oscillation and Pacific Decadal Oscillation.

He shows how interactions between these ocean cycles have been shown to drive changes in the global climate on timescales of several decades.

Professor Tsonis says:

We can show that at the start of the 20th century, the North Atlantic Oscillation pushed the global climate into a warming phase, and in 1940 it pushed it back into cooling mode. The famous “pause” in global warming at the start of the 21st century seems to have been instigated by the North Atlantic Oscillation too.

In fact, most of the changes in the global climate over the period of the instrumental record seem to have their origins in the North Atlantic.

Tsonis’ insights have profound implications for the way we view calls for climate alarm.

It may be that another shift in the North Atlantic could bring about another phase shift in the global climate, leading to renewed cooling or warming for several decades to come.

These climatic cycles are entirely natural, and can tell us nothing about the effect of carbon dioxide emissions. But they should inspire caution over the slowing trajectory of global warming we have seen in recent decades.

As Tsonis puts it:

While humans may play a role in climate change, other natural forces may play important roles too.

There are other reasons to be skeptical of CAGW, and even of AGW. For one thing, temperature records are notoriously unreliable, especially records from land-based thermometers. (See, for example, these two posts at Watts Up With That?: “Press Release – Watts at #AGU15 The Quality of Temperature Station Siting Matters for Temperature Trends” by Anthony Watts on December 17, 2015, and “Ooops! Australian BoM Climate Readings May Be Invalid Due To Lack of Calibration“, on September 11, 2017.) And when those records aren’t skewed by siting and lack-of-coverage problems, they’re skewed by fudging the numbers to “prove” CAGW. (See my post, “Global-Warming Hype“, August 22, 2017.) Moreover, the models that “prove” CAGW and AGW are terrible, to put it bluntly. (Again, see “Global-Warming Hype“, and also Dr. Tim Ball’s post of September 16, 2017, “Climate Models Can’t Even Approximate Reality Because Atmospheric Structure and Movements are Virtually Unknown” at Watts Up With That?)

It’s certainly doubtful that NOAA’s reconstruction of ACE is accurate and consistent as far back as 1851. I hesitate to give credence to a data series that predates the confluence of satellite observations, ocean buoys, and specially equipped aircraft. The history of weather satellites casts doubt on the validity of aggregate estimates for any period preceding the early 1960s.

As it happens, the data sets for tropical cyclone activity that are maintained by the Tropical Meteorology Project at Colorado State University cover all six of the relevant ocean basins as far back as 1972. And excluding the North Indian Ocean basin — which is by far the least active — the coverage goes back to 1961 (and beyond).

Here’s a graph of the annual values for each basin from 1961 through 2016:

Here’s a graph of the annual totals for 1961-2016, without the North Indian Ocean basin:

The red line is the sum of ACE for all five basins, including the Northwest Pacific basin; the yellow line is the sum of ACE for four basins, including the Northeast Pacific basin; etc.

The exclusion of the North Indian Ocean basin makes little difference in the totals, which look like this with the inclusion of that basin:

I have these observations about the numbers represented in the preceding graphs:

If one is a believer in CAGW (the G stands for global), it is a lie (by glaring omission) to focus on random, land-falling hurricanes hitting the U.S.

Tropical cyclone activity in the North Atlantic basin, which includes storms that hit the U.S., is not a major factor in the level of global activity.

The level of activity in the North Atlantic basin is practically flat between 1961 and 2016.

The overall level of activity is practically flat between 1961 and 2016, with the exception of spikes that seem to coincide with strong El Niño events.

There is a “pause” in the overall level of activity between the late 1990s and 2015 (with the exception of an El Niño-related spike in 2004). The pause coincides with the pause in global temperatures, which suggests an unsurprising correlation between the level of tropical cyclone activity and the warming of the globe — or lack thereof. But it doesn’t explain that warming, and climate models that “explain” it primarily as a function of the accumulation of atmospheric CO2 are notoriously unreliable.

In fact, NOAA’s reconstruction of ACE in the North Atlantic basin — which, if anything, probably understates ACE before the early 1960s — is rather suggestive:

The recent spikes in ACE are not unprecedented. And there are many prominent spikes that predate the late-20th-century temperature rise on which “warmism” is predicated.

I am very sorry for the victims of Harvey, Irma, and every other weather-related disaster — and of every other disaster, whether man-made or not. But I am not about to reduce my carbon footprint because of the Luddite hysterics who dominate and cling to the quasi-science of climatology.


Other related reading:
Ron Clutz, “Temperatures According to Climate Models“, Science Matters, March 24, 2015
Dr. Tim Ball, “Long-Term Climate Change: What Is a Reasonable Sample Size?“, Watts Up With That?, February 7, 2016
The Global Warming Policy Foundation, Climate Science: Assumptions, Policy Implications, and the Scientific Method, 2017
John Mauer, “Through the Looking Glass with NASA GISS“, Watts Up With That?, February 22, 2017
George White, “A Consensus of Convenience“, Watts Up With That?, August 20, 2017
Jennifer Marohasy, “Most of the Recent Warming Could be Natural“, Jennifer Marohasy, August 21, 2017
Anthony Watts, “What You Need to Know and Are Not Told about Hurricanes“, Watts Up With That?, September 15, 2017
Anastasios Tsonis, The Little Boy: El Niño and Natural Climate Change, Global Warming Policy Foundation, GWPF Report 26, 2017

Other related posts:
AGW: The Death Knell (with many links to related reading and earlier posts)
Not-So-Random Thoughts (XIV) (second item)
AGW in Austin?
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II) (with more links to related reading)

Babe Ruth and the Hot-Hand Hypothesis

According to Wikipedia, the so-called hot-hand fallacy is that “a person who has experienced success with a seemingly random event has a greater chance of further success in additional attempts.” The article continues:

[R]esearchers for many years did not find evidence for a “hot hand” in practice. However, later research has questioned whether the belief is indeed a fallacy. More recent studies using modern statistical analysis have shown that there is evidence for the “hot hand” in some sporting activities.

I won’t repeat the evidence cited in the Wikipedia article, nor will I link to the many studies about the hot-hand effect. You can follow the link and read it all for yourself.

What I will do here is offer an analysis that supports the hot-hand hypothesis, taking Babe Ruth as a case in point. Ruth was a regular position player (non-pitcher) from 1919 through 1934. In that span of 16 seasons he compiled 688 home runs (HR) in 7,649 at-bats (AB) for an overall record of 0.0900 HR/AB. Here are the HR/AB tallies for each of the 16 seasons:

Year HR/AB
1919 0.067
1920 0.118
1921 0.109
1922 0.086
1923 0.079
1924 0.087
1925 0.070
1926 0.095
1927 0.111
1928 0.101
1929 0.092
1930 0.095
1931 0.086
1932 0.090
1933 0.074
1934 0.060

Despite the fame that accrues to Ruth’s 1927 season, when he hit 60 home runs, his best season for HR/AB came in 1920. In 1919, Ruth set a new single-season record with 29 HR. He almost doubled that number in 1920, getting 54 HR in 458 AB for 0.118 HR/AB.
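The rates in the table are simple ratios of the totals given above, e.g.:

```python
career_hr, career_ab = 688, 7649   # Ruth's 1919-1934 totals, from the text
hr_1920, ab_1920 = 54, 458         # Ruth's 1920 season

print(round(career_hr / career_ab, 3))  # 0.09
print(round(hr_1920 / ab_1920, 3))      # 0.118
```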

Here’s what that season looks like, in graphical form:

The word for it is “streaky”, which isn’t surprising. That’s the way of most sports. Streaks include not only cold spells but also hot spells. Look at the relatively brief stretches in which Ruth was shut out in the HR department. And look at the relatively long stretches in which he readily exceeded his HR/AB for the season. (For more about the hot hand and streakiness, see Brett Green and Jeffrey Zwiebel, “The Hot-Hand Fallacy: Cognitive Mistakes or Equilibrium Adjustments? Evidence from Major League Baseball“, Stanford Graduate School of Business, Working Paper No. 3101, November 2013.)

The same pattern can be inferred from this composite picture of Ruth’s 1919-1934 seasons:

Here’s another way to look at it:

If hitting home runs were a random thing — which they would be if the hot hand were a fallacy — the distribution would be tightly clustered around the mean of 0.0900 HR/AB. Nor would there be a gap between 0 HR/AB and the 0.03 to 0.06 bin. In fact, the gap is wider than that; it goes from 0 to 0.042 HR/AB. When Ruth broke out of a home-run slump, he broke out with a vengeance, because he had the ability to do so.
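The null model here can be sketched directly: treat each at-bat as an independent 0.0900 chance of a home run and simulate 16 seasons. This is only a sketch of what “random” would look like; the per-season at-bat count (career AB divided by 16 seasons, roughly 478) is an assumption:

```python
import random

random.seed(1)
P_HR = 0.09          # Ruth's overall HR/AB, from the text
AB_PER_SEASON = 478  # assumed: ~7,649 career AB spread over 16 seasons

def random_season_rate():
    # each at-bat is an independent Bernoulli trial with probability P_HR
    hr = sum(random.random() < P_HR for _ in range(AB_PER_SEASON))
    return hr / AB_PER_SEASON

rates = [random_season_rate() for _ in range(16)]
print(min(rates), max(rates))
```

Comparing the simulated spread of season rates with Ruth’s actual spread (0.060 to 0.118) is one way to judge how tightly a purely random hitter would cluster around the mean.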

In other words, Ruth’s hot streaks weren’t luck. They were the sum of his ability and focus (or “flow“); he was “putting it all together”. The flow was broken at times — by a bit of bad luck, a bout of indigestion, a lack of sleep, a hangover, an opponent who “had his number”, etc. But a great athlete like Ruth bounces back and puts it all together again and again, until his skills fade to the point that he can’t overcome his infirmities by waiting for his opponents to make mistakes.

The hot hand is the default condition for a great player like a Ruth or a Cobb. The cold hand is the exception until the great player’s skills finally wither. And there’s no sharp dividing line between the likes of Cobb and Ruth and lesser mortals. Anyone who has the ability to play a sport at a professional level (and many an amateur, too) will play with a hot hand from time to time.

The hot hand isn’t a fallacy or a matter of pure luck (or randomness). It’s an artifact of skill.


Related posts:
Flow
Fooled by Non-Randomness
Randomness Is Over-Rated
Luck and Baseball, One More Time
Pseudoscience, “Moneyball,” and Luck
Ty Cobb and the State of Science
The American League’s Greatest Hitters: III

Pattern-Seeking

UPDATED 09/04/17

Scientists and analysts are reluctant to accept the “stuff happens” explanation for similar but disconnected events. The blessing and curse of the scientific-analytic mind is that it always seeks patterns, even where there are none to be found.

UPDATE 1

The version of this post that appears at Ricochet includes the following comments and replies:

Comment — Cool stuff, but are you thinking of any particular pattern/maybe-not-pattern in particular?

My reply — The example that leaps readily to mind is “climate change”, the gospel of which is based on the fleeting (25-year) coincidence of rising temperatures and rising CO2 emissions. That, in turn, leads to the usual kind of hysteria about “climate change” when something like Harvey occurs.

Comment — It’s not a coincidence when the numbers are fudged.

My reply — The temperature numbers have been fudged to some extent, but even qualified skeptics accept the late-20th-century temperature rise and the long-term rise in CO2. What’s really at issue is the cause of the temperature rise. The true believers seized on CO2 to the near-exclusion of other factors. How else could they justify their puritanical desire to control the lives of others, or (if not that) their underlying anti-scientific mindset, which seeks patterns instead of truths?

Another example, which applies to non-scientists and (some) scientists, is the identification of random arrangements of stars as “constellations”, simply because they “look” like something. Yet another example is the penchant for invoking conspiracy theories to explain (or rationalize) notorious events.

Returning to science, it is pattern-seeking which drives scientists to develop explanations that are later discarded and even discredited as wildly wrong. I list a succession of such explanations in my post “The Science Is Settled“.

UPDATE 2

Political pundits, sports writers, and sports commentators are notorious for making predictions that rely on tenuous historical parallels. I herewith offer an example, drawn from this very blog.

Here is the complete text of “A Baseball Note: The 2017 Astros vs. the 1951 Dodgers“, which I posted on the 14th of last month:

If you were following baseball in 1951 (as I was), you’ll remember how that season’s Brooklyn Dodgers blew a big lead, wound up tied with the New York Giants at the end of the regular season, and lost a 3-game playoff to the Giants on Bobby Thomson’s “shot heard ’round the world” in the bottom of the 9th inning of the final playoff game.

On August 11, 1951, the Dodgers took a doubleheader from the Boston Braves and gained their largest lead over the Giants — 13 games. The Dodgers at that point had a W-L record of 70-36 (.660), and would top out at .667 two games later. But their W-L record for the rest of the regular season was only .522. So the Giants caught them and went on to win what is arguably the most dramatic playoff in the history of professional sports.

The 2017 Astros peaked earlier than the 1951 Dodgers, attaining a season-high W-L record of .682 on July 5, and leading the second-place team in the AL West by 18 games on July 28. The Astros’ lead has dropped to 12 games, and the team’s W-L record since the July 5 peak is only .438.

The Los Angeles Angels might be this year’s version of the 1951 Giants. The Angels have come from 19 games behind the Astros on July 28, to trail by 12. In that span, the Angels have gone 11-4 (.733).

Hold onto your hats.

Since I wrote that, the Angels have gone 10-9, while the Astros have gone 12-8 and increased their lead over the Angels to 13.5 games. It’s still possible that the Astros will collapse and the Angels will surge. But the contest between the two teams no longer resembles the Dodgers-Giants duel of 1951, when the Giants had closed to 5.5 games behind the Dodgers at this point in the season.
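The winning-percentage and games-behind figures above follow from simple arithmetic; the trailing team’s record in the example below is hypothetical, chosen only to illustrate the formulas:

```python
def win_pct(wins, losses):
    # winning percentage, e.g. the 1951 Dodgers' 70-36 -> .660
    return wins / (wins + losses)

def games_behind(leader_w, leader_l, trailer_w, trailer_l):
    # standard games-behind formula
    return ((leader_w - trailer_w) + (trailer_l - leader_l)) / 2

print(round(win_pct(70, 36), 3))     # 0.66
print(games_behind(70, 36, 57, 49))  # hypothetical 57-49 trailer: 13.0
```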

My “model” of the 2017 contest between the Astros and Angels was on a par with the disastrously wrong models that “prove” the inexorability of catastrophic anthropogenic global warming. The models are disastrously wrong because they are being used to push government policy in counterproductive directions: wasting money on “green energy” while shutting down efficient sources of energy at the cost of real jobs and economic growth.


Related posts:
Hemibel Thinking
The Limits of Science
The Thing about Science
Words of Caution for Scientific Dogmatists
What’s Wrong with Game Theory
Debunking “Scientific Objectivity”
Pseudo-Science in the Service of Political Correctness
Science’s Anti-Scientific Bent
Mathematical Economics
Modeling Is Not Science
Beware the Rare Event
Physics Envy
What Is Truth?
The Improbability of Us
We, the Children of the Enlightenment
In Defense of Subjectivism
The Atheism of the Gaps
The Ideal as a False and Dangerous Standard
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Luck and Baseball, One More Time
Are the Natural Numbers Supernatural?
The Candle Problem: Balderdash Masquerading as Science
More about Luck and Baseball
Combinatorial Play
Pseudoscience, “Moneyball,” and Luck
The Fallacy of Human Progress
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
Verbal Regression Analysis, the “End of History,” and Think-Tanks
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
The “Marketplace” of Ideas
Time and Reality
My War on the Misuse of Probability
Ty Cobb and the State of Science
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
Is Science Self-Correcting?
Taleb’s Ruinous Rhetoric
Words Fail Us
Fine-Tuning in a Wacky Wrapper
Tricky Reasoning
Modeling Revisited
Bayesian Irrationality
The Fragility of Knowledge

The Fragility of Knowledge

A recent addition to the collection of essays at “Einstein’s Errors” relies mainly on Christoph von Mettenheim’s Popper versus Einstein. One of Mettenheim’s key witnesses for the prosecution of Einstein’s special theory of relativity (STR) is Alfred Tarski, a Polish-born logician and mathematician. According to Mettenheim, Tarski showed

that all the axioms of geometry [upon which STR is built] are in fact nominalistic definitions, and therefore have nothing to do with truth, but only with expedience. [p. 86]

Later:

Tarski has demonstrated that logical and mathematical inferences can never yield an increase of empirical information because they are based on nominalistic definitions of the most simple terms of our language. We ourselves give them their meaning and cannot, therefore, get out of them anything but what we ourselves have put into them. They are tautological in the sense that any information contained in the conclusion must also have been contained in the premises. This is why logic and mathematics alone can never lead to scientific discoveries. [p. 100]

Mettenheim refers also to Alfred North Whitehead, a great English mathematician and philosopher who preceded Tarski. I am reading Whitehead’s Science and the Modern World thanks to my son, who recently wrote about it. I had heretofore only encountered the book in bits and snatches. I will have more to say about it in future posts. For now, I am content to quote this relevant passage, which presages Tarski’s theme and goes beyond it:

Thought is abstract; and the intolerant use of abstractions is the major vice of the intellect. This vice is not wholly corrected by the recurrence to concrete experience. For after all, you need only attend to those aspects of your concrete experience which lie within some limited scheme. There are two methods for the purification of ideas. One of them is dispassionate observation by means of the bodily senses. But observation is selection. [p. 18]

More to come.

Mettenheim on Einstein’s Relativity

I have added “Mettenheim on Einstein’s Relativity – Part I” to “Einstein’s Errors“. The new material draws on Part I of Christoph von Mettenheim’s Popper versus Einstein: On the Philosophical Foundations of Physics (Tübingen: Mohr Siebeck, 1998). Mettenheim strikes many telling blows against STR. These go to the heart of STR and Einstein’s view of science:

[T]o Einstein the axiomatic method of Euclidean geometry was the method of all science; and the task of the scientist was to find those fundamental truths from which all other statements of science could then be derived by purely logical inference. He explicitly said that the step from geometry to physics was to be achieved by simply adding to the axioms of Euclidean geometry one single further axiom, namely the sentence

Regarding the possibilities of their position solid physical bodies will behave like the bodies of Euclidean geometry.

Popper versus Einstein, p. 30

*     *     *

[T]he theory of relativity as Einstein stated it was a mathematical theory. To him the logical necessity of his theory served as an explanation of its results. He believed that nature itself will observe the rules of logic. His words were that

experience of course remains the sole criterion of the serviceability of a mathematical construction for physics, but the truly creative principle resides in mathematics.

Popper versus Einstein, pp. 61-62

*     *     *

There’s much, much more. Go there and see for yourself.

Bayesian Irrationality

I just came across a strange and revealing statement by Tyler Cowen:

I am frustrated by the lack of Bayesianism in most of the religious belief I observe. I’ve never met a believer who asserted: “I’m really not sure here. But I think Lutheranism is true with p = .018, and the next strongest contender comes in only at .014, so call me Lutheran.” The religious people I’ve known rebel against that manner of framing, even though during times of conversion they may act on such a basis.

I don’t expect all or even most religious believers to present their views this way, but hardly any of them do. That in turn inclines me to think they are using belief for psychological, self-support, and social functions.

I wouldn’t expect anyone to say something like “Lutheranism is true with p = .018”. Lutheranism is either true or false. Just as a person on trial is either guilty or innocent. One may have doubts about the truth of Lutheranism or the guilt of a defendant, but those doubts have nothing to do with probability. Neither does Bayesianism.

In defense of probability, I will borrow heavily from myself. According to Wikipedia (as of December 19, 2014):

Bayesian probability represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials.

An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.”

Or consider this account:

The Bayesian approach to learning is based on the subjective interpretation of probability. The value of the proportion p is unknown, and a person expresses his or her opinion about the uncertainty in the proportion by means of a probability distribution placed on a set of possible values of p….

“Level of certainty” and “subjective interpretation” mean “guess.” The guess may be “educated.” It’s well known, for example, that a balanced coin will come up heads about half the time, in the long run. But to say that “I’m 50-percent confident that the coin will come up heads” is to say nothing meaningful about the outcome of a single coin toss. There are as many probable outcomes of a coin toss as there are bystanders who are willing to make a statement like “I’m x-percent confident that the coin will come up heads.” Which means that a single toss doesn’t have a probability, though it can be the subject of many opinions as to the outcome.

Returning to reality, Richard von Mises eloquently explains frequentism in Probability, Statistics and Truth (second revised English edition, 1957). Here are some excerpts:

The rational concept of probability, which is the only basis of probability calculus, applies only to problems in which either the same event repeats itself again and again, or a great number of uniform elements are involved at the same time. Using the language of physics, we may say that in order to apply the theory of probability we must have a practically unlimited sequence of uniform observations. [P. 11]

*     *     *

In games of dice, the individual event is a single throw of the dice from the box and the attribute is the observation of the number of points shown by the dice. In the game of “heads or tails”, each toss of the coin is an individual event, and the side of the coin which is uppermost is the attribute. [P. 11]

*     *     *

We must now introduce a new term…. This term is “the collective”, and it denotes a sequence of uniform events or processes which differ by certain observable attributes…. All the throws of dice made in the course of a game [of many throws] form a collective wherein the attribute of the single event is the number of points thrown…. The definition of probability which we shall give is concerned with ‘the probability of encountering a single attribute in a given collective’. [Pp. 11-12]

*     *     *

[A] collective is a mass phenomenon or a repetitive event, or, simply, a long sequence of observations for which there are sufficient reasons to believe that the relative frequency of the observed attribute would tend to a fixed limit if the observations were indefinitely continued. The limit will be called the probability of the attribute considered within the collective. [P. 15, emphasis in the original]

*     *     *

The result of each calculation … is always … nothing else but a probability, or, using our general definition, the relative frequency of a certain event in a sufficiently long (theoretically, infinitely long) sequence of observations. The theory of probability can never lead to a definite statement concerning a single event. The only question that it can answer is: what is to be expected in the course of a very long sequence of observations? [P. 33, emphasis added]
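Von Mises’s limiting relative frequency is easy to illustrate with a simulated coin: as the sequence of tosses grows, the relative frequency of heads settles toward one-half. This is my sketch, not part of von Mises’s text:

```python
import random

random.seed(0)
# Relative frequency of heads over longer and longer sequences of
# simulated fair-coin tosses; the frequency tends toward the limit 0.5.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)
```

Nothing in the output says anything about any single toss; the probability attaches only to the long sequence, which is exactly the distinction drawn above.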

Cowen has always struck me as intellectually askew — looking at things from odd angles just for the sake of doing so. In that respect he reminds me of a local news anchor whose suits, shirts, ties, and pocket handkerchiefs almost invariably clash in color and pattern. If there’s a method to his madness, other than attention-getting, it’s lost on me — as is Cowen’s skewed, attention-getting way of thinking.

Modeling Revisited

Arnold Kling comments on a post by John Taylor, who writes about the Macroeconomic Modelling and Model Comparison Network (MMCN), which

is one part of a larger project called the Macroeconomic Model Comparison Initiative (MMCI)…. That initiative includes the Macroeconomic Model Data Base, which already has 82 models that have been developed by researchers at central banks, international institutions, and universities. Key activities of the initiative are comparing solution methods for speed and accuracy, performing robustness studies of policy evaluations, and providing more powerful and user-friendly tools for modelers.

Kling says: “Why limit the comparison to models? Why not compare models with verbal reasoning?” I say: a pox on economic models, whether they are mathematical or verbal.

That said, I do harbor special disdain for mathematical models, including statistical estimates of such models. Reality is nuanced. Verbal descriptions of reality, being more nuanced than mathematics, can represent it more closely than mathematics can.

Mathematical modelers are quick to point out that a mathematical model can express complex relationships which are difficult to express in words. True, but the words must always precede the mathematics. Long usage may enable a person to grasp the meaning of 2 + 2 = 4 without consciously putting it into words, but only because he has already done so and committed the formula to memory.

Do you remember word problems? As I remember them, the words came first:

John is twenty years younger than Amy, and in five years’ time he will be half her age. What is John’s age now?

Then came the math:

Solve for J [John’s age]:

J = A − 20
J + 5 = (A + 5) / 2

[where A = Amy’s age]

What would be the point of presenting the math, then asking for the words?
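For what it’s worth, the two equations pin down the answer by simple substitution: John is 15 and Amy is 35. A brute-force check (my own, for illustration; not part of the original problem) confirms it:

```python
# The word problem's equations:
#   J = A - 20            (John is twenty years younger than Amy)
#   J + 5 = (A + 5) / 2   (in five years he will be half her age)
def solve():
    for a in range(20, 200):  # Amy must be at least 20 for John to exist
        j = a - 20
        if j + 5 == (a + 5) / 2:
            return j, a
    return None

john, amy = solve()
print(f"John is {john}, Amy is {amy}")  # John is 15, Amy is 35
```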

Mathematics is a man-made tool. It probably started with counting. Sheep? Goats? Bananas? It doesn’t matter what it was. What matters is that the actual thing, which had a spoken name, came before the numbering convention that enabled people to refer to three sheep without having to draw or produce three actual sheep.

But … when it came to bartering sheep for loaves of bread, or whatever, those wily ancestors of ours knew that sheep come in many sizes, ages, fecundity, and states of health, and in two sexes. (Though I suppose that the LGBTQ movement has by now “discovered” homosexual and transgender sheep, and transsexual sheep may be in the offing.) Anyway, there are so many possible combinations of sizes, ages, fecundity, and states of health that it was (and is) impractical to reduce them to numbers. A quick, verbal approximation would have to do in the absence of the real thing. And the real thing would have to be produced before Grog and Grok actually exchanged X sheep for Y loaves of bread, unless they absolutely trusted each other’s honesty and descriptive ability.

Things are somewhat different in this age of mass production and commodification. But even if it’s possible to add sheep that have been bred for near-uniformity or nearly identical loaves of bread or Paper Mate Mirado Woodcase Pencils, HB 2, Yellow Barrel, it’s not possible to add those pencils to the sheep and the loaves of bread. The best that one could do is to list the components of such a conglomeration by name and number, with the caveat that there’s a lot of variability in the sheep, goats, bananas, and bread.

An economist would say that it is possible to add a collection of disparate things: Just take the sales price of each one, multiply it by the quantity sold, and if you do that for every product and service produced in the U.S. during a year you have an estimate of GDP. (I’m being a bit loose with the definition of GDP, but it’s good enough for the point I wish to make.) Further, some economists will tout this or that model which estimates changes in the value of GDP as a function of such things as interest rates, the rate of government spending, and estimates of projected consumer spending.
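The economist’s procedure can be put in a few lines. The goods, prices, and quantities below are invented for illustration; the point is that the sum is a pure dollar figure from which the disparate physical things have vanished:

```python
# Toy "GDP" for a three-good economy: price times quantity, summed.
# The result is a single number; the sheep, bread, and pencils
# themselves are nowhere in it.
transactions = {
    "sheep":   {"price": 150.0, "quantity": 3},
    "bread":   {"price": 2.5,   "quantity": 40},
    "pencils": {"price": 0.5,   "quantity": 200},
}

gdp = sum(t["price"] * t["quantity"] for t in transactions.values())
print(gdp)  # 650.0
```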

I don’t disagree that GDP can be computed or that economic models can be concocted. But such computations and models, aside from being notoriously inaccurate (even though they deal in dollars, not in quantities of various products and services), are essentially meaningless. Aside from the errors that are inevitable in the use of sampling to estimate the dollar value of billions of transactions, there is the essential meaninglessness of the dollar value itself. Every transaction represented in an estimate of GDP (or any lesser aggregation) has a different real value to each participant in the transaction. Further, those real values, even if they could be measured and expressed in “utils”, can’t be summed because “utils” are incommensurate — there is no such thing as a social-welfare function.

Quantitative aggregations are not only meaningless, but their existence simply encourages destructive government interference in economic affairs. Mathematical modeling of “aggregate economic activity” (there is no such thing) may serve as an amusing and even lucrative pastime, but it does nothing to advance the lives and fortunes of the vast majority of Americans. In fact, it serves to retard their lives and fortunes.

All of that because pointy-headed academics, power-lusting politicians, and bamboozled bureaucrats believe that economic aggregates and quantitative economic models are meaningful. If they spent more than a few minutes thinking about what those models are supposed to represent — and don’t and can’t represent — they would at least use them with a slight pang of conscience. (I hold little hope that they would abandon them. The allure of power and the urge to “do something” are just too strong.)

Economic aggregates and models gain meaning and precision only as their compass shrinks to discrete markets for closely similar products and services. But even in the quantification of such markets there will always be some kind of misrepresentation by aggregation, if only because tastes, preferences, materials, processes, and relative prices change constantly. Only a fool believes that a quantitative economic model (of any kind) is more than a rough approximation of past reality — an approximation that will fade quickly as time marches on.

Economist Tony Lawson puts it this way:

Given the modern emphasis on mathematical modelling it is important to determine the conditions in which such tools are appropriate or useful. In other words we need to uncover the ontological presuppositions involved in the insistence that mathematical methods of a certain sort be everywhere employed. The first thing to note is that all these mathematical methods that economists use presuppose event regularities or correlations. This makes modern economics a form of deductivism. A closed system in this context just means any situation in which an event regularity occurs. Deductivism is a form of explanation that requires event regularities. Now event regularities can just be assumed to hold, even if they cannot be theorised, and some econometricians do just that and dedicate their time to trying to uncover them. But most economists want to theorise in economic terms as well. But clearly they must do so in terms that guarantee event regularity results. The way to do this is to formulate theories in terms of isolated atoms. By an atom I just mean a factor that has the same independent effect whatever the context. Typically human individuals are portrayed as the atoms in question, though there is nothing essential about this. Notice too that most debates about the nature of rationality are beside the point. Mainstream modellers just need to fix the actions of the individual of their analyses to render them atomistic, i.e., to fix their responses to given conditions. It is this implausible fixing of actions that tends to be expressed though, or is the task of, any rationality axiom. But in truth any old specification will do, including fixed rule or algorithm following as in, say, agent based modelling; the precise assumption used to achieve this matters little. Once some such axiom or assumption-fixing behaviour is made economists can predict/deduce what the factor in question will do if stimulated. 
Finally the specification in this way of what any such atom does in given conditions allows the prediction activities of economists ONLY if nothing is allowed to counteract the actions of the atoms of analysis. Hence these atoms must additionally be assumed to act in isolation. It is easy to show that this ontology of closed systems of isolated atoms characterises all of the substantive theorising of mainstream economists.

It is also easy enough to show that the real world, the social reality in which we actually live, is of a nature that is anything but a set of closed systems of isolated atoms (see Lawson, [Economics and Reality, London and New York: Routledge] 1997, [Reorienting Economics, London and New York: Routledge] 2003).

Mathematical-statistical descriptions of economic phenomena are either faithful (if selective) depictions of one-off events (which are unlikely to recur) or highly stylized renditions of complex chains of events (which almost certainly won’t recur). As Arnold Kling says in his review of Richard Bookstaber’s The End of Theory,

people are assumed to know, now and for the indefinite future, the entire range of possibilities, and the likelihood of each. The alternative assumption, that the future has aspects that are not foreseeable today, goes by the name of “radical uncertainty.” But we might just call it the human condition. Bookstaber writes that radical uncertainty “leads the world to go in directions we had never imagined…. The world could be changing right now in ways that will blindside you down the road.”

I’m picking on economics because it’s an easy target. But the “hard sciences” have their problems, too. See, for example, my work in progress about Einstein’s special theory of relativity.


Related reading:

John Cochrane, “Mallaby, the Fed, and Technocratic Illusions”, The Grumpy Economist, July 5, 2017

Vincent Randall, “The Uncertainty Monster: Lessons from Non-Orthodox Economics”, Climate Etc., July 5, 2017

Related posts:

Modeling Is Not Science
Microeconomics and Macroeconomics
Why the “Stimulus” Failed to Stimulate
Baseball Statistics and the Consumer Price Index
The Keynesian Multiplier: Phony Math
Further Thoughts about the Keynesian Multiplier
The Wages of Simplistic Economics
The Essence of Economics
Economics and Science
Economists As Scientists
Mathematical Economics
Economic Modeling: A Case of Unrewarded Complexity
Economics from the Bottom Up
Unorthodox Economics: 1. What Is Economics?
Unorthodox Economics: 2. Pitfalls
Unorthodox Economics: 3. What Is Scientific about Economics?
Unorthodox Economics 4: A Parable of Political Economy

“Science” vs. Science: The Case of Evolution, Race, and Intelligence

If you were to ask those people who marched for science whether they believe in evolution, they would answer with a resounding “yes”. Ask them whether they believe that all branches of the human race evolved identically, and you will be met with hostility. The problem, for them, is that an admission of the obvious — differential evolution, resulting in broad racial differences — leads to a fact that they don’t want to admit: there are broad racial differences in intelligence, differences that must have evolutionary origins.

“Science” — the cherished totem of left-wing ideologues — isn’t the same thing as science. The totemized version consists of whatever set of facts and hypotheses suit the left’s agenda. In the case of “climate change”, for example, the observation that in the late 1900s temperatures rose for a period of about 25 years coincident with a reported rise in the level of atmospheric CO2 occasioned the hypothesis that the generation of CO2 by humans causes temperatures to rise. This is a reasonable hypothesis, given the long-understood, positive relationship between temperature and so-called greenhouse gases. But it comes nowhere close to confirming what leftists seem bent on believing and “proving” with hand-tweaked models, which is that if humans continue to emit CO2, and do so at a higher rate than in the past, temperatures will rise to the point that life on Earth will become difficult if not impossible to sustain. There is ample evidence to support the null hypothesis (that “climate change” isn’t catastrophic) and the alternative view (that recent warming is natural and caused mainly by things other than human activity).

Leftists want to believe in catastrophic anthropogenic global warming because it suits the left’s puritanical agenda, as did Paul Ehrlich’s discredited thesis that population growth would outstrip the availability of food and resources, leading to mass starvation and greater poverty. Population control therefore became a leftist mantra, and remains one despite the generally rising prosperity of the human race and the diminution of scarcity (except where leftist governments, like Venezuela’s, create misery).

Why are leftists so eager to believe in problems that portend catastrophic consequences which “must” be averted through draconian measures, such as enforced population control, taxes on soft drinks above a certain size, the prohibition of smoking not only in government buildings but in all buildings, and decreed reductions in CO2-emitting activities (which would, in fact, help to impoverish humans)? The common denominator of such measures is control. And yet, by the process of psychological projection, leftists are always screaming “fascist” at libertarians and conservatives who resist control.

Returning to evolution, why are leftists so eager to embrace it or, rather, what they choose to believe about it? My answers are that (a) it’s “science” (it’s science only when it’s spelled out in detail, uncertainties and all) and (b) it gives leftists (who usually are atheists) a stick with which to beat “creationists”.

But when it comes to race, leftists insist on denying what’s in front of their eyes: evolutionary disparities in such phenomena as skin color, hair texture, facial structure, running and jumping ability, cranial capacity, and intelligence.

Why? Because the urge to control others is of a piece with the superiority with which leftists believe they’re endowed because they are mainly white persons of European descent and above-average intelligence (just smart enough to be dangerous). Blacks and Hispanics who vote left do so mainly for the privileges it brings them. White leftists are their useful idiots.

Leftism, in other words, is a manifestation of “white privilege”, which white leftists feel compelled to overcome through paternalistic condescension toward blacks and other persons of color. (But not East Asians or the South Asians who have emigrated to the U.S., because the high intelligence of those groups is threatening to white leftists’ feelings of superiority.) What could be more condescending, and less scientific, than to deny what evolution has wrought in order to advance a political agenda?

Leftist race-denial, which has found its way into government policy, is akin to Stalin’s support of Lysenkoism, which its author cleverly aligned with Marxism. Lysenkoism

rejected Mendelian inheritance and the concept of the “gene”; it departed from Darwinian evolutionary theory by rejecting natural selection.

This brings me to Stephen Jay Gould, a leading neo-Lysenkoist and a fraudster of “science” who did much to deflect science from the question of race and intelligence:

[In The Mismeasure of Man] Gould took the work of a 19th century physical anthropologist named Samuel George Morton and made it ridiculous. In his telling, Morton was a fool and an unconscious racist — his project of measuring skull sizes of different ethnic groups conceived in racism and executed in same. Why, Morton clearly must have thought Caucasians had bigger brains than Africans, Indians, and Asians, and then subconsciously mismeasured the skulls to prove they were smarter.

The book then casts the entire project of measuring brain function — psychometrics — in the same light of primitivism.

Gould’s antiracist book was a hit with reviewers in the popular press, and many of its ideas about the morality and validity of testing intelligence became conventional wisdom, persisting today among the educated folks. If you’ve got some notion that IQ doesn’t measure anything but the ability to take IQ tests, that intelligence can’t be defined or may not be real at all, that multiple intelligences exist rather than a general intelligence, you can thank Gould….

Then, in 2011, a funny thing happened. Researchers at the University of Pennsylvania went and measured old Morton’s skulls, which turned out to be just the size he had recorded. Gould, according to one of the co-authors, was nothing but a “charlatan.”

The study itself couldn’t matter, though, could it? Well, recent work using MRI technology has established that descendants of East Asia have slightly more cranial capacity than descendants of Europe, who in turn have a little more than descendants of Africa. Another meta-analysis finds a mild correlation between brain size and IQ performance.

You see where this is going, especially if you already know about the racial disparities in IQ testing, and you’d probably like to hit the brakes before anybody says… what, exactly? It sounds like we’re perilously close to invoking science to argue for genetic racial superiority.

Am I serious? Is this a joke?…

… The reason the joke feels dangerous is that it incorporates a fact that is rarely mentioned in public life. In America, white people on average score higher than black people on IQ tests, by a margin of 12-15 points. And there’s one man who has been made to pay the price for that fact — the scholar Charles Murray.

Murray didn’t come up with a hypothesis of racial disparity in intelligence testing. He simply co-wrote a book, The Bell Curve, that publicized a fact well known within the field of psychometrics, a fact that makes the rest of us feel tremendously uncomfortable.

Nobody bears more responsibility for the misunderstanding of Murray’s work than Gould, who reviewed The Bell Curve savagely in the New Yorker. The IQ tests couldn’t be explained away — here he is acknowledging the IQ gap in 1995 — but the validity of IQ testing could be challenged. That was no trouble for the old Marxist.

Gould should have known that he was dead wrong about his central claim — that general intelligence, or g, as psychologists call it, was unreal. In fact, “Psychologists generally agree that the greatest success of their field has been in intelligence testing,” biologist Bernard D. Davis wrote in the Public Interest in 1983, in a long excoriation of Gould’s strange ideas.

Psychologists have found that performance on almost any test of cognition will have some correlation to other tests of cognition, even in areas that might seem distant from pure logic, such as recognizing musical notes. The more demanding tests have a higher correlation, or a high g load, as they term it.

IQ is very closely related to this measure, and turns out to be extraordinarily predictive not just for how well one does on tests, but on all sorts of real-life outcomes.

Since the publication of The Bell Curve, the data have demonstrated not just those points, but that intelligence is highly heritable (around 50 to 80 percent, Murray says), and that there’s little that can be done to permanently change the part that’s dependent on the environment….

The liberal explainer website Vox took a swing at Murray earlier this year, publishing a rambling 3,300-word hit job on Murray that made zero references to the scientific literature….

Vox might have gotten the last word, but a new outlet called Quillette published a first-rate rebuttal this week, which sent me down a three-day rabbit hole. I came across some of the most troubling facts I’ve ever encountered — IQ scores by country — and then came across some more reassuring ones from Thomas Sowell, suggesting that environment could be the main or exclusive factor after all.

The classic analogy from the environment-only crowd is of two handfuls of genetically identical seed corn, one planted in Iowa and the other in the Mojave Desert. One group flourishes; the other is stunted. While all of the variation within one group will be due to genetics, its flourishing relative to the other group will be strictly due to environment.

Nobody doubts that the United States is richer soil than Equatorial Guinea, but the analogy doesn’t prove the case. The idea that there exists a mean for human intelligence and that all racial subgroups would share it given identical environments remains a metaphysical proposition. We may want this to be true quite desperately, but it’s not something we know to be true.

For all the lines of attack, all the brutal slander thrown Murray’s way, his real crime is having an opinion on this one key issue that’s open to debate. Is there a genetic influence on the IQ testing gap? Murray has written that it’s “likely” genetics explains “some” of the difference. For this, he’s been crucified….

Murray said [in a recent interview] that the assumption “that everyone is equal above the neck” is written into social policy, employment policy, academic policy and more.

He’s right, of course, especially as ideas like “disparate impact” come to be taken as proof of discrimination. There’s no scientifically valid reason to expect different ethnic groups to have a particular representation in this area or that. That much is utterly clear.

The universities, however, are going to keep hollering about institutional racism. They are not going to accept Murray’s views, no matter what develops. [Jon Cassidy, “Mau Mau Redux: Charles Murray Comes in for Abuse, Again“, The American Spectator, June 9, 2017]

And so it goes in the brave new world of alternative facts, most of which seem to come from the left. But the left, with its penchant for pseudo-intellectualism (“science” vs. science) calls it postmodernism:

Postmodernists … eschew any notion of objectivity, perceiving knowledge as a construct of power differentials rather than anything that could possibly be mutually agreed upon…. [S]cience therefore becomes an instrument of Western oppression; indeed, all discourse is a power struggle between oppressors and oppressed. In this scheme, there is no Western civilization to preserve—as the more powerful force in the world, it automatically takes on the role of oppressor and therefore any form of equity must consequently then involve the overthrow of Western “hegemony.” These folks form the current Far Left, including those who would be described as communists, socialists, anarchists, Antifa, as well as social justice warriors (SJWs). These are all very different groups, but they all share a postmodernist ethos. [Michael Aaron, “Evergreen State and the Battle for Modernity“, Quillette, June 8, 2017]


Other related reading (listed chronologically):

Molly Hensley-Clancy, “Asians With “Very Familiar Profiles”: How Princeton’s Admissions Officers Talk About Race“, BuzzFeed News, May 19, 2017

Warren Meyer, “Princeton Appears To Penalize Minority Candidates for Not Obsessing About Their Race“, Coyote Blog, May 24, 2017

B. Wineguard et al., “Getting Voxed: Charles Murray, Ideology, and the Science of IQ“, Quillette, June 2, 2017

James Thompson, “Genetics of Racial Differences in Intelligence: Updated“, The Unz Review: James Thompson Archive, June 5, 2017

Raymond Wolters, “We Are Living in a New Dark Age“, American Renaissance, June 5, 2017

F. Roger Devlin, “A Tactical Retreat for Race Denial“, American Renaissance, June 9, 2017

Scott Johnson, “Mugging Mr. Murray: Mr. Murray Speaks“, Power Line, June 9, 2017


Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Round Up the Usual Suspects
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race
Affirmative Action Comes Home to Roost
The IQ of Nations
Race and Social Engineering
Some Notes about Psychology and Intelligence

Quantum Mechanics and Free Will

Physicist Adam Frank, in “Minding Matter” (Aeon, March 13, 2017), visits subjects that I have approached from several angles in various posts. Frank addresses the manifestation of brain activity — more properly, the activity of the central nervous system (CNS) — which is known as consciousness. But there’s a lot more to CNS activity than that. What it all adds up to is generally called “mind”, which has conscious components (things we are aware of, including being aware of being aware) and subconscious components (things that go on in the background that we might or might not become aware of).

In the traditional (non-mystical) view, each person’s mind is separate from the minds of other persons. Mind (or the concepts, perceptions, feelings, memories, etc. that comprise it) therefore defines self. I am my self (i.e., not you) because my mind is a manifestation of my body’s CNS, which isn’t physically linked to yours.

With those definitional matters in hand, Frank’s essay can be summarized and interpreted as follows:

According to materialists, mind is nothing more than a manifestation of CNS activity.

The underlying physical properties of the CNS are unknown because the nature of matter is unknown.

Matter, whatever it is, doesn’t behave in billiard-ball fashion, where cause and effect are tightly linked.

Instead, according to quantum mechanics, matter has probabilistic properties that supposedly rule out strict cause-and-effect relationships. The act of measuring matter resolves the uncertainty, but in an unpredictable way.

Mind is therefore a mysterious manifestation of quantum-mechanical processes. One’s state of mind is affected by how one “samples” those processes, that is, by one’s deliberate, conscious attempt to use one’s CNS in formulating the mind’s output (e.g., thoughts and interpretations of the world around us).

Because of the ability of mind to affect mind (“mind over matter”), it is more than merely a passive manifestation of the physical state of one’s CNS. It is, rather, a meta-state — a physical state that is created by “mental” processes that are themselves physical.

In sum, mind really isn’t immaterial. It’s just a manifestation of poorly understood material processes that can be influenced by the possessor of a mind. It’s the ultimate self-referential system, a system that can monitor and change itself to some degree.

None of this means that human beings lack free will. In fact, the complexity of mind argues for free will. This is from a 12-year-old post of mine:

Suppose I think that I might want to eat some ice cream. I go to the freezer compartment and pull out an unopened half-gallon of vanilla ice cream and an unopened half-gallon of chocolate ice cream. I can’t decide between vanilla, chocolate, some of each, or none. I ask a friend to decide for me by using his random-number generator, according to rules of his creation. He chooses the following rules:

  • If the random number begins in an odd digit and ends in an odd digit, I will eat vanilla.
  • If the random number begins in an even digit and ends in an even digit, I will eat chocolate.
  • If the random number begins in an odd digit and ends in an even digit, I will eat some of each flavor.
  • If the random number begins in an even digit and ends in an odd digit, I will not eat ice cream.

Suppose that the number generated by my friend begins in an even digit and ends in an even digit: the choice is chocolate. I act accordingly.

I didn’t inevitably choose chocolate because of events that led to the present state of my body’s chemistry, which might otherwise have dictated my choice. That is, I broke any link between my past and my choice about a future action. I call that free will.

I suspect that our brains are constructed in such a way as to produce the same kind of result in many situations, though certainly not in all situations. That is, we have within us the equivalent of an impartial friend and an (informed) decision-making routine, which together enable us to exercise something we can call free will.
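The friend-plus-rules arrangement is mechanical enough to write down. This sketch (my own; the digit-parity rules are the four listed above) makes the point that the choice depends only on the random number, not on my body’s prior chemistry:

```python
import random

def choose_flavor(number):
    """Apply the friend's four digit-parity rules to a random number."""
    digits = str(number)
    first_odd = int(digits[0]) % 2 == 1
    last_odd = int(digits[-1]) % 2 == 1
    if first_odd and last_odd:
        return "vanilla"
    if not first_odd and not last_odd:
        return "chocolate"
    if first_odd and not last_odd:
        return "some of each"
    return "no ice cream"  # even first digit, odd last digit

# The "impartial friend": a random number severs the link between
# my past and my choice.
print(choose_flavor(random.randint(10, 99999)))
```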

This rudimentary metaphor is consistent with the quantum nature of the material that underlies mind. But I don’t believe that free will depends on quantum mechanics. I believe that there is a part of mind — a part with a physical location — which makes independent judgments and arrives at decisions based on those judgments.

To extend the ice-cream metaphor, I would say that my brain’s executive function, having become aware of my craving for ice cream, taps my knowledge (memory) of snacks on hand, or directs the part of my brain that controls my movements to look in the cupboard and freezer. My executive function, having determined that my craving isn’t so urgent that I will drive to a grocery store, then compiles the available options and chooses the one that seems best suited to the satisfaction of my craving at that moment. It may be ice cream, or it may be something else. If it is ice cream, it will consult my “taste preferences” and choose between the flavors then available to me.

Given the ways in which people are seen to behave, it seems obvious that the executive function, like consciousness, is on a “different circuit” from other functions (memory, motor control, autonomic responses, etc.), just as the software programs that drive my computer’s operations are functionally separate from the data stored on the hard drive and in memory. The software programs would still be on my computer even if I erased all the data on my hard drive and in memory. So, too, would my executive function (and consciousness) remain even if I lost all memory of everything that happened to me before I awoke this morning.

Given this separateness, there should be no question that a person has free will. That is why I can sometimes resist a craving for ice cream. That is why most people are often willing and able to overcome urges, from eating candy to smoking a cigarette to punching a jerk.

Conditioning, which leads to addiction, makes it hard to resist urges — sometimes nigh unto impossible. But the ability of human beings to overcome conditioning, even severe addictions, argues for the separateness of the executive function from other functions. In short, it argues for free will.


Related posts:
Free Will: A Proof by Example?
Free Will, Crime, and Punishment
Mind, Cosmos, and Consciousness
“Feelings, Nothing More than Feelings”
Hayek’s Anticipatory Account of Consciousness
Is Consciousness an Illusion?

Special Relativity

I have removed my four posts about special relativity and incorporated them in a new page, “Einstein’s Errors.” I will update that page occasionally rather than post about special relativity, which is rather “off the subject” for this blog.

A True Scientist Speaks

I am reading, with great delight, Old Physics for New: A Worldview Alternative to Einstein’s Relativity Theory, by Thomas E. Phipps Jr. (1925-2016). Dr. Phipps was a physicist who happened to have been a member of a World War II operations research unit that evolved into the think-tank where I worked for 30 years.

Phipps challenged the basic tenets of Einstein’s special theory of relativity (STR) in Old Physics for New, an earlier book (Heretical Verities: Mathematical Themes in Physical Description), and many of his scholarly articles. I have drawn on Old Physics for New in two of my posts about STR (this and this), and will do so in future posts on the subject. But aside from STR, about which Phipps is refreshingly skeptical, I admire his honesty and clear-minded view of science.

Regarding Phipps’s honesty, I turn to his preface to the second edition of Old Physics for New:

[I]n the first edition I wrongly claimed awareness of two “crucial” experiments that would decide between Einstein’s special relativity theory and my proposed alternative. These two were (1) an accurate assessment of stellar aberration and (2) a measurement of light speed in orbit. Only the first of these is valid. The other was an error on my part, which I am obligated and privileged to correct here. [pp. xi-xii]

Phipps’s clear-minded view of science is evident throughout the book. In the preface, he scores a direct hit on pseudo-scientific faddism:

The attitude of the traditional scientist toward lies and errors has always been that it is his job to tell the truth and to eradicate mistakes. Lately, scientists, with climate science in the van, have begun openly to espouse an opposite view, a different paradigm, which marches under the black banner of “post-normal science.”

According to this new perception, before the scientist goes into his laboratory it is his duty, for the sake of mankind, to study the worldwide political situation and to decide what errors need promulgating and what lies need telling. Then he goes into his laboratory, interrogates his computer, fiddles his theory, fabricates or massages his data, etc., and produces the results required to support those predetermined lies and errors. Finally he emerges into the light of publicity and writes reports acceptable to like-minded bureaucrats in such government agencies as the National Science Foundation, offers interviews to reporters working for like-minded bosses in the media, testifies before Congress, etc., all in such a way as to suppress traditional science and ultimately to make it impossible….

In this way post-normal science wages pre-emptive war on what Thomas Kuhn famously called “normal science,” because the latter fails to promote with adequate zeal those political and social goals that the post-normal scientist happens to recognize as deserving promotion…. Post-normal behavior seamlessly blends the implacable arrogance of the up-to-date terrorist with the technique of The Big Lie, pioneered by Hitler and Goebbels…. [pp. xii-xiii]

I regret deeply that I never met or corresponded with Dr. Phipps.

Nature, Nurture, and Leniency

I recently came across an article by Brian Boutwell, “Why Parenting May not Matter and Why Most Social Science Research Is Probably Wrong” (Quillette, December 1, 2015). Boutwell is an associate professor of criminology and criminal justice at Saint Louis University. Here’s some of what he has to say about nature, nurture, and behavior:

Despite how it feels, your mother and father (or whoever raised you) likely imprinted almost nothing on your personality that has persisted into adulthood…. I do have evidence, though, and by the time we’ve strolled through the menagerie of reasons to doubt parenting effects, I think another point will also become evident: the problems with parenting research are just a symptom of a larger malady plaguing the social and health sciences. A malady that needs to be dealt with….

[L]et’s start with a study published recently in the prestigious journal Nature Genetics. Tinca Polderman and colleagues just completed the Herculean task of reviewing nearly all twin studies published by behavior geneticists over the past 50 years….

Genetic factors were consistently relevant, differentiating humans on a range of health and psychological outcomes (in technical parlance, human differences are heritable). The environment, not surprisingly, was also clearly and convincingly implicated….

[B]ehavioral geneticists make a finer grain distinction than most about the environment, subdividing it into shared and non-shared components. Not much is really complicated about this. The shared environment makes children raised together similar to each other. The term encompasses the typical parenting effects that we normally envision when we think about environmental variables. Non-shared influences capture the unique experiences of siblings raised in the same home; they make siblings different from one another….

Based on the results of classical twin studies, it just doesn’t appear that parenting—whether mom and dad are permissive or not, read to their kid or not, or whatever else—impacts development as much as we might like to think. Regarding the cross-validation that I mentioned, studies examining identical twins separated at birth and reared apart have repeatedly revealed (in shocking ways) the same thing: these individuals are remarkably similar when in fact they should be utterly different (they have completely different environments, but the same genes). Alternatively, non-biologically related adopted children (who have no genetic commonalities) raised together are utterly dissimilar to each other—despite in many cases having decades of exposure to the same parents and home environments.

One logical explanation for this is a lack of parenting influence for psychological development. Judith Rich Harris made this point forcefully in her book The Nurture Assumption…. As Harris notes, parents are not to blame for their children’s neuroses (beyond the genes they contribute to the manufacturing of that child), nor can they take much credit for their successful psychological adjustment. To put a finer point on what Harris argued, children do not transport the effects of parenting (whatever they might be) outside the home. The socialization of children certainly matters (remember, neither personality nor temperament is 100 percent heritable), but it is not the parents who are the primary “socializers”, that honor goes to the child’s peer group….

Is it possible that parents really do shape children in deep and meaningful ways? Sure it is…. The trouble is that most research on parenting will not help you in the slightest because it doesn’t control for genetic factors….

Natural selection has wired into us a sense of attachment for our offspring. There is no need to graft on beliefs about “the power of parenting” in order to justify our instinct that being a good parent is important. Consider this: what if parenting really doesn’t matter? Then what? The evidence for pervasive parenting effects, after all, looks like a foundation of sand likely to slide out from under us at any second. If your moral constitution requires that you exert god-like control over your kid’s psychological development in order to treat them with the dignity afforded any other human being, then perhaps it is time to recalibrate your moral compass…. If you want happy children, and you desire a relationship with them that lasts beyond when they’re old enough to fly the nest, then be good to your kids. Just know that it probably will have little effect on the person they will grow into.

Color me unconvinced. There’s a lot of hand-waving in Boutwell’s piece, but little in the way of crucial facts, such as:

  • How is behavior quantified?
  • Does the quantification account for all aspects of behavior (unlikely), or only those aspects that are routinely quantified (e.g., criminal convictions)?
  • Is it meaningful to say that about 50 percent of behavior is genetically determined, 45 percent is peer-driven, and 0-5 percent is due to “parenting” (as Judith Rich Harris does)? Which 50 percent, 45 percent, and 0-5 percent? And how does one add various types of behavior?
  • How does one determine (outside an unrealistic experiment) the extent to which “children do not transport the effects of parenting (whatever they might be) outside the home”?

The measurement of behavior can’t possibly be as rigorous and comprehensive as the measurement of intelligence. And even those researchers who are willing to countenance and estimate the heritability of intelligence give varying estimates of its magnitude, ranging from 50 to 80 percent.

I wonder if Boutwell, Harris, et al. would like to live in a world in which parents quit teaching their children to obey the law; refrain from lying, stealing, and hurting others; honor their obligations; respect old people; treat babies with care; and work for a living (“money doesn’t grow on trees”).

Unfortunately, the world in which we live — even in the United States — seems more and more to resemble the kind of world in which parents have failed in their duty to inculcate in their children the values of honesty, respect, and hard work. This is from a post at Dyspepsia Generation, “The Spoiled Children of Capitalism” (no longer online):

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it. And, sadly, they passed their principles, if one may use the term so loosely, down the generations to the point where young people today are scarcely worth using for fertilizer.

In 1919, or 1929, or especially 1939, the adolescents of 1969 would have had neither the leisure nor the money to create the Woodstock Nation. But mommy and daddy shelled out because they didn’t want their little darlings to be caught short, and consequently their little darlings became the worthless whiners who voted for people like Bill Clinton and Barack Obama [and who were people like Bill Clinton and Barack Obama: ED.], with results as you see them. Now that history is catching up to them, a third generation of losers can think of nothing better to do than camp out on Wall Street in hopes that the Cargo will suddenly begin to arrive again.

Good luck with that.

I subscribe to the view that the rot set in after World War II. That rot, in the form of slackerism, is more prevalent now than it ever was. It is not for nothing that Gen Y is also known as the Boomerang Generation.

Nor is it surprising that campuses have become hotbeds of petulant and violent behavior. And it’s not just students, but also faculty and administrators — many of whom are boomers. Where were these people before the 1960s, when the boomers came of age? Do you suppose that their sudden emergence was the result of a massive genetic mutation that swept across the nation in the late 1940s? I doubt it very much.

Their sudden emergence was due to the failure of too many members of the so-called Greatest Generation to inculcate in their children the values of honesty, respect, and hard work. How does one do that? By being clear about expectations and by setting limits on behavior — limits that are enforced swiftly, unequivocally, and sometimes with the palm of a hand. When children learn that they can “get away” with dishonesty, disrespect, and sloth, guess what? They become dishonest, disrespectful, and slothful. They give vent to their disrespect through whining, tantrum-like behavior, and even violence.

The leniency that’s being shown toward campus jerks — students, faculty, and administrators — is especially disgusting to this pre-boomer. University presidents need to grow backbones. Campus and municipal police should be out in force, maintaining order and arresting whoever fails to provide a “safe space” for a speaker who might offend their delicate sensibilities. Disruptive and violent behavior should be met with expulsions, firings, and criminal charges.

“My genes made me do it” is neither a valid explanation nor an acceptable excuse.


Related reading: There is a page on Judith Rich Harris’s website with a long list of links to reviews, broadcast commentary, and other discussions of The Nurture Assumption. It is to Harris’s credit that she links to negative as well as positive views of her work.

Institutional Bias

Arnold Kling:

On the question of whether Federal workers are overpaid relative to private sector workers, [Justin Fox] writes,

The Federal Salary Council, a government advisory body composed of labor experts and government-employee representatives, regularly finds that federal employees make about a third less than people doing similar work in the private sector. The conservative American Enterprise Institute and Heritage Foundation, on the other hand, have estimated that federal employees make 14 percent and 22 percent more, respectively, than comparable private-sector workers….

… Could you have predicted ahead of time which organization’s “research” would find a result favorable to Federal workers and which organization would find unfavorable results? Of course you could. So how do you sustain the belief that normative economics and positive economics are distinct from one another, that economic research cleanly separates facts from values?

I saw institutional bias at work many times in my career as an analyst at a tax-funded think-tank. My first experience with it came in the first project to which I was assigned. The issue at hand was a hot one in those days: whether the defense budget should be altered to increase the size of the Air Force’s land-based tactical air (tacair) forces while reducing the size of the Navy’s carrier-based counterpart. The Air Force’s think-tank had issued a report favorable to land-based tacair (surprise!), so the Navy turned to its think-tank (where I worked). Our report favored carrier-based tacair (surprise!).

How could two supposedly objective institutions study the same issue and come to opposite conclusions? Analytical fraud abetted by overt bias? No, that would be too obvious to the “neutral” referees in the Office of the Secretary of Defense. (Why “neutral”? Read this.)

Subtle bias is easily introduced when the issue is complex, as the tacair issue was. Where would tacair forces be required? What payloads would fighters and bombers carry? How easy would it be to set up land bases? How vulnerable would they be to an enemy’s land and air forces? How vulnerable would carriers be to enemy submarines and long-range bombers? How close to shore could carriers approach? How much would new aircraft, bases, and carriers cost to buy and maintain? What kinds of logistical support would they need, and how much would it cost? And on and on.

Hundreds, if not thousands, of assumptions underlay the results of the studies. Analysts at the Air Force’s think-tank chose those assumptions that favored the Air Force; analysts at the Navy’s think-tank chose those assumptions that favored the Navy.

Why? Not because analysts’ jobs were at stake; they weren’t. Not because the Air Force and Navy directed the outcomes of the studies; they didn’t. They didn’t have to because “objective” analysts are human beings who want “their side” to win. When you work for an institution you tend to identify with it; its success becomes your success, and its failure becomes your failure.

The same was true of the “neutral” analysts in the Office of the Secretary of Defense. They knew which way Mr. McNamara leaned on any issue, and they found themselves drawn to the assumptions that would justify his biases.

And so it goes. Bias is a rampant and ineradicable aspect of human striving. It’s ever-present in the political arena. The current state of affairs in Washington, D.C., is just the tip of the proverbial iceberg.

The prevalence and influence of bias in matters that affect hundreds of millions of Americans is yet another good reason to limit the power of government.

Not-So-Random Thoughts (XX)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

In “The Capitalist Paradox Meets the Interest-Group Paradox,” I quote from Frédéric Bastiat’s “What Is Seen and What Is Not Seen”:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

This might also be called the law of unintended consequences. It explains why so much “liberal” legislation is passed: the benefits are focused on a particular group and obvious (if overestimated); the costs are borne by taxpayers in general, many of whom fail to see that the sum of “liberal” legislation is a huge tax bill.

Ross Douthat understands:

[A] new paper, just released through the National Bureau of Economic Research, that tries to look at the Affordable Care Act in full. Its authors find, as you would expect, a substantial increase in insurance coverage across the country. What they don’t find is a clear relationship between that expansion and, again, public health. The paper shows no change in unhealthy behaviors (in terms of obesity, drinking and smoking) under Obamacare, and no statistically significant improvement in self-reported health since the law went into effect….

[T]he health and mortality data [are] still important information for policy makers, because [they] indicate[] that subsidies for health insurance are not a uniquely death-defying and therefore sacrosanct form of social spending. Instead, they’re more like other forms of redistribution, with costs and benefits that have to be weighed against one another, and against other ways to design a safety net. Subsidies for employer-provided coverage crowd out wages, Medicaid coverage creates benefit cliffs and work disincentives…. [“Is Obamacare a Lifesaver?” The New York Times, March 29, 2017]

So does Roy Spencer:

In a theoretical sense, we can always work to make the environment “cleaner”, that is, reduce human pollution. So, any attempts to reduce the EPA’s efforts will be viewed by some as just cozying up to big, polluting corporate interests. As I heard one EPA official state at a conference years ago, “We can’t stop making the environment ever cleaner”.

The question no one is asking, though, is “But at what cost?”

It was relatively inexpensive to design and install scrubbers on smokestacks at coal-fired power plants to greatly reduce sulfur emissions. The cost was easily absorbed, and electricity rates were not increased that much.

The same is not true of carbon dioxide emissions. Efforts to remove CO2 from combustion byproducts have been extremely difficult, expensive, and with little hope of large-scale success.

There is a saying: don’t let perfect be the enemy of good enough.

In the case of reducing CO2 emissions to fight global warming, I could discuss the science which says it’s not the huge problem it’s portrayed to be — how warming is only progressing at half the rate forecast by those computerized climate models which are guiding our energy policy; how there have been no obvious long-term changes in severe weather; and how nature actually enjoys the extra CO2, with satellites now showing a “global greening” phenomenon with its contribution to increases in agricultural yields.

But it’s the economics which should kill the Clean Power Plan and the alleged Social “Cost” of Carbon. Not the science.

There is no reasonable pathway by which we can meet more than about 20% of global energy demand with renewable energy…the rest must come mostly from fossil fuels. Yes, renewable energy sources are increasing each year, usually because rate payers or taxpayers are forced to subsidize them by the government or by public service commissions. But global energy demand is rising much faster than renewable energy sources can supply. So, for decades to come, we are stuck with fossil fuels as our main energy source.

The fact is, the more we impose high-priced energy on the masses, the more it will hurt the poor. And poverty is arguably the biggest threat to human health and welfare on the planet. [“Trump’s Rollback of EPA Overreach: What No One Is Talking About,” Roy Spencer, Ph.D. [blog], March 29, 2017]

*     *     *

I mentioned the Benedict Option in “Independence Day 2016: The Way Ahead,” quoting Bruce Frohnen in tacit agreement:

[Rod] Dreher has been writing a good deal, of late, about what he calls the Benedict Option, by which he means a tactical withdrawal by people of faith from the mainstream culture into religious communities where they will seek to nurture and strengthen the faithful for reemergence and reengagement at a later date….

The problem with this view is that it underestimates the hostility of the new, non-Christian society [e.g., this and this]….

Leaders of this [new, non-Christian] society will not leave Christians alone if we simply surrender the public square to them. And they will deny they are persecuting anyone for simply applying the law to revoke tax exemptions, force the hiring of nonbelievers, and even jail those who fail to abide by laws they consider eminently reasonable, fair, and just.

Exactly. John Horvat II makes the same point:

For [Dreher], the only response that still remains is to form intentional communities amid the neo-barbarians to “provide an unintentional political witness to secular culture,” which will overwhelm the barbarian by the “sheer humanity of Christian compassion, and the image of human dignity it honors.” He believes that setting up parallel structures inside society will serve to protect and preserve Christian communities under the new neo-barbarian dispensation. We are told we should work with the political establishment to “secure and expand the space within which we can be ourselves and our own institutions” inside an umbrella of religious liberty.

However, barbarians don’t like parallel structures; they don’t like structures at all. They don’t co-exist well with anyone. They don’t keep their agreements or respect religious liberty. They are not impressed by the holy lives of the monks whose monastery they are plundering. You can trust barbarians to always be barbarians. [“Is the Benedict Option the Answer to Neo-Barbarianism?” Crisis Magazine, March 29, 2017]

As I say in “The Authoritarianism of Modern Liberalism, and the Conservative Antidote,”

Modern liberalism attracts persons who wish to exert control over others. The stated reasons for exerting control amount to “because I know better” or “because it’s good for you (the person being controlled)” or “because ‘social justice’ demands it.”

Leftists will not countenance a political arrangement that allows anyone to escape the state’s grasp — unless, of course, the state is controlled by the “wrong” party, in which case leftists (or many of them) would like to exercise their own version of the Benedict Option. See “Polarization and De Facto Partition.”

*     *     *

Theodore Dalrymple understands the difference between terrorism and accidents:

Statistically speaking, I am much more at risk of being killed when I get into my car than when I walk in the streets of the capital cities that I visit. Yet this fact, no matter how often I repeat it, does not reassure me much; the truth is that one terrorist attack affects a society more deeply than a thousand road accidents….

Statistics tell me that I am still safe from it, as are all my fellow citizens, individually considered. But it is precisely the object of terrorism to create fear, dismay, and reaction out of all proportion to its volume and frequency, to change everyone’s way of thinking and behavior. Little by little, it is succeeding. [“How Serious Is the Terrorist Threat?” City Journal, March 26, 2017]

Which reminds me of several things I’ve written, beginning with this entry from “Not-So-Random Thoughts (VI)”:

Cato’s loony libertarians (on matters of defense) once again trot out Herr Doktor Professor John Mueller. He writes:

We have calculated that, for the 12-year period from 1999 through 2010 (which includes 9/11, of course), there was one chance in 22 million that an airplane flight would be hijacked or otherwise attacked by terrorists. (“Serial Innumeracy on Homeland Security,” Cato@Liberty, July 24, 2012)

Mueller’s “calculation” consists of a recitation of known terrorist attacks pre-Benghazi and speculation about the status of Al-Qaeda. Note to Mueller: It is the unknown unknowns that kill you. I refer Herr Doktor Professor to “Riots, Culture, and the Final Showdown” and “Mission Not Accomplished.”

See also my posts “Getting It All Wrong about the Risk of Terrorism” and “A Skewed Perspective on Terrorism.”

*     *     *

This is from my post, “A Reflection on the Greatest Generation”:

The Greatest tried to compensate for their own privations by giving their children what they, the parents, had never had in the way of material possessions and “fun”. And that is where the Greatest Generation failed its children — especially the Baby Boomers — in large degree. A large proportion of Boomers grew up believing that they should have whatever they want, when they want it, with no strings attached. Thus many of them divorced, drank, and used drugs almost wantonly….

The Greatest Generation — having grown up believing that FDR was a secular messiah, and having learned comradeship in World War II — also bequeathed us governmental self-indulgence in the form of the welfare-regulatory state. Meddling in others’ affairs seems to be a predilection of the Greatest Generation, a predilection that the Millennials may be shrugging off.

We owe the Greatest Generation a great debt for its service during World War II. We also owe the Greatest Generation a reprimand for the way it raised its children and kowtowed to government. Respect forbids me from delivering the reprimand, but I record it here, for the benefit of anyone who has unduly romanticized the Greatest Generation.

There’s more in “The Spoiled Children of Capitalism”:

This is from Tim [of Angle’s] “The Spoiled Children of Capitalism”:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it….

I have long shared Tim’s assessment of the Boomer generation. Among the corroborating data are my sister and my wife’s sister and brother — Boomers all….

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs.”…

Now comes this:

According to writer and venture capitalist Bruce Gibney, baby boomers are a “generation of sociopaths.”

In his new book, he argues that their “reckless self-indulgence” is in fact what set the example for millennials.

Gibney describes boomers as “acting without empathy, prudence, or respect for facts – acting, in other words, as sociopaths.”

And he’s not the first person to suggest this.

Back in 1976, journalist Tom Wolfe dubbed the young adults then coming of age the “Me Generation” in the New York Times, which is a term now widely used to describe millennials.

But the baby boomers grew up in a very different climate to today’s young adults.

When the generation born after World War Two were starting to make their way in the world, it was a time of economic prosperity.

“For the first half of the boomers particularly, they came of age in a time of fairly effortless prosperity, and they were conditioned to think that everything gets better each year without any real effort,” Gibney explained to The Huffington Post.

“So they really just assume that things are going to work out, no matter what. That’s unhelpful conditioning.

“You have 25 years where everything just seems to be getting better, so you tend not to try as hard, and you have much greater expectations about what society can do for you, and what it owes you.”…

Gibney puts forward the argument that boomers – specifically white, middle-class ones – tend to have genuine sociopathic traits.

He backs up his argument with mental health data which appears to show that this generation have more anti-social characteristics than others – lack of empathy, disregard for others, egotism and impulsivity, for example. [Rachel Hosie, “Baby Boomers Are a Generation of Sociopaths,” Independent, March 23, 2017]

That’s what I said.

More about Intelligence

Do genes matter? You betcha! See geneticist Gregory Cochran’s “Everything Is Different but the Same” and “Missing Heritability — Found?” (Useful Wikipedia articles for explanations of terms used by Cochran: “Genome-wide association study,” “Genetic load,” and “Allele.”) Snippets:

Another new paper finds that the GWAS hits for IQ – largely determined in Europeans – don’t work in people of African descent.

*     *     *

There is an interesting new paper out on genetics and IQ. The claim is that they have found the missing heritability – in rare variants, generally different in each family.

Cochran, in typical fashion, ends the second item with a bombastic put-down of the purported dysgenic trend, about which I’ve written here.

Psychologist James Thompson seems to put stock in the dysgenic trend. See, for example, his post “The Woodley Effect”:

[W]e could say that the Flynn Effect is about adding fertilizer to the soil, whereas the Woodley Effect is about noting the genetic quality of the plants. In my last post I described the current situation thus: The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

But Thompson joins Cochran in his willingness to accept what the data show, namely, that there are strong linkages between race and intelligence. See, for example, “County IQs and Their Consequences” (and my related post). Thompson writes:

[I]n social interaction it is not always either possible or desirable to make intelligence estimates. More relevant is to look at technical innovation rates, patents, science publications and the like…. If there were no differences [in such] measures, then the associations between mental ability and social outcomes would be weakened, and eventually disconfirmed. However, the general link between national IQs and economic outcomes holds up pretty well….

… Smart fraction research suggests that the impact of the brightest persons in a national economy has a disproportionately positive effect on GDP. Rindermann and I have argued, following others, that the brightest 5% of every country make the greatest contribution by far, though of course many others of lower ability are required to implement the discoveries and strategies of the brightest.

Though Thompson doesn’t directly address race and intelligence in “10 Replicants in Search of Fame,” he leaves no doubt about the dominance of genes over environment in the determination of traits; for example:

[A] review of the world’s literature on intelligence that included 10,000 pairs of twins showed identical twins to be significantly more similar than fraternal twins (twin correlations of about .85 and .60, respectively), with corroborating results from family and adoption studies, implying significant genetic influence….

Some traits, such as individual differences in height, yield heritability as high as 90%. Behavioural traits are less reliably measured than physical traits such as height, and error of measurement contributes to nonheritable variance….

[A] review of 23 twin studies and 12 family studies confirmed that anxiety and depression are correlated entirely for genetic reasons. In other words, the same genes affect both disorders, meaning that from a genetic perspective they are the same disorder. [I have personally witnessed this effect: TEA.]…

The heritability of intelligence increases throughout development. This is a strange and counter-intuitive finding: one would expect the effects of learning to accumulate with experience, increasing the strength of the environmental factor, but the opposite is true….

[M]easures of the environment widely used in psychological science—such as parenting, social support, and life events—can be treated as dependent measures in genetic analyses….

In sum, environments are partly genetically-influenced niches….

People to some extent make their own environments….

[F]or most behavioral dimensions and disorders, it is genetics that accounts for similarity among siblings.
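The twin correlations quoted above (about .85 for identical twins and .60 for fraternal twins) can be turned into a rough heritability estimate with Falconer’s formula, h² = 2(r_MZ − r_DZ). This is a textbook back-of-envelope method, not necessarily the one used in the review Thompson cites, but it shows why those two correlations imply substantial genetic influence:

```python
# Back-of-envelope heritability from the twin correlations quoted above,
# using Falconer's formula: h^2 = 2 * (r_MZ - r_DZ).
# This is a standard approximation, not the cited review's own method.

r_mz = 0.85  # correlation between identical (monozygotic) twins
r_dz = 0.60  # correlation between fraternal (dizygotic) twins

h2 = 2 * (r_mz - r_dz)  # heritability: genetic share of variance
c2 = r_mz - h2          # shared-environment share
e2 = 1 - r_mz           # non-shared environment plus measurement error

print(f"heritability h^2      ~ {h2:.2f}")  # ~ 0.50
print(f"shared environment c^2 ~ {c2:.2f}")  # ~ 0.35
print(f"non-shared/error e^2   ~ {e2:.2f}")  # ~ 0.15
```

A value around 0.50 is consistent with the broad range of heritability estimates for intelligence reported in the twin literature.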

In several of the snippets quoted above, Thompson is referring to a phenomenon known as genetic confounding, which is to say that genetic effects are often mistaken for environmental effects. Brian Boutwell and JC Barnes address an aspect of genetic confounding in “Is Crime Genetic? Scientists Don’t Know Because They’re Afraid to Ask.” A small sample:

The effects of genetic differences make some people more impulsive and shortsighted than others, some people more healthy or infirm than others, and, despite how uncomfortable it might be to admit, genes also make some folks more likely to break the law than others.

John Ray addresses another aspect of genetic confounding in “Blacks, Whites, Genes, and Disease,” where he comments about a recent article in the Journal of the American Medical Association:

It says things that the Left do not want to hear. But it says those things in verbose academic language that hides the point. So let me translate into plain English:

* The poor get more illness and die younger.
* Blacks get more illness than whites and die younger.
* Part of that difference is traceable to genetic differences between blacks and whites.
* But environmental differences — such as education — explain more than genetic differences do.
* Researchers often ignore genetics for ideological reasons.
* You don’t fully understand what is going on in an illness unless you know about any genetic factors that may be at work.
* Genetics research should pay more attention to blacks.

Most of those things I have been saying for years — with one exception:

They find that environmental factors have greater effect than genetics. But they do that by making one huge and false assumption. They assume that education is an environmental factor. It is not. Educational success is hugely correlated with IQ, which is about two-thirds genetic. High IQ people stay in the educational system for longer because they are better at it, whereas low IQ people (many of whom are blacks) just can’t do it at all. So if we treated education as a genetic factor, environmental differences would fade away as causes of disease. As Hans Eysenck once said to me in a casual comment: “It’s ALL genetic”. That’s not wholly true, but it comes close.

So the recommendation of the study — that we work on improving environmental factors that affect disease — is unlikely to achieve much. They are aiming their gun towards where the rabbit is not. If it were an actual rabbit, it would probably say: “What’s up Doc?”

Some problems are unfixable but knowing which problems they are can help us to avoid wasting resources on them. The black/white gap probably has no medical solution.

I return to James Thompson for a pair of less incendiary items. “The Secret in Your Eyes” points to a link between intelligence and pupil size. In “Group IQ Doesn’t Exist,” Thompson points out the fatuousness of the belief that a group is somehow more intelligent than the smartest member of the group. As Thompson puts it:

So, if you want a problem solved, don’t form a team. Find the brightest person and let [him] work on it. Placing [him] in a team will, on average, reduce [his] productivity. My advice would be: never form a team if there is one person who can sort out the problem.

Forcing the brightest person to act as a member of a team often results in the suppression of that person’s ideas by the (usually) more extroverted and therefore less intelligent members of the team.

Added 04/05/17: James Thompson issues a challenge to IQ-deniers in “IQ Does Not Exist (Lead Poisoning Aside)”:

[T]his study shows how a neuro-toxin can have an effect on intelligence, of similar magnitude to low birth weight….

[I]f someone tells you they do not believe in intelligence reply that you wish them well, but that if they have children they should keep them well away from neuro-toxins because, among other things, they reduce social mobility.

*     *     *

Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Round Up the Usual Suspects
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race
Affirmative Action Comes Home to Roost
The IQ of Nations
Race and Social Engineering

Mugged by Non-Reality

A wise man said that a conservative is a liberal who has been mugged by reality. Thanks to Malcolm Pollock, I’ve just learned that a liberal is a conservative whose grasp of reality has been erased, literally.

Actually, this is unsurprising news (to me). I have pointed out many times that the various manifestations of liberalism — from stifling regulation to untrammeled immigration — arise from the cosseted beneficiaries of capitalism (e.g., pundits, politicians, academicians, students) who are far removed from the actualities of producing real things for real people. This has turned their brains into a kind of mush that is fit only for hatching unrealistic but costly schemes that rest on a skewed vision of human nature.

Daylight Saving Time Doesn’t Kill…

…it’s “springing forward” in March that kills.

There’s a hue and cry about daylight saving time (that’s “saving” not “savings”). The main complaint seems to be the stress that results from moving clocks ahead in March:

Springing forward may be hazardous to your health. The Monday following the start of daylight saving time (DST) is a particularly bad one for heart attacks, traffic accidents, workplace injuries and accidental deaths. Now that most Americans have switched their clocks an hour ahead, studies show many will suffer for it.

Most Americans slept about 40 minutes less than normal on Sunday night, according to a 2009 study published in the Journal of Applied Psychology…. Since sleep is important for maintaining the body’s daily performance levels, much of society is broadly feeling the impact of less rest, which can include forgetfulness, impaired memory and a lower sex drive, according to WebMD.

One of the most striking effects of this annual shift: Last year, Colorado researchers reported finding a 25 percent increase in the number of heart attacks that occur on the Monday after DST starts, as compared with a normal Monday…. A cardiologist in Croatia recorded about twice as many heart attacks as expected during that same day, and researchers in Sweden have also witnessed a spike in heart attacks in the week following the time adjustment, particularly among those who were already at risk.

Workplace injuries are more likely to occur on that Monday, too, possibly because workers are more susceptible to a loss of focus due to too little sleep. Researchers at Michigan State University used over 20 years of data from the Mine Safety and Health Administration to determine that three to four more miners than average sustain a work-related injury on the Monday following the start of DST. Those injuries resulted in 2,649 lost days of work, which is a 68 percent increase over the hours lost from injuries on an average day. The team found no effects following the nation’s one-hour shift back to standard time in the fall….

There’s even more bad news: Drivers are more likely to be in a fatal traffic accident on DST’s first Monday, according to a 2001 study in Sleep Medicine. The authors analyzed 21 years of data on fatal traffic accidents in the U.S. and found that, following the start of DST, drivers are in 83.5 accidents as compared with 78.2 on the average Monday. This phenomenon has also been recorded in Canadian drivers and British motorists.

If all that wasn’t enough, a researcher from the University of British Columbia who analyzed three years of data on U.S. fatalities reported that accidental deaths of any kind are more likely in the days following a spring forward. Their 1996 analysis showed a 6.5 percent increase, which meant that about 200 more accidental deaths occurred immediately after the start of DST than would typically occur in a given period of the same length.
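The quoted figures can be sanity-checked with simple arithmetic. The traffic numbers (83.5 fatal accidents on the Monday after the shift versus 78.2 on an average Monday) imply an increase of about 6.8 percent, and the reported 6.5 percent rise in accidental deaths, equated to about 200 extra deaths, implies a baseline of roughly 3,000 deaths in a comparable period:

```python
# Sanity-check the statistics quoted above (all inputs are the
# figures reported in the cited studies, not new data).

# Fatal traffic accidents: Monday after the spring shift vs. an average Monday.
dst_monday, avg_monday = 83.5, 78.2
increase = (dst_monday - avg_monday) / avg_monday
print(f"traffic-accident increase: {increase:.1%}")  # ~ 6.8%

# A 6.5% rise in accidental deaths said to equal ~200 extra deaths
# implies a baseline of about 200 / 0.065 deaths in the same period.
baseline = 200 / 0.065
print(f"implied baseline deaths: {baseline:.0f}")  # ~ 3077
```

The numbers hang together, which is some reassurance that nothing was garbled in transcription.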

I’m convinced. But the solution to the problem isn’t to get rid of DST. No, the solution is to get rid of standard time and use DST year-around.

I’m not arguing for year-around DST from an economic standpoint. The evidence about the economic advantages of DST is inconclusive.

I’m arguing for year-around DST as a way to eliminate “spring forward” distress and enjoy an extra hour of daylight in the winter.

Don’t you enjoy those late summer sunsets? I sure do, and a lot of other people seem to enjoy them, too. That’s why daylight saving time won’t be abolished.

But if you love those late summer sunsets, you should also enjoy an extra hour of daylight at the end of a drab winter day. I know that I would. And it’s not as if you’d miss anything if the sun rose an hour later in winter: even under standard time, when winter sunrise comes an hour earlier than it would under DST, most working people and students have to be up and about before dawn anyway.
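The arithmetic of year-around DST is nothing more than a one-hour shift of the clock labels. Using hypothetical winter-solstice times (illustrative only, not taken from the table below):

```python
from datetime import datetime, timedelta

# Year-around DST shifts the clock labels of sunrise and sunset one hour
# later in winter. The standard-time values below are hypothetical,
# chosen only to illustrate the shift.

shift = timedelta(hours=1)

std_sunrise = datetime(2017, 12, 21, 7, 20)   # 7:20 a.m. standard time
std_sunset = datetime(2017, 12, 21, 16, 40)   # 4:40 p.m. standard time

dst_sunrise = std_sunrise + shift
dst_sunset = std_sunset + shift

print(dst_sunrise.strftime("%I:%M %p"))  # 08:20 AM -- a darker morning
print(dst_sunset.strftime("%I:%M %p"))   # 05:40 PM -- an extra hour of evening light
```

The trade is explicit: the lost morning hour falls mostly before people are outdoors anyway, while the gained evening hour falls when they are.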

How would year-around DST affect you? The following table gives the times of sunrise and sunset on the longest and shortest days of 2017 for nine major cities, north to south and west to east:

I report, you decide. If it were up to me, the decision would be year-around DST.