Science and Understanding

Special Relativity

I have removed my four posts about special relativity and incorporated them into a new page, “Einstein’s Errors.” I will update that page occasionally rather than post about special relativity, which is somewhat “off the subject” for this blog.

A True Scientist Speaks

I am reading, with great delight, Old Physics for New: A Worldview Alternative to Einstein’s Relativity Theory, by Thomas E. Phipps Jr. (1925-2016). Dr. Phipps was a physicist who happened to have been a member of a World War II operations research unit that evolved into the think-tank where I worked for 30 years.

Phipps challenged the basic tenets of Einstein’s special theory of relativity (STR) in Old Physics for New, in an earlier book (Heretical Verities: Mathematical Themes in Physical Description), and in many of his scholarly articles. I have drawn on Old Physics for New in two of my posts about STR (this and this), and will do so in future posts on the subject. But aside from STR, about which Phipps is refreshingly skeptical, I admire his honesty and clear-minded view of science.

Regarding Phipps’s honesty, I turn to his preface to the second edition of Old Physics for New:

[I]n the first edition I wrongly claimed awareness of two “crucial” experiments that would decide between Einstein’s special relativity theory and my proposed alternative. These two were (1) an accurate assessment of stellar aberration and (2) a measurement of light speed in orbit. Only the first of these is valid. The other was an error on my part, which I am obligated and privileged to correct here. [pp. xi-xii]

Phipps’s clear-minded view of science is evident throughout the book. In the preface, he scores a direct hit on pseudo-scientific faddism:

The attitude of the traditional scientist toward lies and errors has always been that it is his job to tell the truth and to eradicate mistakes. Lately, scientists, with climate science in the van, have begun openly to espouse an opposite view, a different paradigm, which marches under the black banner of “post-normal science.”

According to this new perception, before the scientist goes into his laboratory it is his duty, for the sake of mankind, to study the worldwide political situation and to decide what errors need promulgating and what lies need telling. Then he goes into his laboratory, interrogates his computer, fiddles his theory, fabricates or massages his data, etc., and produces the results required to support those predetermined lies and errors. Finally he emerges into the light of publicity and writes reports acceptable to like-minded bureaucrats in such government agencies as the National Science Foundation, offers interviews to reporters working for like-minded bosses in the media, testifies before Congress, etc., all in such a way as to suppress traditional science and ultimately to make it impossible….

In this way post-normal science wages pre-emptive war on what Thomas Kuhn famously called “normal science,” because the latter fails to promote with adequate zeal those political and social goals that the post-normal scientist happens to recognize as deserving promotion…. Post-normal behavior seamlessly blends the implacable arrogance of the up-to-date terrorist with the technique of The Big Lie, pioneered by Hitler and Goebbels…. [pp. xii-xiii]

I regret deeply that I never met or corresponded with Dr. Phipps.

Nature, Nurture, and Leniency

I recently came across an article by Brian Boutwell, “Why Parenting May not Matter and Why Most Social Science Research Is Probably Wrong” (Quillette, December 1, 2015). Boutwell is an associate professor of criminology and criminal justice at Saint Louis University. Here’s some of what he has to say about nature, nurture, and behavior:

Despite how it feels, your mother and father (or whoever raised you) likely imprinted almost nothing on your personality that has persisted into adulthood…. I do have evidence, though, and by the time we’ve strolled through the menagerie of reasons to doubt parenting effects, I think another point will also become evident: the problems with parenting research are just a symptom of a larger malady plaguing the social and health sciences. A malady that needs to be dealt with….

[L]et’s start with a study published recently in the prestigious journal Nature Genetics.1 Tinca Polderman and colleagues just completed the Herculean task of reviewing nearly all twin studies published by behavior geneticists over the past 50 years….

Genetic factors were consistently relevant, differentiating humans on a range of health and psychological outcomes (in technical parlance, human differences are heritable). The environment, not surprisingly, was also clearly and convincingly implicated….

[B]ehavioral geneticists make a finer grain distinction than most about the environment, subdividing it into shared and non-shared components. Not much is really complicated about this. The shared environment makes children raised together similar to each other. The term encompasses the typical parenting effects that we normally envision when we think about environmental variables. Non-shared influences capture the unique experiences of siblings raised in the same home; they make siblings different from one another….

Based on the results of classical twin studies, it just doesn’t appear that parenting—whether mom and dad are permissive or not, read to their kid or not, or whatever else—impacts development as much as we might like to think. Regarding the cross-validation that I mentioned, studies examining identical twins separated at birth and reared apart have repeatedly revealed (in shocking ways) the same thing: these individuals are remarkably similar when in fact they should be utterly different (they have completely different environments, but the same genes).3 Alternatively, non-biologically related adopted children (who have no genetic commonalities) raised together are utterly dissimilar to each other—despite in many cases having decades of exposure to the same parents and home environments.

One logical explanation for this is a lack of parenting influence for psychological development. Judith Rich Harris made this point forcefully in her book The Nurture Assumption…. As Harris notes, parents are not to blame for their children’s neuroses (beyond the genes they contribute to the manufacturing of that child), nor can they take much credit for their successful psychological adjustment. To put a finer point on what Harris argued, children do not transport the effects of parenting (whatever they might be) outside the home. The socialization of children certainly matters (remember, neither personality nor temperament is 100 percent heritable), but it is not the parents who are the primary “socializers”, that honor goes to the child’s peer group….

Is it possible that parents really do shape children in deep and meaningful ways? Sure it is…. The trouble is that most research on parenting will not help you in the slightest because it doesn’t control for genetic factors….

Natural selection has wired into us a sense of attachment for our offspring. There is no need to graft on beliefs about “the power of parenting” in order to justify our instinct that being a good parent is important. Consider this: what if parenting really doesn’t matter? Then what? The evidence for pervasive parenting effects, after all, looks like a foundation of sand likely to slide out from under us at any second. If your moral constitution requires that you exert god-like control over your kid’s psychological development in order to treat them with the dignity afforded any other human being, then perhaps it is time to recalibrate your moral compass…. If you want happy children, and you desire a relationship with them that lasts beyond when they’re old enough to fly the nest, then be good to your kids. Just know that it probably will have little effect on the person they will grow into.

Color me unconvinced. There’s a lot of hand-waving in Boutwell’s piece, but little in the way of crucial facts, such as:

  • How is behavior quantified?
  • Does the quantification account for all aspects of behavior (unlikely), or only those aspects that are routinely quantified (e.g., criminal convictions)?
  • Is it meaningful to say that about 50 percent of behavior is genetically determined, 45 percent is peer-driven, and 0-5 percent is due to “parenting” (as Judith Rich Harris does)? Which 50 percent, 45 percent, and 0-5 percent? And how does one add various types of behavior?
  • How does one determine (outside an unrealistic experiment) the extent to which “children do not transport the effects of parenting (whatever they might be) outside the home”?

The measurement of behavior can’t possibly be as rigorous and comprehensive as the measurement of intelligence. And even those researchers who are willing to countenance and estimate the heritability of intelligence give varying estimates of its magnitude, ranging from 50 to 80 percent.
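For readers who wonder where such percentages come from, the classical (and much-simplified) starting point is Falconer’s formula, which converts the observed correlations of identical (MZ) and fraternal (DZ) twins into a rough variance decomposition. The numbers below are purely illustrative, not figures from any study cited here:

```latex
h^2 \approx 2\,(r_{MZ} - r_{DZ})   % heritability
c^2 \approx 2\,r_{DZ} - r_{MZ}     % shared (family) environment
e^2 \approx 1 - r_{MZ}             % non-shared environment plus measurement error

% Illustration: r_{MZ} = 0.85 and r_{DZ} = 0.60 give
% h^2 \approx 0.50, \quad c^2 \approx 0.35, \quad e^2 \approx 0.15
```

The left-hand sides of these equations are only as trustworthy as the behavioral measurements that feed the correlations on the right — which is the point of the questions above.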

I wonder if Boutwell, Harris, et al. would like to live in a world in which parents quit teaching their children to obey the law; refrain from lying, stealing, and hurting others; honor their obligations; respect old people; treat babies with care; and work for a living (“money doesn’t grow on trees”).

Unfortunately, the world in which we live — even in the United States — seems more and more to resemble the kind of world in which parents have failed in their duty to inculcate in their children the values of honesty, respect, and hard work. This is from a post at Dyspepsia Generation, “The Spoiled Children of Capitalism” (no longer online):

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it. And, sadly, they passed their principles, if one may use the term so loosely, down the generations to the point where young people today are scarcely worth using for fertilizer.

In 1919, or 1929, or especially 1939, the adolescents of 1969 would have had neither the leisure nor the money to create the Woodstock Nation. But mommy and daddy shelled out because they didn’t want their little darlings to be caught short, and consequently their little darlings became the worthless whiners who voted for people like Bill Clinton and Barack Obama [and who were people like Bill Clinton and Barack Obama: ED.], with results as you see them. Now that history is catching up to them, a third generation of losers can think of nothing better to do than camp out on Wall Street in hopes that the Cargo will suddenly begin to arrive again.

Good luck with that.

I subscribe to the view that the rot set in after World War II. That rot, in the form of slackerism, is more prevalent now than it ever was. It is not for nothing that Gen Y is also known as the Boomerang Generation.

Nor is it surprising that campuses have become hotbeds of petulant and violent behavior. And it’s not just students, but also faculty and administrators — many of whom are boomers. Where were these people before the 1960s, when the boomers came of age? Do you suppose that their sudden emergence was the result of a massive genetic mutation that swept across the nation in the late 1940s? I doubt it very much.

Their sudden emergence was due to the failure of too many members of the so-called Greatest Generation to inculcate in their children the values of honesty, respect, and hard work. How does one do that? By being clear about expectations and by setting limits on behavior — limits that are enforced swiftly, unequivocally, and sometimes with the palm of a hand. When children learn that they can “get away” with dishonesty, disrespect, and sloth, guess what? They become dishonest, disrespectful, and slothful. They give vent to their disrespect through whining, tantrum-like behavior, and even violence.

The leniency that’s being shown toward campus jerks — students, faculty, and administrators — is especially disgusting to this pre-boomer. University presidents need to grow backbones. Campus and municipal police should be out in force, maintaining order and arresting whoever fails to provide a “safe space” for a speaker who might offend their delicate sensibilities. Disruptive and violent behavior should be met with expulsions, firings, and criminal charges.

“My genes made me do it” is neither a valid explanation nor an acceptable excuse.


Related reading: There is a page on Judith Rich Harris’s website with a long list of links to reviews, broadcast commentary, and other discussions of The Nurture Assumption. It is to Harris’s credit that she links to negative as well as positive views of her work.

Institutional Bias

Arnold Kling:

On the question of whether Federal workers are overpaid relative to private sector workers, [Justin Fox] writes,

The Federal Salary Council, a government advisory body composed of labor experts and government-employee representatives, regularly finds that federal employees make about a third less than people doing similar work in the private sector. The conservative American Enterprise Institute and Heritage Foundation, on the other hand, have estimated that federal employees make 14 percent and 22 percent more, respectively, than comparable private-sector workers….

… Could you have predicted ahead of time which organization’s “research” would find a result favorable to Federal workers and which organization would find unfavorable results? Of course you could. So how do you sustain the belief that normative economics and positive economics are distinct from one another, that economic research cleanly separates facts from values?

I saw institutional bias at work many times in my career as an analyst at a tax-funded think-tank. My first experience with it came in the first project to which I was assigned. The issue at hand was a hot one in those days: whether the defense budget should be altered to increase the size of the Air Force’s land-based tactical air (tacair) forces while reducing the size of the Navy’s carrier-based counterpart. The Air Force’s think-tank had issued a report favorable to land-based tacair (surprise!), so the Navy turned to its think-tank (where I worked). Our report favored carrier-based tacair (surprise!).

How could two supposedly objective institutions study the same issue and come to opposite conclusions? Analytical fraud abetted by overt bias? No, that would be too obvious to the “neutral” referees in the Office of the Secretary of Defense. (Why “neutral”? Read this.)

Subtle bias is easily introduced when the issue is complex, as the tacair issue was. Where would tacair forces be required? What payloads would fighters and bombers carry? How easy would it be to set up land bases? How vulnerable would they be to an enemy’s land and air forces? How vulnerable would carriers be to enemy submarines and long-range bombers? How close to shore could carriers approach? How much would new aircraft, bases, and carriers cost to buy and maintain? What kinds of logistical support would they need, and how much would it cost? And on and on.

Hundreds, if not thousands, of assumptions underlay the results of the studies. Analysts at the Air Force’s think-tank chose those assumptions that favored the Air Force; analysts at the Navy’s think-tank chose those assumptions that favored the Navy.

Why? Not because analysts’ jobs were at stake; they weren’t. Not because the Air Force and Navy directed the outcomes of the studies; they didn’t. They didn’t have to because “objective” analysts are human beings who want “their side” to win. When you work for an institution you tend to identify with it; its success becomes your success, and its failure becomes your failure.

The same was true of the “neutral” analysts in the Office of the Secretary of Defense. They knew which way Mr. McNamara leaned on any issue, and they found themselves drawn to the assumptions that would justify his biases.

And so it goes. Bias is a rampant and ineradicable aspect of human striving. It’s ever-present in the political arena. The current state of affairs in Washington, D.C., is just the tip of the proverbial iceberg.

The prevalence and influence of bias in matters that affect hundreds of millions of Americans is yet another good reason to limit the power of government.

Not-So-Random Thoughts (XX)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

In “The Capitalist Paradox Meets the Interest-Group Paradox,” I quote from Frédéric Bastiat’s “What Is Seen and What Is Not Seen”:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

This might also be called the law of unintended consequences. It explains why so much “liberal” legislation is passed: the benefits are focused on a particular group and are obvious (if overestimated); the costs are borne by taxpayers in general, many of whom fail to see that the sum of “liberal” legislation is a huge tax bill.

Ross Douthat understands:

[A] new paper, just released through the National Bureau of Economic Research, that tries to look at the Affordable Care Act in full. Its authors find, as you would expect, a substantial increase in insurance coverage across the country. What they don’t find is a clear relationship between that expansion and, again, public health. The paper shows no change in unhealthy behaviors (in terms of obesity, drinking and smoking) under Obamacare, and no statistically significant improvement in self-reported health since the law went into effect….

[T]he health and mortality data [are] still important information for policy makers, because [they] indicate[] that subsidies for health insurance are not a uniquely death-defying and therefore sacrosanct form of social spending. Instead, they’re more like other forms of redistribution, with costs and benefits that have to be weighed against one another, and against other ways to design a safety net. Subsidies for employer-provided coverage crowd out wages, Medicaid coverage creates benefit cliffs and work disincentives…. [“Is Obamacare a Lifesaver?,” The New York Times, March 29, 2017]

So does Roy Spencer:

In a theoretical sense, we can always work to make the environment “cleaner”, that is, reduce human pollution. So, any attempts to reduce the EPA’s efforts will be viewed by some as just cozying up to big, polluting corporate interests. As I heard one EPA official state at a conference years ago, “We can’t stop making the environment ever cleaner”.

The question no one is asking, though, is “But at what cost?”

It was relatively inexpensive to design and install scrubbers on smokestacks at coal-fired power plants to greatly reduce sulfur emissions. The cost was easily absorbed, and electricity rates were not increased that much.

The same is not true of carbon dioxide emissions. Efforts to remove CO2 from combustion byproducts have been extremely difficult, expensive, and with little hope of large-scale success.

There is a saying: don’t let perfect be the enemy of good enough.

In the case of reducing CO2 emissions to fight global warming, I could discuss the science which says it’s not the huge problem it’s portrayed to be — how warming is only progressing at half the rate forecast by those computerized climate models which are guiding our energy policy; how there have been no obvious long-term changes in severe weather; and how nature actually enjoys the extra CO2, with satellites now showing a “global greening” phenomenon with its contribution to increases in agricultural yields.

But it’s the economics which should kill the Clean Power Plan and the alleged Social “Cost” of Carbon. Not the science.

There is no reasonable pathway by which we can meet more than about 20% of global energy demand with renewable energy…the rest must come mostly from fossil fuels. Yes, renewable energy sources are increasing each year, usually because rate payers or taxpayers are forced to subsidize them by the government or by public service commissions. But global energy demand is rising much faster than renewable energy sources can supply. So, for decades to come, we are stuck with fossil fuels as our main energy source.

The fact is, the more we impose high-priced energy on the masses, the more it will hurt the poor. And poverty is arguably the biggest threat to human health and welfare on the planet. [“Trump’s Rollback of EPA Overreach: What No One Is Talking About,” Roy Spencer, Ph.D. [blog], March 29, 2017]

*     *     *

I mentioned the Benedict Option in “Independence Day 2016: The Way Ahead,” quoting Bruce Frohnen in tacit agreement:

[Rod] Dreher has been writing a good deal, of late, about what he calls the Benedict Option, by which he means a tactical withdrawal by people of faith from the mainstream culture into religious communities where they will seek to nurture and strengthen the faithful for reemergence and reengagement at a later date….

The problem with this view is that it underestimates the hostility of the new, non-Christian society [e.g., this and this]….

Leaders of this [new, non-Christian] society will not leave Christians alone if we simply surrender the public square to them. And they will deny they are persecuting anyone for simply applying the law to revoke tax exemptions, force the hiring of nonbelievers, and even jail those who fail to abide by laws they consider eminently reasonable, fair, and just.

Exactly. John Horvat II makes the same point:

For [Dreher], the only response that still remains is to form intentional communities amid the neo-barbarians to “provide an unintentional political witness to secular culture,” which will overwhelm the barbarian by the “sheer humanity of Christian compassion, and the image of human dignity it honors.” He believes that setting up parallel structures inside society will serve to protect and preserve Christian communities under the new neo-barbarian dispensation. We are told we should work with the political establishment to “secure and expand the space within which we can be ourselves and our own institutions” inside an umbrella of religious liberty.

However, barbarians don’t like parallel structures; they don’t like structures at all. They don’t co-exist well with anyone. They don’t keep their agreements or respect religious liberty. They are not impressed by the holy lives of the monks whose monastery they are plundering. You can trust barbarians to always be barbarians. [“Is the Benedict Option the Answer to Neo-Barbarianism?,” Crisis Magazine, March 29, 2017]

As I say in “The Authoritarianism of Modern Liberalism, and the Conservative Antidote,”

Modern liberalism attracts persons who wish to exert control over others. The stated reasons for exerting control amount to “because I know better” or “because it’s good for you (the person being controlled)” or “because ‘social justice’ demands it.”

Leftists will not countenance a political arrangement that allows anyone to escape the state’s grasp — unless, of course, the state is controlled by the “wrong” party, in which case leftists (or many of them) would like to exercise their own version of the Benedict Option. See “Polarization and De Facto Partition.”

*     *     *

Theodore Dalrymple understands the difference between terrorism and accidents:

Statistically speaking, I am much more at risk of being killed when I get into my car than when I walk in the streets of the capital cities that I visit. Yet this fact, no matter how often I repeat it, does not reassure me much; the truth is that one terrorist attack affects a society more deeply than a thousand road accidents….

Statistics tell me that I am still safe from it, as are all my fellow citizens, individually considered. But it is precisely the object of terrorism to create fear, dismay, and reaction out of all proportion to its volume and frequency, to change everyone’s way of thinking and behavior. Little by little, it is succeeding. [“How Serious Is the Terrorist Threat?,” City Journal, March 26, 2017]

Which reminds me of several things I’ve written, beginning with this entry from “Not-So-Random Thoughts (VI)”:

Cato’s loony libertarians (on matters of defense) once again trot out Herr Doktor Professor John Mueller. He writes:

We have calculated that, for the 12-year period from 1999 through 2010 (which includes 9/11, of course), there was one chance in 22 million that an airplane flight would be hijacked or otherwise attacked by terrorists. (“Serial Innumeracy on Homeland Security,” Cato@Liberty, July 24, 2012)

Mueller’s “calculation” consists of a recitation of known terrorist attacks pre-Benghazi and speculation about the status of Al-Qaeda. Note to Mueller: It is the unknown unknowns that kill you. I refer Herr Doktor Professor to “Riots, Culture, and the Final Showdown” and “Mission Not Accomplished.”

See also my posts “Getting It All Wrong about the Risk of Terrorism” and “A Skewed Perspective on Terrorism.”

*     *     *

This is from my post, “A Reflection on the Greatest Generation”:

The Greatest tried to compensate for their own privations by giving their children what they, the parents, had never had in the way of material possessions and “fun”. And that is where the Greatest Generation failed its children — especially the Baby Boomers — in large degree. A large proportion of Boomers grew up believing that they should have whatever they want, when they want it, with no strings attached. Thus many of them divorced, drank, and used drugs almost wantonly….

The Greatest Generation — having grown up believing that FDR was a secular messiah, and having learned comradeship in World War II — also bequeathed us governmental self-indulgence in the form of the welfare-regulatory state. Meddling in others’ affairs seems to be a predilection of the Greatest Generation, a predilection that the Millennials may be shrugging off.

We owe the Greatest Generation a great debt for its service during World War II. We also owe the Greatest Generation a reprimand for the way it raised its children and kowtowed to government. Respect forbids me from delivering the reprimand, but I record it here, for the benefit of anyone who has unduly romanticized the Greatest Generation.

There’s more in “The Spoiled Children of Capitalism”:

This is from Tim [of Angle’s] “The Spoiled Children of Capitalism”:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it….

I have long shared Tim’s assessment of the Boomer generation. Among the corroborating data are my sister and my wife’s sister and brother — Boomers all….

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs.”…

Now comes this:

According to writer and venture capitalist Bruce Gibney, baby boomers are a “generation of sociopaths.”

In his new book, he argues that their “reckless self-indulgence” is in fact what set the example for millennials.

Gibney describes boomers as “acting without empathy, prudence, or respect for facts – acting, in other words, as sociopaths.”

And he’s not the first person to suggest this.

Back in 1976, journalist Tom Wolfe dubbed the young adults then coming of age the “Me Generation” in the New York Times, which is a term now widely used to describe millennials.

But the baby boomers grew up in a very different climate to today’s young adults.

When the generation born after World War Two were starting to make their way in the world, it was a time of economic prosperity.

“For the first half of the boomers particularly, they came of age in a time of fairly effortless prosperity, and they were conditioned to think that everything gets better each year without any real effort,” Gibney explained to The Huffington Post.

“So they really just assume that things are going to work out, no matter what. That’s unhelpful conditioning.

“You have 25 years where everything just seems to be getting better, so you tend not to try as hard, and you have much greater expectations about what society can do for you, and what it owes you.”…

Gibney puts forward the argument that boomers – specifically white, middle-class ones – tend to have genuine sociopathic traits.

He backs up his argument with mental health data which appears to show that this generation have more anti-social characteristics than others – lack of empathy, disregard for others, egotism and impulsivity, for example. [Rachel Hosie, “Baby Boomers Are a Generation of Sociopaths,” Independent, March 23, 2017]

That’s what I said.

More about Intelligence

Do genes matter? You betcha! See geneticist Gregory Cochran’s “Everything Is Different but the Same” and “Missing Heritability — Found?” (Useful Wikipedia articles for explanations of terms used by Cochran: “Genome-wide association study,” “Genetic load,” and “Allele.”) Snippets:

Another new paper finds that the GWAS hits for IQ – largely determined in Europeans – don’t work in people of African descent.

*     *     *

There is an interesting new paper out on genetics and IQ. The claim is that they have found the missing heritability – in rare variants, generally different in each family.

Cochran, in typical fashion, ends the second item with a bombastic put-down of the purported dysgenic trend, about which I’ve written here.

Psychologist James Thompson seems to put stock in the dysgenic trend. See, for example, his post “The Woodley Effect”:

[W]e could say that the Flynn Effect is about adding fertilizer to the soil, whereas the Woodley Effect is about noting the genetic quality of the plants. In my last post I described the current situation thus: The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

But Thompson joins Cochran in his willingness to accept what the data show, namely, that there are strong linkages between race and intelligence. See, for example, “County IQs and Their Consequences” (and my related post). Thompson writes:

[I]n social interaction it is not always either possible or desirable to make intelligence estimates. More relevant is to look at technical innovation rates, patents, science publications and the like…. If there were no differences [in such] measures, then the associations between mental ability and social outcomes would be weakened, and eventually disconfirmed. However, the general link between national IQs and economic outcomes holds up pretty well….

… Smart fraction research suggests that the impact of the brightest persons in a national economy has a disproportionately positive effect on GDP. Rindermann and I have argued, following others, that the brightest 5% of every country make the greatest contribution by far, though of course many others of lower ability are required to implement the discoveries and strategies of the brightest.

Though Thompson doesn’t directly address race and intelligence in “10 Replicants in Search of Fame,” he leaves no doubt about the dominance of genes over environment in the determination of traits; for example:

[A] review of the world’s literature on intelligence that included 10,000 pairs of twins showed identical twins to be significantly more similar than fraternal twins (twin correlations of about .85 and .60, respectively), with corroborating results from family and adoption studies, implying significant genetic influence….

Some traits, such as individual differences in height, yield heritability as high as 90%. Behavioural traits are less reliably measured than physical traits such as height, and error of measurement contributes to nonheritable variance….

[A] review of 23 twin studies and 12 family studies confirmed that anxiety and depression are correlated entirely for genetic reasons. In other words, the same genes affect both disorders, meaning that from a genetic perspective they are the same disorder. [I have personally witnessed this effect: TEA.]…

The heritability of intelligence increases throughout development. This is a strange and counter-intuitive finding: one would expect the effects of learning to accumulate with experience, increasing the strength of the environmental factor, but the opposite is true….

[M]easures of the environment widely used in psychological science—such as parenting, social support, and life events—can be treated as dependent measures in genetic analyses….

In sum, environments are partly genetically-influenced niches….

People to some extent make their own environments….

[F]or most behavioral dimensions and disorders, it is genetics that accounts for similarity among siblings.

In several of the snippets quoted above, Thompson is referring to a phenomenon known as genetic confounding, which is to say that genetic effects are often mistaken for environmental effects. Brian Boutwell and JC Barnes address an aspect of genetic confounding in “Is Crime Genetic? Scientists Don’t Know Because They’re Afraid to Ask.” A small sample:

The effects of genetic differences make some people more impulsive and shortsighted than others, some people more healthy or infirm than others, and, despite how uncomfortable it might be to admit, genes also make some folks more likely to break the law than others.

John Ray addresses another aspect of genetic confounding in “Blacks, Whites, Genes, and Disease,” where he comments about a recent article in the Journal of the American Medical Association:

It says things that the Left do not want to hear. But it says those things in verbose academic language that hides the point. So let me translate into plain English:

* The poor get more illness and die younger
* Blacks get more illness than whites and die younger
* Part of that difference is traceable to genetic differences between blacks and whites.
* But environmental differences — such as education — explain more than genetic differences do
* Researchers often ignore genetics for ideological reasons
* You don’t fully understand what is going on in an illness unless you know about any genetic factors that may be at work.
* Genetics research should pay more attention to blacks

Most of those things I have been saying for years — with one exception:

They find that environmental factors have greater effect than genetics. But they do that by making one huge and false assumption. They assume that education is an environmental factor. It is not. Educational success is hugely correlated with IQ, which is about two thirds genetic. High IQ people stay in the educational system for longer because they are better at it, whereas low IQ people (many of whom are blacks) just can’t do it at all. So if we treated education as a genetic factor, environmental differences would fade way as causes of disease. As Hans Eysenck once said to me in a casual comment: “It’s ALL genetic”. That’s not wholly true but it comes close

So the recommendation of the study — that we work on improving environmental factors that affect disease — is unlikely to achieve much. They are aiming their gun towards where the rabbit is not. If it were an actual rabbit, it would probably say: “What’s up Doc?”

Some problems are unfixable but knowing which problems they are can help us to avoid wasting resources on them. The black/white gap probably has no medical solution.

I return to James Thompson for a pair of less incendiary items. “The Secret in Your Eyes” points to a link between intelligence and pupil size. In “Group IQ Doesn’t Exist,” Thompson points out the fatuousness of the belief that a group is somehow more intelligent than the smartest member of the group. As Thompson puts it:

So, if you want a problem solved, don’t form a team. Find the brightest person and let [him] work on it. Placing [him] in a team will, on average, reduce [his] productivity. My advice would be: never form a team if there is one person who can sort out the problem.

Forcing the brightest person to act as a member of a team often results in the suppression of that person’s ideas by the (usually) more extroverted and therefore less-intelligent members of the team.

Added 04/05/17: James Thompson issues a challenge to IQ-deniers in “IQ Does Not Exist (Lead Poisoning Aside)”:

[T]his study shows how a neuro-toxin can have an effect on intelligence, of similar magnitude to low birth weight….

[I]f someone tells you they do not believe in intelligence reply that you wish them well, but that if they have children they should keep them well away from neuro-toxins because, among other things, they reduce social mobility.

*     *     *

Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Round Up the Usual Suspects
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race
Affirmative Action Comes Home to Roost
The IQ of Nations
Race and Social Engineering

Mugged by Non-Reality

A wise man said that a conservative is a liberal who has been mugged by reality. Thanks to Malcolm Pollock, I’ve just learned that a liberal is a conservative whose grasp of reality has been erased, literally.

Actually, this is unsurprising news (to me). I have pointed out many times that the various manifestations of liberalism — from stifling regulation to untrammeled immigration — arise from the cosseted beneficiaries of capitalism (e.g., pundits, politicians, academicians, students) who are far removed from the actualities of producing real things for real people. This has turned their brains into a kind of mush that is fit only for hatching unrealistic but costly schemes which rest upon a skewed vision of human nature.

Daylight Saving Time Doesn’t Kill…

…it’s “springing forward” in March that kills.

There’s a hue and cry about daylight saving time (that’s “saving” not “savings”). The main complaint seems to be the stress that results from moving clocks ahead in March:

Springing forward may be hazardous to your health. The Monday following the start of daylight saving time (DST) is a particularly bad one for heart attacks, traffic accidents, workplace injuries and accidental deaths. Now that most Americans have switched their clocks an hour ahead, studies show many will suffer for it.

Most Americans slept about 40 minutes less than normal on Sunday night, according to a 2009 study published in the Journal of Applied Psychology…. Since sleep is important for maintaining the body’s daily performance levels, much of society is broadly feeling the impact of less rest, which can include forgetfulness, impaired memory and a lower sex drive, according to WebMD.

One of the most striking effects of this annual shift: Last year, Colorado researchers reported finding a 25 percent increase in the number of heart attacks that occur on the Monday after DST starts, as compared with a normal Monday…. A cardiologist in Croatia recorded about twice as many heart attacks as expected during that same day, and researchers in Sweden have also witnessed a spike in heart attacks in the week following the time adjustment, particularly among those who were already at risk.

Workplace injuries are more likely to occur on that Monday, too, possibly because workers are more susceptible to a loss of focus due to too little sleep. Researchers at Michigan State University used over 20 years of data from the Mine Safety and Health Administration to determine that three to four more miners than average sustain a work-related injury on the Monday following the start of DST. Those injuries resulted in 2,649 lost days of work, which is a 68 percent increase over the hours lost from injuries on an average day. The team found no effects following the nation’s one-hour shift back to standard time in the fall….

There’s even more bad news: Drivers are more likely to be in a fatal traffic accident on DST’s first Monday, according to a 2001 study in Sleep Medicine. The authors analyzed 21 years of data on fatal traffic accidents in the U.S. and found that, following the start of DST, drivers are in 83.5 accidents as compared with 78.2 on the average Monday. This phenomenon has also been recorded in Canadian drivers and British motorists.

If all that wasn’t enough, a researcher from the University of British Columbia who analyzed three years of data on U.S. fatalities reported that accidental deaths of any kind are more likely in the days following a spring forward. Their 1996 analysis showed a 6.5 percent increase, which meant that about 200 more accidental deaths occurred immediately after the start of DST than would typically occur in a given period of the same length.

I’m convinced. But the solution to the problem isn’t to get rid of DST. No, the solution is to get rid of standard time and use DST year around.

I’m not arguing for year-around DST from an economic standpoint. The evidence about the economic advantages of DST is inconclusive.

I’m arguing for year-around DST as a way to eliminate “spring forward” distress and enjoy an extra hour of daylight in the winter.

Don’t you enjoy those late summer sunsets? I sure do, and a lot of other people seem to enjoy them, too. That’s why daylight saving time won’t be abolished.

But if you love those late summer sunsets, you should also enjoy an extra hour of daylight at the end of a drab winter day. I know that I would. And it’s not as if you’d miss anything if the sun rises an hour later in the winter. Even with standard time, most working people and students have to be up and about before sunrise in winter, even though sunrise comes an hour earlier than it would with DST.

How would year-around DST affect you? The following table gives the times of sunrise and sunset on the longest and shortest days of 2017 for nine major cities, north to south and west to east:

I report, you decide. If it were up to me, the decision would be year-around DST.
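If you want to check the numbers for your own city, the sketch below shows one way to generate such sunrise and sunset times. It is only a sketch: it assumes the third-party Python package astral (version 2 or later) behaves as its documentation describes, and the Austin coordinates are used purely as an example.

```python
import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

from astral import LocationInfo
from astral.sun import sun

# Illustrative location: Austin, Texas. Substitute your own coordinates and time zone.
city = LocationInfo("Austin", "USA", "America/Chicago", 30.27, -97.74)
tz = ZoneInfo(city.timezone)

# Longest and shortest days of 2017.
for day in (datetime.date(2017, 6, 21), datetime.date(2017, 12, 21)):
    times = sun(city.observer, date=day, tzinfo=tz)
    # These are the times as actually observed (DST in June, standard time in
    # December); year-around DST would shift the December figures one hour later.
    print(day, times["sunrise"].strftime("%H:%M"), times["sunset"].strftime("%H:%M"))
```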

Thoughts for the Day

Excerpts of recent correspondence.

Robots, and their functional equivalents in specialized AI systems, can either replace people or make people more productive. I suspect that the latter has been true in the realm of medicine — so far, at least. But I have seen reportage of robotic units that are beginning to perform routine, low-level work in hospitals. So, as usual, the first people to be replaced will be those with rudimentary skills, not highly specialized training. Will it go on from there? Maybe, but the crystal ball is as cloudy as an old-time London fog.

In any event, I don’t believe that automation is inherently a job-killer. The real job-killer consists of government programs that subsidize non-work — early retirement under Social Security, food stamps and other forms of welfare, etc. Automation has been in progress for eons, and with a vengeance since the second industrial revolution. But, on balance, it hasn’t killed jobs. It just pushes people toward new and different jobs that fit the skills they have to offer. I expect nothing different in the future, barring government programs aimed at subsidizing the “victims” of technological displacement.

*      *      *

It’s civil war by other means (so far): David Wasserman, “Purple America Has All but Disappeared” (The New York Times, March 8, 2017).

*      *      *

I know that most of what I write (even the non-political stuff) has a combative edge, and that I’m therefore unlikely to persuade people who disagree with me. I do it my way for two reasons. First, I’m too old to change my ways, and I’m not going to try. Second, in a world that’s seemingly dominated by left-wing ideas, it’s just plain fun to attack them. If what I write happens to help someone else fight the war on leftism — or if it happens to make a young person re-think a mindless commitment to leftism — that’s a plus.

*     *     *

I am pessimistic about the likelihood of cultural renewal in America. The populace is too deeply saturated with left-wing propaganda, which is injected from kindergarten through graduate school, with constant reinforcement via the media and popular culture. There are broad swaths of people — especially in low-income brackets — whose lives revolve around mindless escape from the mundane via drugs, alcohol, promiscuous sex, etc. Broad swaths of the educated classes have abandoned erudition and contemplation and taken up gadgets and entertainment.

The only hope for conservatives is to build their own “bubbles,” like those of effete liberals, and live within them. Even that will prove difficult as long as government (especially the Supreme Court) persists in storming the ramparts in the name of “equality” and “self-creation.”

*     *     *

I correlated Austin’s average temperatures in February and August. Here are the correlation coefficients for the following periods:

1854-2016 = 0.001
1875-2016 = -0.007
1900-2016 = 0.178
1925-2016 = 0.161
1950-2016 = 0.191
1975-2016 = 0.126

Of these correlations, only the one for 1900-2016 is statistically significant at the 0.05 level (less than a 5-percent chance of a random relationship). The correlations for 1925-2016 and 1950-2016 are fairly robust, and almost significant at the 0.05 level. The relationship for 1975-2016 is statistically insignificant. I conclude that there’s a positive relationship between February and August temperatures, but a weak one. A warm winter doesn’t necessarily presage an extra-hot summer in Austin.
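For anyone who wants to replicate this sort of check with his own weather records, here is a minimal sketch of the computation. The file name and column names (austin_temps.csv, year, feb_avg, aug_avg) are illustrative, not the actual source of the figures above.

```python
import pandas as pd
from scipy.stats import pearsonr

# Illustrative file and column names; substitute your own temperature records.
df = pd.read_csv("austin_temps.csv")  # columns: year, feb_avg, aug_avg

for start in (1854, 1875, 1900, 1925, 1950, 1975):
    subset = df[df["year"] >= start]
    r, p = pearsonr(subset["feb_avg"], subset["aug_avg"])
    # p < 0.05 means less than a 5-percent chance that a correlation this large
    # would turn up in unrelated series of the same length.
    print(f"{start}-2016: r = {r:.3f}, p = {p:.3f}")
```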

Is Consciousness an Illusion?

Scientists seem to have pinpointed the physical source of consciousness. But the execrable Daniel C. Dennett, for whom science is God, hasn’t read the memo. Dennett argues in his latest book, From Bacteria to Bach and Back: The Evolution of Minds, that consciousness is an illusion.

Another philosopher, Thomas Nagel, weighs in with a dissenting review of Dennett’s book. (Nagel is better than Dennett, but that’s faint praise.) Nagel’s review, “Is Consciousness an Illusion?,” appears in The New York Review of Books (March 9, 2017). Here are some excerpts:

According to the manifest image, Dennett writes, the world is

full of other people, plants, and animals, furniture and houses and cars…and colors and rainbows and sunsets, and voices and haircuts, and home runs and dollars, and problems and opportunities and mistakes, among many other such things. These are the myriad “things” that are easy for us to recognize, point to, love or hate, and, in many cases, manipulate or even create…. It’s the world according to us.

According to the scientific image, on the other hand, the world

is populated with molecules, atoms, electrons, gravity, quarks, and who knows what else (dark energy, strings? branes?)….

In an illuminating metaphor, Dennett asserts that the manifest image that depicts the world in which we live our everyday lives is composed of a set of user-illusions,

like the ingenious user-illusion of click-and-drag icons, little tan folders into which files may be dropped, and the rest of the ever more familiar items on your computer’s desktop. What is actually going on behind the desktop is mind-numbingly complicated, but users don’t need to know about it, so intelligent interface designers have simplified the affordances, making them particularly salient for human eyes, and adding sound effects to help direct attention. Nothing compact and salient inside the computer corresponds to that little tan file-folder on the desktop screen.

He says that the manifest image of each species is “a user-illusion brilliantly designed by evolution to fit the needs of its users.” In spite of the word “illusion” he doesn’t wish simply to deny the reality of the things that compose the manifest image; the things we see and hear and interact with are “not mere fictions but different versions of what actually exists: real patterns.” The underlying reality, however, what exists in itself and not just for us or for other creatures, is accurately represented only by the scientific image—ultimately in the language of physics, chemistry, molecular biology, and neurophysiology….

You may well ask how consciousness can be an illusion, since every illusion is itself a conscious experience—an appearance that doesn’t correspond to reality. So it cannot appear to me that I am conscious though I am not: as Descartes famously observed, the reality of my own consciousness is the one thing I cannot be deluded about….

According to Dennett, however, the reality is that the representations that underlie human behavior are found in neural structures of which we know very little. And the same is true of the similar conception we have of our own minds. That conception does not capture an inner reality, but has arisen as a consequence of our need to communicate to others in rough and graspable fashion our various competencies and dispositions (and also, sometimes, to conceal them)….

The trouble is that Dennett concludes not only that there is much more behind our behavioral competencies than is revealed to the first-person point of view—which is certainly true—but that nothing whatever is revealed to the first-person point of view but a “version” of the neural machinery….

I am reminded of the Marx Brothers line: “Who are you going to believe, me or your lying eyes?” Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”

Nagel’s counterargument would have been more compelling if he had relied on a simple metaphor like this one: Most drivers can’t describe in any detail the process by which an automobile converts the potential energy of gasoline to the kinetic energy that’s produced by the engine and then transmitted eventually to the automobile’s drive wheels. Instead, most drivers simply rely on the knowledge that pushing the start button will start the car. That knowledge may be shallow, but it isn’t illusory. If it were, an automobile would be a useless hulk sitting in the driver’s garage.

Some tough questions are in order, too. If consciousness is an illusion, where does it come from? Dennett is an out-and-out physicalist and strident atheist. It therefore follows that Dennett can’t believe in consciousness (the manifest image) as a free-floating spiritual entity that’s disconnected from physical reality (the scientific image). It must, in fact, be a representation of physical reality, even if a weak and flawed one.

Looked at another way, consciousness is the gateway to the scientific image. It is only through the deliberate, reasoned, fact-based application of consciousness that scientists have been able to roll back the mysteries of the physical world and improve the manifest image so that it more nearly resembles the scientific image. The gap will never be closed, of course. Even the most learned of human beings have only a tenuous grasp of physical reality in all of its myriad aspects. Nor will anyone ever understand what physical reality “really is” — it’s beyond apprehension and description. But that doesn’t negate the symbiosis of physical reality and consciousness.

*     *     *

Related posts:
Debunking “Scientific Objectivity”
A Non-Believer Defends Religion
Evolution as God?
The Greatest Mystery
What Is Truth?
The Improbability of Us
The Atheism of the Gaps
Demystifying Science
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
The Glory of the Human Mind
Mind, Cosmos, and Consciousness
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Words Fail Us
Hayek’s Anticipatory Account of Consciousness

Fine-Tuning in a Wacky Wrapper

The Unz Review hosts columnists who hold a wide range of views, including whacko-bizarro-conspiracy-theory-nut-job ones. Case in point: Kevin Barrett, who recently posted a review of David Ray Griffin’s God Exists But Gawd Does Not: From Evil to the New Atheism to Fine Tuning. Some things said by Barrett in the course of his review suggest that Griffin, too, holds whacko-bizarro-conspiracy-theory-nut-job views; for example:

In 2004 he published The New Pearl Harbor — which still stands as the single most important work on 9/11 — and followed it up with more than ten books expanding on his analysis of the false flag obscenity that shaped the 21st century.

Further investigation — a trip to Wikipedia — tells me that Griffin believes there is

a prima facie case for the contention that there must have been complicity from individuals within the United States and [he] joined the 9/11 Truth Movement in calling for an extensive investigation from the United States media, Congress and the 9/11 Commission. At this time, he set about writing his first book on the subject, which he called The New Pearl Harbor: Disturbing Questions About the Bush Administration and 9/11 (2004).

Part One of the book looks at the events of 9/11, discussing each flight in turn and also the behaviour of President George W. Bush and his Secret Service protection. Part Two examines 9/11 in a wider context, in the form of four “disturbing questions.” David Ray Griffin discussed this book and the claims within it in an interview with Nick Welsh, reported under the headline Thinking Unthinkable Thoughts: Theologian Charges White House Complicity in 9/11 Attack….

Griffin’s second book on the subject was a direct critique of the 9/11 Commission Report, called The 9/11 Commission Report: Omissions And Distortions (2005). Griffin’s article The 9/11 Commission Report: A 571-page Lie summarizes this book, presenting 115 instances of either omissions or distortions of evidence he claims are in the report, stating that “the entire Report is constructed in support of one big lie: that the official story about 9/11 is true.”

In his next book, Christian Faith and the Truth Behind 9/11: A Call to Reflection and Action (2006), he summarizes some of what he believes is evidence for government complicity and reflects on its implications for Christians. The Presbyterian Publishing Corporation, publishers of the book, noted that Griffin is a distinguished theologian and praised the book’s religious content, but said, “The board believes the conspiracy theory is spurious and based on questionable research.”

And on and on and on. The moral of which is this: If you already “know” the “truth,” it’s easy to weave together factual tidbits that seem to corroborate it. It’s an old game that any number of persons can play; for example: Mrs. Lincoln hired John Wilkes Booth to kill Abe; Woodrow Wilson was behind the sinking of the Lusitania, which “forced” him to ask for a declaration of war against Germany; FDR knew about Japan’s plans to bomb Pearl Harbor but did nothing so that he could then have a roundly applauded excuse to ask for a declaration of war on Japan; LBJ ordered the assassination of JFK; etc. Some of those bizarre plots have been “proved” by recourse to factual tidbits. I’ve no doubt that all of them could be “proved” in that way.

If that is so, you may well ask why I am writing about Barrett's review of Griffin's book. Because in the midst of Barrett's off-kilter observations (e.g., "the Nazi holocaust, while terrible, wasn't as incomparably horrible as it has been made out to be") there's a tantalizing passage:

Griffin’s Chapter 14, “Teleological Order,” provides the strongest stand-alone rational-empirical argument for God’s existence, one that should convince any open-minded person who is willing to invest some time in thinking about it and investigating the cited sources. This argument rests on the observation that at least 26 of the fundamental constants discovered by physicists appear to have been “fine tuned” to produce a universe in which complex, intelligent life forms could exist. A very slight variation in any one of these 26 numbers (including the strong force, electromagnetism, gravity, the mass difference between protons and neutrons, and many others) would produce a vastly less complex, rich, interesting universe, and destroy any possibility of complex life forms or intelligent observers. In short, the universe is indeed a miracle, in the sense of something indescribably wonderful and almost infinitely improbable. The claim that it could arise by chance (as opposed to intelligent design) is ludicrous.

Even the most dogmatic atheists who are familiar with the scientific facts admit this. Their only recourse is to embrace the multiple-universes interpretation of quantum physics, claim that there are almost infinitely many actual universes (virtually all of them uninteresting and unfit for life), and assert that we just happen to have gotten unbelievably lucky by finding ourselves in the one-universe-out-of-infinity-minus-one with all of the constants perfectly fine-tuned for our existence. But, they argue, we should not be grateful for this almost unbelievable luck — which is far more improbable than winning hundreds of multi-million-dollar lottery jackpots in a row. For our existence in an amazingly, improbably-wonderful-for-us universe is just a tautology, since we couldn’t possibly be in any of the vast, vast, vast majority of universes that we couldn’t possibly be in.

Griffin gently and persuasively points out that the multiple-universes defense of atheism is riddled with absurdities and inconsistencies. Occam’s razor definitively indicates that by far the best explanation of the facts is that the universe was created not just by an intelligent designer, but by one that must be considered almost supremely intelligent as well as almost supremely creative: a creative intelligence as far beyond Einstein-times-Leonardo-to-the-Nth-power as those great minds were beyond that of a common slug.

Fine-tuning is not a good argument for God’s existence. Here is a good argument for God’s existence:

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

Barrett (Griffin?) goes on:

Occam’s razor definitively indicates that by far the best explanation of the facts is that the universe was created not just by an intelligent designer, but by one that must be considered almost supremely intelligent as well as almost supremely creative: a creative intelligence as far beyond Einstein-times-Leonardo-to-the-Nth-power as those great minds were beyond that of a common slug.

Whoa! Occam’s razor indicates nothing of the kind:

Occam’s razor is used as a heuristic technique (discovery tool) to guide scientists in the development of theoretical models, rather than as an arbiter between published models. In the scientific method, Occam’s razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.

Barrett’s (Griffin’s?) hypothesis about the nature of the supremely intelligent being is unduly complicated. Not that the existence of God is a testable (falsifiable) hypothesis. It’s just a logical necessity, and should be left at that.

Scott Adams Understands Probability

A probability expresses the observed frequency of the occurrence of a well-defined event for a large number of repetitions of the event, where each repetition is independent of the others (i.e., random). Thus the probability that a fair coin will come up heads in, say, 100 tosses is approximately 0.5; that is, it will come up heads approximately 50 percent of the time. (In the penultimate paragraph of this post, I explain why I emphasize approximately.)

If a coin is tossed 100 times, what is the probability that it will come up heads on the 101st toss? There is no probability for that event because it hasn’t occurred yet. The coin will come up heads or tails, and that’s all that can be said about it.

Scott Adams, writing about the probability of being killed by an immigrant, puts it this way:

The idea that we can predict the future based on the past is one of our most persistent illusions. It isn’t rational (for the vast majority of situations) and it doesn’t match our observations. But we think it does.

The big problem is that we have lots of history from which to cherry-pick our predictions about the future. The only reason history repeats is because there is so much of it. Everything that happens today is bound to remind us of something that happened before, simply because lots of stuff happened before, and our minds are drawn to analogies.

…If you can rigorously control the variables of your experiment, you can expect the same outcomes almost every time [emphasis added].

You can expect a given outcome (e.g., heads) to occur approximately 50 percent of the time if you toss a coin a lot of times. But you won’t know the actual frequency (probability) until you measure it; that is, after the fact.

Here’s why. The statement that heads has a probability of 50 percent is a mathematical approximation, given that there are only two possible outcomes of a coin toss: heads or tails. While writing this post I used the RANDBETWEEN function of Excel 2016 to simulate ten 100-toss games of heads or tails, with the following results (number of heads per game): 55, 49, 49, 43, 43, 54, 47, 47, 53, 52. Not a single game yielded exactly 50 heads, and heads came up 492 times (not 500) in 1,000 tosses.

What is the point of a probability statement? What is it good for? It lets you know what to expect over the long run, for a large number of repetitions of a strictly defined event. Change the definition of the event, even slightly, and you can “probably” kiss its probability goodbye.

*     *     *

Related posts:
Fooled by Non-Randomness
Randomness Is Over-Rated
Beware the Rare Event
Some Thoughts about Probability
My War on the Misuse of Probability
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming

Not Just for Baseball Fans

I have substantially revised “Bigger, Stronger, and Faster — But Not Quicker?” I set out to test Dr. Michael Woodley’s hypothesis that reaction times have slowed since the Victorian era:

It seems to me that if Woodley’s hypothesis has merit, it ought to be confirmed by the course of major-league batting averages over the decades. Other things being equal, quicker reaction times ought to produce higher batting averages. Of course, there’s a lot to hold equal, given the many changes in equipment, playing conditions, player conditioning, “style” of the game (e.g., greater emphasis on home runs), and other key variables over the course of more than a century.

I conclude that my analysis

says nothing definitive about reaction times, even though it sheds a lot of light on the relative hitting prowess of American League batters over the past 116 years. (I’ll have more to say about that in a future post.)

It’s been great fun but it was just one of those things.

Sandwiched between those statements you’ll find much statistical meat (about baseball) to chew on.

Not-So Random Thoughts (XIX)

ITEM ADDED 12/18/16

Manhattan Contrarian takes on the partisan analysis of economic growth offered by Alan Blinder and Mark Watson, and endorsed (predictably) by Paul Krugman. Eight years ago, I took on an earlier analysis along the same lines by Dani Rodrik, which Krugman (predictably) endorsed. In fact, bigger government, which is the growth mantra of economists like Blinder, Watson, Rodrik, and (predictably) Krugman, is anti-growth. The combination of spending, which robs the private sector of resources, and regulations, which rob the private sector of options and initiative, is killing economic growth. You can read about it here.

*     *     *

Rania Gihleb and Kevin Lang say that assortative mating hasn’t increased. But even if it had, so what?

Is there a potential social problem that will  have to be dealt with by government because it poses a severe threat to the nation’s political stability or economic well-being? Or is it just a step in the voluntary social evolution of the United States — perhaps even a beneficial one?

In fact,

The best way to help the people … of Charles Murray’s Fishtown [of Coming Apart] — is to ignore the smart-educated-professional-affluent class. It’s a non-problem…. The best way to help the forgotten people of America is to unleash the latent economic power of the United States by removing the dead hand of government from the economy.

*     *     *

Anthropogenic global warming (AGW) is a zombie-like creature of pseudo-science. I’ve rung its death knell, as have many actual scientists. But it keeps coming back. Perhaps President Trump will drive a stake through its heart — or whatever is done to extinguish zombies. In the meantime, here’s more evidence that AGW is a pseudo-scientific hoax:

In conclusion, this synthesis of empirical data reveals that increases in the CO2 concentration has not caused temperature change over the past 38 years across the Tropics-Land area of the Globe. However, the rate of change in CO2 concentration may have been influenced to a statistically significant degree by the temperature level.

And still more:

[B]ased on [Patrick] Frank's work, when considering the errors in clouds and CO2 levels only, the error bars around that prediction are ±15˚C. This does not mean — thankfully — that it could be 19˚ warmer in 2100. Rather, it means the models are looking for a signal of a few degrees when they can't differentiate within 15˚ in either direction; their internal errors and uncertainties are too large. This means that the models are unable to validate even the existence of a CO2 fingerprint because of their poor resolution, just as you wouldn't claim to see DNA with a household magnifying glass.

And more yet:

[P]oliticians using global warming as a policy tool to solve a perceived problem is indeed a hoax. The energy needs of humanity are so large that Bjorn Lomborg has estimated that in the coming decades it is unlikely that more than about 20% of those needs can be met with renewable energy sources.

Whether you like it or not, we are stuck with fossil fuels as our primary energy source for decades to come. Deal with it. And to the extent that we eventually need more renewables, let the private sector figure it out. Energy companies are in the business of providing energy, and they really do not care where that energy comes from….

Scientists need to stop mischaracterizing global warming as settled science.

I like to say that global warming research isn't rocket science — it is actually much more difficult. At best it is dodgy science, because there are so many uncertainties that you can get just about any answer you want out of climate models just by using those uncertainties as a tuning knob.

*     *     *

Well, that didn’t take long. lawprof Geoffrey Stone said something reasonable a few months ago. Now he’s back to his old, whiny, “liberal” self. Because the Senate failed to take up the nomination of Merrick Garland to fill Antonin Scalia’s seat on the Supreme Court — which is the Senate’s constitutional prerogative, Stone is characterizing the action (or lack of it) as a “constitutional coup d’etat” and claiming that the eventual Trump nominee will be an “illegitimate interloper.” Ed Whelan explains why Stone is wrong here, and adds a few cents worth here.

*     *     *

BHO stereotypes Muslims by asserting that

Trump’s proposal to bar immigration by Muslims would make Americans less safe. How? Because more Muslims would become radicalized and acts of terrorism would therefore become more prevalent. Why would there be more radicalized Muslims? Because the Islamic State (IS) would claim that America has declared war on Islam, and this would not only anger otherwise peaceful Muslims but draw them to IS. Therefore, there shouldn’t be any talk of barring immigration by Muslims, nor any action in that direction….

Because Obama is a semi-black leftist — and “therefore” not a racist — he can stereotype Muslims with impunity. To put it another way, Obama can speak the truth about Muslims without being accused of racism (though he’d never admit to the truth about blacks and violence).

It turns out, unsurprisingly, that there’s a lot of truth in stereotypes:

A stereotype is a preliminary insight. A stereotype can be true, the first step in noticing differences. For conceptual economy, stereotypes encapsulate the characteristics most people have noticed. Not all heuristics are false.

Here is a relevant paper from Denmark.

Emil O. W. Kirkegaard and Julius Daugbjerg Bjerrekær. Country of origin and use of social benefits: A large, preregistered study of stereotype accuracy in Denmark. Open Differential Psychology….

The high accuracy of aggregate stereotypes is confirmed. If anything, the stereotypes held by Danish people about immigrants underestimate those immigrants' reliance on Danish benefits.

Regarding stereotypes about the criminality of immigrants:

Here is a relevant paper from the United Kingdom.

Noah Carl. Net Opposition to Immigrants of Different Nationalities Correlates Strongly with Their Arrest Rates in the UK. Open Quantitative Sociology and Political Science, November 10, 2016….

Public beliefs about immigrants and immigration are widely regarded as erroneous. Yet popular stereotypes about the respective characteristics of different groups are generally found to be quite accurate. The present study has shown that, in the UK, net opposition to immigrants of different nationalities correlates strongly with the log of immigrant arrest rates and the log of their arrest rates for violent crime.

The immigrants in question, in both papers, are Muslims — for what it’s worth.

*     *     *

ADDED 12/18/16:

I explained the phoniness of the Keynesian multiplier here, derived a true (strongly negative) multiplier here, and added some thoughts about the multiplier here. Economist Scott Sumner draws on the Japanese experience to throw more cold water on Keynesianism.

Hayek’s Anticipatory Account of Consciousness

I have almost finished reading F.A. Hayek's The Sensory Order, which was originally published in 1952. Chapter VI is "Consciousness and Conceptual Thought." In the section headed "The Functions of Consciousness," Hayek writes:

6.29.  …[I]t will be the pre-existing excitatory state of the higher centres [of the central nervous system] which will decide whether the evaluation of the new impulses [arising from stimuli external to the higher centres] will be of the kind characteristic of attention or consciousness. It will depend on the predisposition (or set) how fully the newly arriving impulses will be evaluated or whether they will be consciously perceived, and what the responses to them will be.

6.30.  It is probable that the processes in the highest centres which become conscious require the continuous support from nervous impulses originating at some source within the nervous system itself, such as the 'wakefulness centre' for whose existence a considerable amount of physiological evidence has been found. If this is so, it would seem probable also that it is these reinforcing impulses which, guided by the expectations evoked by pre-existing conditions, prepare the ground and decide on which of the new impulses the searchlight beam of full consciousness and attention will be focused. The stream of impulses which is thus strengthened becomes capable of dominating the processes in the highest centre, and of overruling and shutting out from full consciousness all the sensory signals which do not belong to the object on which attention is fixed, and which are not themselves strong enough (or perhaps not sufficiently in conflict with the underlying outline picture of the environment) to attract attention.

6.31.  There would thus appear to exist within the central nervous system a highest and most comprehensive centre at which at any one time only a limited group of coherent processes can be fully evaluated; where all these processes are related to the same spatial and temporal framework; where the 'abstract' or generic relations form a closely knit order in which individual objects are placed; and where, in addition, a close connexion with the instruments of communication has not only contributed a further and very powerful means of classification, but has also made it possible for the individual to participate in a social or conventional representation of the world which he shares with his fellows.

Now, 64 years later, comes a report which I first saw in an online article by Fiona MacDonald, “Harvard Scientists Think They’ve Pinpointed the Physical Source of Consciousness” (Science Alert, November 8, 2016):

Scientists have struggled for millennia to understand human consciousness – the awareness of one’s existence. Despite advances in neuroscience, we still don’t really know where it comes from, and how it arises.

But researchers think they might have finally figured out its physical origins, after pinpointing a network of three specific regions in the brain that appear to be crucial to consciousness.

It’s a pretty huge deal for our understanding of what it means to be human, and it could also help researchers find new treatments for patients in vegetative states.

“For the first time, we have found a connection between the brainstem region involved in arousal and regions involved in awareness, two prerequisites for consciousness,” said lead researcher Michael Fox from the Beth Israel Deaconess Medical Centre at Harvard Medical School.

“A lot of pieces of evidence all came together to point to this network playing a role in human consciousness.”

Consciousness is generally thought of as being comprised of two critical components – arousal and awareness.

Researchers had already shown that arousal is likely regulated by the brainstem – the portion of the brain that links up with the spinal cord – seeing as it regulates when we sleep and wake, and our heart rate and breathing.

Awareness has been more elusive. Researchers have long thought that it resides somewhere in the cortex – the outer layer of the brain – but no one has been able to pinpoint where.

Now the Harvard team has identified not only the specific brainstem region linked to arousal, but also two cortex regions, that all appear to work together to form consciousness.

A full account of the research is given by David B. Fischer M.D. et al. in “A Human Brain Network Derived from Coma-Causing Brainstem Lesions” (Neurology, published online November 4, 2016, ungated version available here).

Hayek isn’t credited in the research paper. But he should be, for pointing the way to a physiological explanation of consciousness that finds it centered in the brain and not in that mysterious emanation called “mind.”

The IQ of Nations

In a twelve-year-old post, “The Main Causes of Prosperity,” I drew on statistics (sourced and described in the post) to find a statistically significant relationship between a nation’s real, per-capita GDP and three variables:

Y = –23,518 + 2,316L – 259T + 253I

Where,
Y = GDP in 1998 dollars (U.S.)
L = Index for rule of law
T = Index for mean tariff rate
I = Verbal IQ

The r-squared of the regression equation is 0.89, and the p-values for the intercept and independent variables are 8.52E-07, 4.70E-10, 1.72E-04, and 3.96E-05, respectively.
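
For readers who want to see the mechanics, a regression of this form can be reproduced with a few lines of code, given a table of the underlying country data. The sketch below is illustrative only; it assumes a hypothetical file, nations.csv, with columns gdp, rule_of_law, tariff, and verbal_iq (the file and column names are mine, not those of the original data sources):

import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per country with the four variables used above.
df = pd.read_csv("nations.csv")  # columns: gdp, rule_of_law, tariff, verbal_iq

X = sm.add_constant(df[["rule_of_law", "tariff", "verbal_iq"]])
model = sm.OLS(df["gdp"], X).fit()

# The fitted constant and coefficients correspond to the intercept and the
# L, T, and I terms in the equation above; summary() also reports r-squared
# and the p-values of the kind quoted in the text.
print(model.summary())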

The effect of IQ, by itself, is strong enough to merit a place of honor:

[Figure: Per-capita GDP vs. average verbal IQ]

Another relationship struck me when I revisited the IQ numbers. There seems to be a strong correlation between IQ and distance from the equator. That correlation, however, may be an artifact of the strong (negative) correlation between blackness and IQ: The countries whose citizens are predominantly black are generally closer to the equator than the countries whose citizens are predominantly of other races.

Because of the strong (negative) correlation between blackness and IQ, and the geographic grouping of predominantly black countries, distance from the equator and racial composition are themselves highly correlated. As a result, it's not possible to find a statistically significant regression equation that accounts for national IQ as a function of both the distance of nations from the equator and their dominant racial composition; the two variables' separate effects can't be disentangled.

The most significant regression equation omits distance from the equator and admits race:

I = 84.0 – 13.2B + 12.4W + 20.7EA

Where,
I = national average IQ
B = predominantly black
W = predominantly white (i.e., residents are European or of European origin)
EA = East Asian (China, Hong Kong, Japan, Mongolia, South Korea, Taiwan, and Singapore, which is largely populated by persons of Chinese descent)

The r-squared of the equation is 0.78, and the p-values of the intercept and coefficients are all less than 1E-17. The p-value of the equation's F-statistic is 8.24E-51. The standard error of the estimate is 5.6, which means that the 95-percent confidence interval (roughly two standard errors) is plus or minus 11 — a smaller number than any of the coefficients.

The intercept applies to all “other” countries that aren’t predominantly black, white, or East Asian in their racial composition. There are 66 such countries in the sample, which comprises 159 countries. The 66 “other” countries span the Middle East; North Africa; South Asia; Southeast Asia; island-states in Indian, Pacific, and Atlantic Oceans; and most of the nations of Central and South America and the Caribbean. Despite the range of racial and ethnic mixtures in those 66 countries, their average IQs cluster fairly tightly around 84. By the same token, there’s a definite clustering of the black countries around 71 (84.0 – 13.2), of the white countries around 96 (84.0 + 12.4), and of the East Asian countries around 105 (84.0 + 20.7).
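
For readers who want to see how the dummy-variable regression maps onto those group averages, here is a minimal illustrative sketch. It assumes a hypothetical file, national_iq.csv, with each country's average IQ and a predominant-group label (the file, column, and label names are mine); with "other" as the omitted category, the intercept estimates the "other" mean and each coefficient estimates a group's offset from it:

import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per country, with its estimated national IQ and
# a predominant-group label ("black", "white", "east_asian", or "other").
df = pd.read_csv("national_iq.csv")  # columns: iq, group

# Build 0/1 indicators for black, white, and East Asian countries; "other" is
# the omitted baseline, so the intercept estimates the mean IQ of the "other"
# countries and each coefficient estimates a group's deviation from that mean.
dummies = pd.get_dummies(df["group"])[["black", "white", "east_asian"]].astype(float)
X = sm.add_constant(dummies)
model = sm.OLS(df["iq"], X).fit()

print(model.params)  # intercept ~ "other" mean; coefficients ~ group offsets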

Thus this graph, where each “row” (from bottom to top) corresponds to black, “other,” white, and East Asian:

[Figure: Estimated vs. actual national IQ]

The dotted line represents a perfect correlation. The regression yields a less-than-perfect relationship between race and IQ, but a strong one. That strong relationship is also seen in the following graph:

[Figure: National IQ vs. distance from the equator]

There’s a definite pattern — if a somewhat loose one — that goes from low-IQ black countries near the equator to higher IQ white countries farther from the equator. The position  of East Asian countries, which is toward the middle latitudes rather than the highest ones, points to something special in the relationship between East Asian genetic heritage and IQ.

*     *     *

Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race

Words Fail Us

Regular readers of this blog know that I seldom use “us” and “we.” Those words are too often appropriated by writers who say such things as “we the people,” and who characterize as “society” the geopolitical entity known as the United States. There is no such thing as “we the people,” and the United States is about as far from being a “society” as Hillary Clinton is from being president (I hope).

There are nevertheless some things that are so close to being universal that it’s fair to refer to them as characteristics of “us” and “we.” The inadequacy of language is one of those things.

Why is that the case? Try to describe in words a person who is beautiful or handsome to you, and why. It's hard to do, if not impossible. There's something about the combination of that person's features, coloring, expression, etc., that defies anything like a complete description. You may have an image of that person in your mind, and you may know that — to you — the person is beautiful or handsome. But you just can't capture in words all of those attributes. Why? Because the person's beauty or handsomeness is a whole thing. It's everything taken together, including subtle things that nestle in your subconscious mind but don't readily swim to the surface. One such thing could be the relative size of the person's upper and lower lips in the context of that particular person's face, whereas the same lips on another face might convey plainness or ugliness.

Words are inadequate because they describe one thing at a time — the shape of a nose, the slant of a brow, the prominence of a cheekbone. And the sum of those words isn’t the same thing as your image of the beautiful or handsome person. In fact, the sum of those words may be meaningless to a third party, who can’t begin to translate your words into an image of the person you think of as beautiful or handsome.

Yes, there are (supposedly) general rules about beauty and handsomeness. One of them is the symmetry of a person’s features. But that leaves a lot of ground uncovered. And it focuses on one aspect of a person’s face, rather than all of its aspects, which are what you take into account when you judge a person beautiful or handsome.

And, of course, there are many disagreements about who is beautiful or handsome. It’s a matter of taste. Where does the taste come from? Who knows? I have a theory about why I prefer dark-haired women to women whose hair is blonde, red, or medium-to-light brown: My mother was dark-haired, and photographs of her show that she was beautiful (in my opinion) as a young woman. (Despite that, I never thought of her as beautiful because she was just Mom to me.) You can come up with your own theories — and I expect that no two of them will be the same.

What about facts? Isn’t it possible to put facts into words? Not really, and for much the same reason that it’s impossible to describe beauty, handsomeness, love, hate, or anything “subjective” or “emotional.” Facts, at bottom, are subjective, and sometimes even emotional.

Let’s take a “fact” at random: the color red. We can all agree as to whether something looks red, can’t we? Even putting aside people who are color-blind, the answer is: not necessarily. For one thing red is defined as having a “predominant light wavelength of roughly 620–740 nanometers.” “Predominant” and “roughly” are weasel-words. Clearly, there’s no definite point on the visible spectrum where light changes from orange to red. If you think there is, just look at this chart and tell me where it happens. So red comes in shades, which various people describe variously: orange-red and reddish-orange, for example.

Not only that, but the visible spectrum

does not … contain all the colors that the human eyes and brain can distinguish. Unsaturated colors such as pink, or purple variations such as magenta, are absent, for example, because they can be made only by a mix of multiple wavelengths.

Thus we have magenta, fuchsia, blood-red, scarlet, crimson, vermillion, maroon, ruby, and even the many shades of pink — some are blends, some are represented by narrow segments of the light spectrum. Do all of those kinds of red have a clear definition, or are they defined by the beholder? Well, some may be easy to distinguish from others, but the distinctions between them remain arbitrary. Where does scarlet or magenta become vermillion?

In any event, how do you describe a color (whatever you call it) in words? Referring to its wavelength or composition in terms of other colors or its relation to other colors is no help. Wavelength really is meaningless unless you can show an image of the visible spectrum to someone who perceives colors exactly as you do, and point to red — or what you call red. In doing so, you will have pointed to a range of colors, not to red, because there is no red red and no definite boundary between orange and red (or yellow and orange, or green and yellow, etc.).

Further, you won’t have described red in words. And you can’t — without descending into tautologies — because red (as you visualize it) is what’s in your mind. It’s not an objective fact.

My point is that description isn’t the same as definition. You can define red (however vaguely) as a color which has a predominant light wavelength of roughly 620–740 nanometers. But you can’t describe it. Why? Because red is just a concept.

A concept isn’t a real thing that you can see, hear, taste, touch, smell, eat, drink from, drive, etc. How do you describe a concept? You define it in terms of other concepts.

Moving on from color, I’ll take gross domestic product (GDP) as another example. GDP is an estimate of the dollar value of the output of finished goods and services produced in the United States during a particular period of time. Wow, what a string of concepts. And every one of them must be defined, in turn. Some of them can be illustrated by referring to real things; a haircut is a kind of service, for example. But it’s impossible to describe GDP and its underlying concepts because they’re all abstractions, or representations of indescribable conglomerations of real things.

All right, you say, it’s impossible to describe concepts, but surely it’s possible to describe things. People do it all the time. See that ugly, dark-haired, tall guy standing over there? I’ve already dealt with ugly, indirectly, in my discussion of beauty or handsomeness. Ugliness, like beauty, is just a concept, the idea of which differs from person to person. What about tall? It’s a relative term, isn’t it? You can measure a person’s height, but whether or not you consider him tall depends on where and when you live and the range of heights you’re used to encountering. A person who seems tall to you may not seem tall to your taller brother. Dark-haired will evoke different pictures in different minds — ranging from jet-black to dark brown and even auburn.

But if you point to the guy you call ugly, dark-haired, tall guy, I may agree with you that he’s ugly, dark-haired, and tall. Or I may disagree with you, but gain some understanding of what you mean by ugly, dark-haired, and tall.

And therein lies the tale of how people are able to communicate with each other, despite their inability to describe concepts or to define them without going in endless circles and chains of definitions. First, human beings possess central nervous systems and sensory organs that are much alike, though within a wide range of variations (e.g., many people must wear glasses with an almost-infinite variety of corrections, hearing aids are programmed to an almost-infinite variety of settings, sensitivity to touch varies widely, reaction times vary widely). Nevertheless, most people seem to perceive the same color when light with a wavelength of, say, 700 nanometers strikes the retina. The same goes for sounds, tastes, smells, etc., as various external stimuli are detected by various receptors. Those perceptions then acquire agreed definitions through acculturation. For example, an object that reflects light with a wavelength of 700 nanometers becomes known as red; a sound with a certain frequency becomes known as middle C; a certain taste is characterized as bitter, sweet, or sour.

Objects acquire names in the same way; for example, a square piece of cloth that's wrapped around a person's head or neck becomes a bandana, and a longish, curved, yellow-skinned fruit with a soft interior becomes a banana. And so I can visualize a woman wearing a red bandana and eating a banana.

There is less agreement about “soft” concepts (e.g., beauty) because they’re based not just on “hard” facts (e.g., the wavelength of light), but on judgments that vary from person to person. A face that’s cute to one person may be beautiful to another person, but there’s no rigorous division between cute and beautiful. Both convey a sense of physical attractiveness that many persons will agree upon, but which won’t yield a consistent image. A very large percentage of Caucasian males (of a certain age) would agree that Ingrid Bergman and Hedy Lamarr were beautiful, but there’s nothing like a consensus about Katharine Hepburn (perhaps striking but not beautiful) or Jean Arthur (perhaps cute but not beautiful).

Other concepts, like GDP, acquire seemingly rigorous definitions, but they're based on strings of seemingly rigorous definitions, the underpinnings of which may be as squishy as the flesh of a banana (e.g., the omission of housework and the effects of pollution from GDP). So if you're familiar with the definitions of the definitions, you have a good grasp of the concepts. If you aren't, you don't. But if you have a good grasp of the numbers underlying the definitions of definitions, you know that the top-level concept is actually vague and hard to pin down. The numbers not only omit important things but are only estimates, and often are estimates of disparate things that are grouped because they're judged to be "alike enough."

Acculturation in the form of education is a way of getting people to grasp concepts that have widely agreed definitions. Mathematics, for example, is nothing but concepts, all the way down. And to venture beyond arithmetic is to venture into a world of ideas that’s held together by definitions that rest upon definitions and end in nothing real. Unless you’re one of those people who insists that mathematics is the “real” stuff of which the universe is made, which is nothing more than a leap of faith. (Math, by the way, is nothing but words in shorthand.)

And so, human beings are able to communicate and (usually) understand each other because of their physical and cultural similarities, which include education in various and sundry subjects. Those similarities also enable people of different cultures and languages to translate their concepts (and the words that define them) from one language to another.

Those similarities also enable people to “feel” what another person is feeling when he says that he’s happy, sad, drunk, or whatever. There’s the physical similarity — the physiological changes that usually occur when a person becomes what he thinks of as happy, etc. And there’s acculturation — the acquired knowledge that people feel happy (or whatever) for certain reasons (e.g., a marriage, the birth of a child) and display their happiness in certain ways (e.g., a broad smile, a “jump for joy”).

A good novelist, in my view, is one who knows how to use words that evoke vivid mental images of the thoughts, feelings, and actions of characters, and the settings in which the characters act out the plot of a novel. A novelist who can do that and also tell a good story — one with an engaging or suspenseful plot — is thereby a great novelist. I submit that a good or great novelist (an admittedly vague concept) is worth almost any number of psychologists and psychiatrists, whose vision of the human mind is too rigid to grasp the subtleties that give it life.

But good and great novelists are thin on the ground. That is to say, there are relatively few persons among us who are able to grasp and communicate effectively a broad range of the kinds of thoughts and feelings that lurk in the minds of human beings. And even those few have their blind spots. Most of them, it seems to me, are persons of the left, and are therefore unable to empathize with the thoughts and feelings of the working-class people who seethe with resentment at the fawning over and favoritism toward blacks, illegal immigrants, gender-confused persons, and other so-called victims. In fact, those few otherwise perceptive and articulate writers make it a point to write off the working-class people as racists, bigots, and ignoramuses.

There are exceptions, of course. A contemporary exception is Tom Wolfe. But his approach to class issues is top-down rather than bottom-up.

Which just underscores my point that we human beings find it hard to formulate and organize our own thoughts and feelings about the world around us and the other people in it. And we’re practically tongue-tied when it comes to expressing those thoughts and feelings to others. We just don’t know ourselves well enough to explain ourselves to others. And our feelings — such as our political preferences, which probably are based more on temperament than on facts — get in the way.

Love, to take a leading example, is a feeling that just is. The why and wherefore of it is beyond our ability to understand and explain. Some of the feelings attached to it can be expressed in prose, poetry, and song, but those are superficial expressions that don’t capture the depth of love and why it exists.

The world of science is of no real help. Even if feelings of love could be expressed in scientific terms — the action of hormone A on brain region X — that would be worse than useless. It would reduce love to chemistry, when we know that there’s more to it than that. Why, for example, is hormone A activated by the presence or thought of person M but not person N, even when they’re identical twins?

The world of science is of no real help in "getting to the bottom of things." Science is an infinite regress. S is explained in terms of T, which is explained in terms of U, which is explained in terms of V, and on and on. For example, there was the "indivisible" atom, which turned out to consist of electrons, protons, and neutrons. But electrons have turned out to be more complicated than originally believed, and protons and neutrons have been found to be made of smaller particles with distinctive characteristics. So it's reasonable to ask if all of the particles now considered elementary are really indivisible. Perhaps there are other more-elementary particles yet to be hypothesized and discovered. And even if all of the truly elementary particles are discovered, scientists will still be unable to explain what those particles really "are."

Words fail us.

*      *      *

Related reading:
Modeling Is Not Science
Physics Envy
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
We, the Children of the Enlightenment
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Mysteries: Sacred and Profane
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
“Settled Science” and the Monty Hall Problem
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Taleb’s Ruinous Rhetoric

Intelligence, Assortative Mating, and Social Engineering

UPDATED 11/18/16 (AT THE END)

What is intelligence? Why does it matter in “real life”? Are intelligence-driven “real life” outcomes — disparities in education and income — driving Americans apart? In particular, is the intermarriage of smart, educated professionals giving rise to a new hereditary class whose members have nothing in common with less-intelligent, poorly educated Americans, who will fall farther and farther behind economically? And if so, what should be done about it, if anything?

INTELLIGENCE AND WHY IT MATTERS IN “REAL LIFE”

Thanks to a post at Dr. James Thompson's blog, Psychological comments, I found Dr. Linda Gottfredson's paper, "Why g Matters: The Complexity of Everyday Life" (Intelligence 24:1, 79-132, 1997). The g factor — or just plain g — is general intelligence. I quote Gottfredson's article at length because it makes several key points about intelligence and why it matters in "real life." For ease of reading, I've skipped over the many citations and supporting tables that lend authority to the article.

[W]hy does g have such pervasive practical utility? For example, why is a higher level of g a substantial advantage in carpentry, managing people, and navigating vehicles of all kinds? And, very importantly, why do those advantages vary in the ways they do? Why is g more helpful in repairing trucks than in driving them for a living? Or more for doing well in school than staying out of trouble?…

Also, can we presume that similar activities in other venues might be similarly affected by intelligence? For example, if differences in intelligence change the odds of effectively managing and motivating people on the job, do they also change the odds of successfully dealing with one’s own children? If so, why, and how much?

The heart of the argument I develop here is this: For practical purposes, g is the ability to deal with cognitive complexity — in particular, with complex information processing. All tasks in life involve some complexity, that is, some information processing. Life tasks, like job duties, vary greatly in their complexity (g loadedness). This means that the advantages of higher g are large in some situations and small in others, but probably never zero….

Although researchers disagree on how they define intelligence, there is virtual unanimity that it reflects the ability to reason, solve problems, think abstractly, and acquire knowledge. Intelligence is not the amount of information people know, but their ability to recognize, acquire, organize, update, select, and apply it effectively. In educational contexts, these complex mental behaviors are referred to as higher order thinking skills.

Stated at a more molecular level, g is the ability to mentally manipulate information — “to fill a gap, turn something over in one’s mind, make comparisons, transform the input to arrive at the output”….

[T]he active ingredient in test items seems to reside in their complexity. Any kind of item content (words, numbers, figures, pictures, symbols, blocks, mazes, and so on) can be used to create less to more g-loaded tests and test items. Differences in g loading seem to arise from variations in items' cognitive complexity and thus the amount of mental manipulation they require….

Life is replete with uncertainty, change, confusion, and misinformation, sometimes minor and at times massive. From birth to death, life continually requires us to master abstractions, solve problems, draw inferences, and make judgments on the basis of inadequate information. Such demands may be especially intense in school, but they hardly cease when one walks out the school door. A close look at job duties in the workplace shows why….

When job analysis data for any large set of jobs are factor analyzed, they always reveal the major distinction among jobs to be the mental complexity of the work they require workers to perform. Arvey’s job analysis is particularly informative in showing that job complexity is quintessentially a demand for g….

Not surprisingly, jobs high in overall complexity require more education (.86 and .88), training (.76 and .51), and experience (.62) — and are viewed as the most prestigious (.82). These correlations have sometimes been cited in support of the training hypothesis discussed earlier, namely, that sufficient training can render differences in g moot.

However, prior training and experience in a job never fully prepare workers for all contingencies. This is especially so for complex jobs, partly because they require workers to continually update job knowledge, .85. As already suggested, complex tasks often involve not only the appropriate application of old knowledge, but also the quick apprehension and use of new information in changing environments….

Many of the duties that correlate highly with overall job complexity suffuse our lives: advising, planning, negotiating, persuading, supervising others, to name just a few….

The National Adult Literacy Survey (NALS) of 26,000 persons aged 16 and older is one in a series of national literacy assessments developed by the Educational Testing Service (ETS) for the U.S. Department of Education. It is a direct descendent, both conceptually and methodologically, of the National Assessment of Educational Progress (NAEP) studies of reading among school-aged children and literacy among adults aged 21 to 25.

NALS, like its NAEP predecessors, is extremely valuable in understanding the complexity of everyday life and the advantages that higher g provides. In particular, NALS provides estimates of the proportion of adults who are able to perform everyday tasks of different complexity levels….

A look at the items in Figure 2 reveals their general relevance to social life. These are not obscure skills or bits of knowledge whose value is limited to academic pursuits. They are skills needed to carry out routine transactions with banks, social welfare agencies, restaurants, the post office, and credit card agencies; to understand contrasting views on public issues (fuel efficiency, parental involvement in schools); and to comprehend the events of the day (sports stories, trends in oil exports) and one’s personal options (welfare benefits, discount for early payment of bills, relative merits between two credit cards)….

[A]lthough the NALS items represent skills that are valuable in themselves, they are merely samples from broad domains of such skill. As already suggested, scores on the NALS reflect people’s more general ability (the latent trait) to master on a routine basis skills of different information-processing complexity….

[I]ndeed, the five levels of NALS literacy are associated with very different odds of economic well-being….

Each higher level of proficiency substantially improves the odds of economic well-being, generally halving the percentage living in poverty and doubling the percentage employed in the professions or management….

The effects of intelligence, like other psychological traits, are probabilistic, not deterministic. Higher intelligence improves the odds of success in school and work. It is an advantage, not a guarantee. Many other things matter.

However, the odds disfavor low-IQ people just about everywhere they turn. The differences in odds are relatively small in some aspects of life (law-abidingness), moderate in some (income), and large in others (educational, occupational attainment). But they are consistent. At a minimum (say, under conditions of simple tasks and equal prior knowledge), higher levels of intelligence act like the small percentage (2.7%) favoring the house in roulette at Monte Carlo — it yields enormous gains over the long run. Similarly, all of us make stupid mistakes from time to time, but higher intelligence helps protect us from accumulating a long, debilitating record of them.

To mitigate unfavorable odds attributable to low IQ, an individual must have some equally pervasive compensatory advantage: family wealth, winning personality, enormous resolve, strength of character, an advocate or benefactor, and the like. Such compensatory advantages may frequently soften but probably never eliminate the cumulative impact of low IQ. Conversely, high IQ acts like a cushion against some of life's adverse circumstances, perhaps partly accounting for why some children are more resilient than others in the face of deprivation and abuse….

For the top 5% of the population (over IQ 125), success is really “yours to lose.” These people meet the minimum intelligence requirements of all occupations, are highly sought after for their extreme trainability, and have a relatively easy time with the normal cognitive demands of life. Their jobs are often high pressure, emotionally draining, and socially demanding …, but these jobs are prestigious and generally pay well. Although very high IQ individuals share many of the vicissitudes of life, such as divorce, illness, and occasional unemployment, they rarely become trapped in poverty or social pathology. They may be saints or sinners, healthy or unhealthy, content or emotionally troubled. They may or may not work hard and apply their talents to get ahead, and some will fail miserably. But their lot in life and their prospects for living comfortably are comparatively rosy.

There are, of course, multiple causes of different social and economic outcomes in life. However, g seems to be at the center of the causal nexus for many. Indeed, g is more important than social class background in predicting whether White adults obtain college degrees, live in poverty, are unemployed, go on welfare temporarily, divorce, bear children out of wedlock, and commit crimes.

There are many other valued human traits besides g, but none seems to affect individuals’ life chances so systematically and so powerfully in modern life as does g. To the extent that one is concerned about inequality in life chances, one must be concerned about differences in g….

Society has become more complex (and g loaded) as we have entered the information age and postindustrial economy. Major reports on the U.S. schools, workforce, and economy routinely argue, in particular, that the complexity of work is rising.

Where the old industrial economy rewarded mass production of standardized products for large markets, the new postindustrial economy rewards the timely customization and delivery of high-quality, convenient products for increasingly specialized markets. Where the old economy broke work into narrow, routinized, and closely supervised tasks, the new economy increasingly requires workers to work in cross-functional teams, gather information, make decisions, and undertake diverse, changing, and challenging sets of tasks in a fast-changing and dynamic global market….

Such reports emphasize that the new workplace puts a premium on higher order thinking, learning, and information-processing skills — in other words, on intelligence. Gone are the many simple farm and factory jobs where a strong back and willing disposition were sufficient to sustain a respected livelihood, regardless of IQ. Fading too is the need for highly developed perceptual-motor skills, which were once critical for operating and monitoring machines, as technology advances.

Daily life also seems to have become considerably more complex. For instance, we now have a largely moneyless economy (checkbooks, credit cards, and charge accounts) that requires more abstract thought, foresight, and complex management. More self-service, whether in banks or hardware stores, throws individuals back onto their own capabilities. We struggle today with a truly vast array of continually evolving complexities: the changing welter of social services across diverse, large bureaucracies; increasing options for health insurance, cable, and phone service; the steady flow of debate over health hazards in our food and environment; the maze of transportation systems and schedules; the mushrooming array of over-the-counter medicines in the typical drugstore; new technologies (computers) and forms of communication (cyberspace) for home as well as office.

Brighter individuals, families, and communities will be better able to capitalize on the new opportunities this increased complexity brings. The least bright will use them less effectively, if at all, and so fail to reap in comparable measure any benefits they offer. There is evidence that increasing proportions of individuals with below-average IQs are having trouble adapting to our increasingly complex modern life and that social inequality along IQ lines is increasing.

CHARLES MURRAY AND FISHTOWN VS. BELMONT

At the end of the last sentence, Gottfredson refers to Richard J. Herrnstein and Charles Murray’s The Bell Curve: Intelligence and Class Structure in American Life (1994). In a later book, Coming Apart: The State of White America, 1960-2010 (2012), Murray tackles the issue of social (and economic) inequality. Kay S. Hymowitz summarizes Murray’s thesis:

According to Murray, the last 50 years have seen the emergence of a “new upper class.” By this he means something quite different from the 1 percent that makes the Occupy Wall Streeters shake their pitchforks. He refers, rather, to the cognitive elite that he and his coauthor Richard Herrnstein warned about in The Bell Curve. This elite is blessed with diplomas from top colleges and with jobs that allow them to afford homes in Nassau County, New York and Fairfax County, Virginia. They’ve earned these things not through trust funds, Murray explains, but because of the high IQs that the postindustrial economy so richly rewards.

Murray creates a fictional town, Belmont, to illustrate the demographics and culture of the new upper class. Belmont looks nothing like the well-heeled but corrupt, godless enclave of the populist imagination. On the contrary: the top 20 percent of citizens in income and education exemplify the core founding virtues Murray defines as industriousness, honesty, marriage, and religious observance….

The American virtues are not doing so well in Fishtown, Murray’s fictional working-class counterpart to Belmont. In fact, Fishtown is home to a “new lower class” whose lifestyle resembles The Wire more than Roseanne. Murray uncovers a five-fold increase in the percentage of white male workers on disability insurance since 1960, a tripling of prime-age men out of the labor force—almost all with a high school degree or less—and a doubling in the percentage of Fishtown men working less than full-time….

Most disastrous for Fishtown residents has been the collapse of the family, which Murray believes is now “approaching a point of no return.” For a while after the 1960s, the working class hung on to its traditional ways. That changed dramatically by the 1990s. Today, under 50 percent of Fishtown 30- to 49-year-olds are married; in Belmont, the number is 84 percent. About a third of Fishtowners of that age are divorced, compared with 10 percent of Belmonters. Murray estimates that 45 percent of Fishtown babies are born to unmarried mothers, versus 6 to 8 percent of those in Belmont.

And so it follows: Fishtown kids are far less likely to be living with their two biological parents. One survey of mothers who turned 40 in the late nineties and early 2000s suggests the number to be only about 30 percent in Fishtown. In Belmont? Ninety percent—yes, ninety—were living with both mother and father….

For all their degrees, the upper class in Belmont is pretty ignorant about what’s happening in places like Fishtown. In the past, though the well-to-do had bigger houses and servants, they lived in towns and neighborhoods close to the working class and shared many of their habits and values. Most had never gone to college, and even if they had, they probably married someone who hadn’t. Today’s upper class, on the other hand, has segregated itself into tony ghettos where they can go to Pilates classes with their own kind. They marry each other and pool their incomes so that they can move to “Superzips”—the highest percentiles in income and education, where their children will grow up knowing only kids like themselves and go to college with kids who grew up the same way.

In short, America has become a segregated, caste society, with a born elite and an equally hereditary underclass. A libertarian, Murray believes these facts add up to an argument for limited government. The welfare state has sapped America’s civic energy in places like Fishtown, leaving a population of disengaged, untrusting slackers….

But might Murray lay the groundwork for fatalism of a different sort? “The reason that upper-middle-class children dominate the population of elite schools,” he writes, “is that the parents of the upper-middle class now produce a disproportionate number of the smartest children.” Murray doesn’t pursue this logic to its next step, and no wonder. If rich, smart people marry other smart people and produce smart children, then it follows that the poor marry—or rather, reproduce with—the less intelligent and produce less intelligent children. [“White Blight,” City Journal, January 25, 2012]

In the last sentence of that quotation, Hymowitz alludes to assortative mating.

ADDING 2 AND 2 TO GET ?

So intelligence is real; it’s not confined to “book learning”; it has a strong influence on one’s education, work, and income (i.e., class); and because of those things it leads to assortative mating, which (on balance) reinforces class differences. Or so the story goes.

But assortative mating is nothing new. What might be new, or at least more prevalent than in the past, is the tendency of the smart-educated-professional class to marry within its ranks rather than across class lines, to live in “enclaves” with their like, and to produce (generally) bright children who’ll (mostly) follow the lead of their parents.

How great are those tendencies? And in any event, so what? Is there a potential social problem that will have to be dealt with by government because it poses a severe threat to the nation’s political stability or economic well-being? Or is it just a step in the voluntary social evolution of the United States — perhaps even a beneficial one?

Is there a growing tendency toward intermarriage among the smart-educated-professional class? It depends on how you look at it. Here, for example, are excerpts of commentaries about a paper by Jeremy Greenwood et al., “Marry Your Like: Assortative Mating and Income Inequality” (American Economic Review, 104:5, 348-53, May 2014 — also published as NBER Working Paper 19829):

[T]he abstract is this:

Has there been an increase in positive assortative mating? Does assortative mating contribute to household income inequality? Data from the United States Census Bureau suggests there has been a rise in assortative mating. Additionally, assortative mating affects household income inequality. In particular, if matching in 2005 between husbands and wives had been random, instead of the pattern observed in the data, then the Gini coefficient would have fallen from the observed 0.43 to 0.34, so that income inequality would be smaller. Thus, assortative mating is important for income inequality. The high level of married female labor-force participation in 2005 is important for this result.

That is quite a significant effect. [Tyler Cowen, “Assortative Mating and Income Inequality,” Marginal Revolution, January 27, 2014]

__________

The wage gap between highly and barely educated workers has grown, but that could in theory have been offset by the fact that more women now go to college and get good jobs. Had spouses chosen each other at random, many well-paid women would have married ill-paid men and vice versa. Workers would have become more unequal, but households would not. With such “random” matching, the authors estimate that the Gini co-efficient, which is zero at total equality and one at total inequality, would have remained roughly unchanged, at 0.33 in 1960 and 0.34 in 2005.

But in reality the highly educated increasingly married each other. In 1960 25% of men with university degrees married women with degrees; in 2005, 48% did. As a result, the Gini rose from 0.34 in 1960 to 0.43 in 2005.

Assortative mating is hardly mysterious. People with similar education tend to work in similar places and often find each other attractive. On top of this, the economic incentive to marry your peers has increased. A woman with a graduate degree whose husband dropped out of high school in 1960 could still enjoy household income 40% above the national average; by 2005, such a couple would earn 8% below it. In 1960 a household composed of two people with graduate degrees earned 76% above the average; by 2005, they earned 119% more. Women have far more choices than before, and that is one reason why inequality will be hard to reverse. [The Economist, “Sex, Brains, and Inequality,” February 8, 2014]

__________

I’d offer a few caveats:

  • Comparing observed GINI with a hypothetical world in which marriage patterns are completely random is a bit misleading. Marriage patterns weren’t random in 1960 either, and the past popularity of “Cinderella marriages” is more myth than reality. In fact, if you look at the red diagonals [in the accompanying figures], you’ll notice that assortative mating has actually increased only modestly since 1960.
  • So why bother with a comparison to a random counterfactual? That’s a little complicated, but the authors mainly use it to figure out why 1960 is so different from 2005. As it turns out, they conclude that rising income inequality isn’t really due to a rise in assortative mating per se. It’s mostly due to the simple fact that more women work outside the home these days. After all, who a man marries doesn’t affect his household income much if his wife doesn’t have an outside job. But when women with college degrees all started working, it caused a big increase in upper class household incomes regardless of whether assortative mating had increased.
  • This can get to sound like a broken record, but whenever you think about rising income inequality, you always need to keep in mind that over the past three decades it’s mostly been a phenomenon of the top one percent. It’s unlikely that either assortative mating or the rise of working women has had a huge impact at those income levels, and therefore it probably hasn’t had a huge impact on increasing income inequality either. (However, that’s an empirical question. I might be wrong about it.)

[Kevin Drum, “No, the Decline of Cinderella Marriages Probably Hasn’t Played a Big Role in Rising Income Inequality,” Mother Jones, January 27, 2014]

In sum:

  • The rate of intermarriage at every level of education rose slightly between 1960 and 2005.
  • But the real change between 1960 and 2005 was that more and more women worked outside the home — a state of affairs that “progressives” applaud. It is that change which has led to a greater disparity between the household incomes of poorly educated couples and those of highly educated couples. A rough simulation following this list illustrates the mechanism. (Hereinafter, I omit the “sneer quotes” around “progressives,” “progressive,” and “Progressivism,” but only to eliminate clutter.)
  • While that was going on, the measure of inequality in the incomes of individuals didn’t change. (Go to “In Which We’re Vindicated. Again,” Political Calculations, January 28, 2014, and scroll down to the figure titled “GINI Ratios for U.S. Households, Families, and Individuals, 1947-2010.”)
  • Further, as Kevin Drum notes, the rise in income inequality probably has almost nothing to do with a rise in the rate of assortative mating and much to do with the much higher incomes commanded by executives, athletes, entrepreneurs, financiers, and “techies” — a development that shouldn’t bother anyone, even though it does bother a lot of people. (See my post “Mass (Economic) Hysteria: Income Inequality and Related Themes,” and follow the many links therein to other posts of mine and to the long list of related readings.)
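
To make the mechanism concrete, here is a rough, self-contained simulation. The wage levels, the share of degree-holders, and the labor-force-participation rates are invented for illustration; this is a toy model of the point in the second bullet above, not the census data or the method used by Greenwood et al.

```python
import random
random.seed(0)

def gini(incomes):
    """Gini coefficient computed from the mean absolute difference."""
    xs = sorted(incomes)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * sum(xs)) - (n + 1) / n

N = 10_000
# Hypothetical individual wages: a minority hold "degree" jobs.
men   = [80_000 if random.random() < 0.3 else 35_000 for _ in range(N)]
women = [70_000 if random.random() < 0.3 else 30_000 for _ in range(N)]

def household_gini(assortative, female_lfp):
    """Pair husbands with wives and compute the Gini of household income.
    assortative=True  -> perfectly sorted matching (a stylized extreme)
    assortative=False -> random matching
    female_lfp        -> share of wives who work (others contribute zero)"""
    m = sorted(men) if assortative else random.sample(men, N)
    w = sorted(women) if assortative else random.sample(women, N)
    households = [hm + (hw if random.random() < female_lfp else 0)
                  for hm, hw in zip(m, w)]
    return gini(households)

for lfp in (0.3, 0.7):
    print(f"female LFP {lfp:.0%}: "
          f"random Gini = {household_gini(False, lfp):.3f}, "
          f"assortative Gini = {household_gini(True, lfp):.3f}")
```

In runs of this kind, sorting the spouses adds little to household inequality when few wives work, and noticeably more when most wives work, which is the gist of Drum’s reading of the paper.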

Moreover, intergenerational mobility in the United States hasn’t changed in the past several decades:

Our analysis of new administrative records on income shows that children entering the labor market today have the same chances of moving up in the income distribution relative to their parents as children born in the 1970s. Putting together our results with evidence from Hertz (2007) and Lee and Solon (2009) that intergenerational elasticities of income did not change significantly between the 1950 and 1970 birth cohorts, we conclude that rank-based measures of social mobility have remained remarkably stable over the second half of the twentieth century in the United States….

The lack of a trend in intergenerational mobility contrasts with the increase in income inequality in recent decades. This contrast may be surprising given the well-known negative correlation between inequality and mobility across countries (Corak 2013). Based on this “Great Gatsby curve,” Krueger (2012) predicted that recent increases in inequality would increase the intergenerational persistence of income by 20% in the U.S. One explanation for why this prediction was not borne out is that much of the increase in inequality has been driven by the extreme upper tail (Piketty and Saez 2003, U.S. Census Bureau 2013). In [Chetty et al. 2014], we show that there is little or no correlation between mobility and extreme upper tail inequality – as measured e.g. by top 1% income shares – both across countries and across areas within the U.S….

The stability of intergenerational mobility is perhaps more surprising in light of evidence that socio-economic gaps in early indicators of success such as test scores (Reardon 2011), parental inputs (Ramey and Ramey 2010), and social connectedness (Putnam, Frederick, and Snellman 2012) have grown over time. Indeed, based on such evidence, Putnam, Frederick, and Snellman predicted that the “adolescents of the 1990s and 2000s are yet to show up in standard studies of intergenerational mobility, but the fact that working class youth are relatively more disconnected from social institutions, and increasingly so, suggests that mobility is poised to plunge dramatically.” An important question for future research is why such a plunge in mobility has not occurred. [Raj Chetty et al., “Is the United States Still a Land of Opportunity? Recent Trends in Intergenerational Mobility,” NBER Working Paper 19844, January 2014]

Figure 3 of the paper by Chetty et al. nails it down:

[Figure 3 from Chetty et al. (2014)]

The results for ages 29-30 are close to the results for age 26.

What does it all mean? For one thing, it means that the children of top-quintile parents reach the top quintile about 30 percent of the time. For another thing, it means that, unsurprisingly, the children of top-quintile parents reach the top quintile more often than children of second-quintile parents, who reach the top quintile more often than children of third-quintile parents, and so on.

There is nevertheless a growing, quasi-hereditary, smart-educated-professional-affluent class. It’s almost a sure thing, given the rise of the two-professional marriage, and given the correlation between the intelligence of parents and that of their children, which may be as high as 0.8. However, as a fraction of the total population, membership in the new class won’t grow as fast as membership in the “lower” classes because birth rates are inversely related to income.
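
For a sense of what a correlation that high would imply, here is a minimal sketch. It assumes a bivariate-normal relationship between parents’ and children’s IQs with mean 100, standard deviation 15, and the 0.8 correlation cited above (an upper-end figure; a lower correlation would mean more regression toward the mean). The numbers are illustrative, not estimates.

```python
import random
random.seed(0)

RHO, MEAN, SD = 0.8, 100.0, 15.0   # assumed correlation, IQ mean, IQ SD

def child_iq(parent_iq):
    """Draw a child's IQ given the parents', under the assumed correlation."""
    expected = MEAN + RHO * (parent_iq - MEAN)
    resid_sd = SD * (1 - RHO ** 2) ** 0.5
    return random.gauss(expected, resid_sd)

parents = [random.gauss(MEAN, SD) for _ in range(100_000)]
children = [child_iq(p) for p in parents]

# Among children of high-IQ (125+) parents, how many are themselves 125+?
high = [(p, c) for p, c in zip(parents, children) if p >= 125]
share = sum(1 for _, c in high if c >= 125) / len(high)
mean_child = sum(c for _, c in high) / len(high)
print(f"children of 125+ parents who are themselves 125+: {share:.0%}")
print(f"mean IQ of those children: {mean_child:.1f}")
```

Even under that generous assumption, the children of 125-plus parents average only about 125 themselves, and roughly half fall below that mark. The class is quasi-hereditary, not a closed caste, which squares with the volatility of its membership noted further on.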

And the new class probably will be isolated from the “lower” classes. Most members of the new class work and live where their interactions with persons of “lower” classes are restricted to boss-subordinate and employer-employee relationships. Professionals, for the most part, work in office buildings, isolated from the machinery and practitioners of “blue collar” trades.

But the segregation of housing on class lines is nothing new. People earn more, in part, so that they can live in nicer houses in nicer neighborhoods. And the general rise in the real incomes of Americans has made it possible for persons in the higher income brackets to afford more luxurious homes in more luxurious neighborhoods than were available to their parents and grandparents. (The mansions of yore, situated on “Mansion Row,” were occupied by the relatively small number of families whose income and wealth set them widely apart from the professional class of the day.) So economic segregation is, and should be, as unsurprising as a sunrise in the east.

WHAT’S THE PROGRESSIVE SOLUTION TO THE NON-PROBLEM?

None of this will assuage progressives, who like to claim that intelligence (like race) is a social construct (while also claiming that Republicans are stupid); who believe that incomes should be more equal (theirs excepted); who believe in “diversity,” except when it comes to where most of them choose to live and school their children; and who also believe that economic mobility should be greater than it is — just because. In their superior minds, there’s an optimum income distribution and an optimum degree of economic mobility — just as there is an optimum global temperature, which must be less than the ersatz one that’s estimated by combining temperatures measured under various conditions and with various degrees of error.

The irony of it is that the self-segregated, smart-educated-professional-affluent class is increasingly progressive. Consider the changing relationship between party preference and income:

[Figure: presidential vote by income group, 2004-2016]
Source: K.K. Rebecca Lai et al., “How Trump Won the Election According to Exit Polls,” The New York Times, November 16, 2016.

The elections between 2004 and 2016 are indicated by the elbows in the zig-zag lines for each of the income groups. For example, among voters earning more than $200,000, the Times estimates that almost 80 percent (+30) voted Republican in 2004, as against 45 percent in 2008, 60 percent in 2012, and just over 50 percent in 2016. Even as voters in the two lowest brackets swung toward the GOP (and Trump) between 2004 and 2016, voters in the three highest brackets were swinging toward the Democrat Party (and Clinton).

Those shifts are consistent with the longer trend among persons with bachelor’s degrees and advanced degrees toward identification with the Democrat Party. See, for example, the graphs showing relationships between party affiliation and level of education at “Party Identification Trends, 1992-2014” (Pew Research Center, April 7, 2015). The smart-educated-professional-affluent class consists almost entirely of persons with bachelor’s and advanced degrees.

So I ask progressives, given that you have met the new class and it is you, what do you want to do about it? Is there a social problem that might arise from greater segregation of socio-economic classes, and is it severe enough to warrant government action? Or is the real “problem” the possibility that some people — and their children and children’s children, etc. — might get ahead faster than other people — and their children and children’s children, etc.?

Do you want to apply the usual progressive remedies? Penalize success through progressive (pun intended) personal income-tax rates and the taxation of corporate income; force employers and universities to accept low-income candidates (whites included) ahead of better-qualified ones (e.g., your children) from higher-income brackets; push “diversity” in your neighborhood by expanding the kinds of low-income housing programs that helped to bring about the Great Recession; boost your local property and sales taxes by subsidizing “affordable housing,” mandating the payment of a “living wage” by the local government, and applying that mandate to contractors seeking to do business with the local government; and on and on down the list of progressive policies?

Of course you do, because you’re progressive. And you’ll support such things in the vain hope that they’ll make a difference. But not everyone shares your naive beliefs in blank slates, equal ability, and social homogenization (which you don’t believe either, but are too wedded to your progressive faith to admit). What will actually be accomplished — aside from tokenism — is social distrust and acrimony, which had a lot to do with the electoral victory of Donald J. Trump, and economic stagnation, which hurts the “little people” a lot more than it hurts the smart-educated-professional-affluent class.

Where the progressive view fails, as it usually does, is in its linear view of the world and dependence on government “solutions.” As the late Herbert Stein said, “If something cannot go on forever, it will stop.” The top 1 percent doesn’t go on forever; its membership is far more volatile than that of lower income groups. Neither does the top 10 percent or the top quintile. There’s always a top 1 percent, a top 10 percent, and a top quintile, by definition. But the names change constantly, as the paper by Chetty et al. attests.

The solution to the pseudo-problem of economic inequality is benign neglect, which isn’t a phrase that falls lightly from the lips of progressives. For more than 80 years, a lot of Americans — and too many pundits, professors, and politicians — have been led astray by that one-off phenomenon: the Great Depression. FDR and his sycophants and their successors created and perpetuated the myth that an activist government saved America from ruin and totalitarianism. The truth of the matter is that FDR’s policies prolonged the Great Depression by several years, and ushered in soft despotism, which is just “friendly” fascism. And all of that happened at the behest of people of above-average intelligence and above-average incomes.

Progressivism is the seed-bed of eugenics, and still promotes eugenics through abortion on demand (mainly to rid the world of black babies). My beneficial version of eugenics would be the sterilization of everyone with an IQ above 125 or top-40-percent income who claims to be progressive.

WHAT IS THE REAL PROBLEM? (ADDED 11/18/16)

It’s not the rise of the smart-educated-professional-affluent class. It’s actually a problem that has nothing to do with that. It’s the problem pointed to by Charles Murray, and poignantly underlined by a blogger named Tori:

Over the summer, my little sister had a soccer tournament at Bloomsburg University, located in central Pennsylvania. The drive there was about three hours and many of the towns we drove through shocked me. The conditions of these towns were terrible. Houses were falling apart. Bars and restaurants were boarded up. Scrap metal was thrown across front lawns. White, plastic lawn chairs were out on the drooping front porches. There were no malls. No outlets. Most of these small towns did not have a Walmart, only a dollar store and a few run down thrift stores. In almost every town, there was an abandoned factory.

My father, who was driving the car, turned to me and pointed out a Trump sign stuck in a front yard, surrounded by weeds and dead grass. “This is Trump country, Tori,” He said. “These people are desperate, trapped for life in these small towns with no escape. These people are the ones voting for Trump.”

My father understood Trump’s key to success, even though it would leave the media and half of America baffled and terrified on November 9th. Trump’s presidency has sparked nationwide outrage, disbelief and fear.

And, while I commend the passion many of my fellow millennials feels towards minorities and the fervency they oppose the rhetoric they find dangerous, I do find many of their fears unfounded. I don’t find their fears unfounded because I negate the potency of racism. Or the potency of oppression. Or the potency of hate.

I find these fears unfounded because these people groups have an army fighting for them. This army is full of celebrities, politicians, billionaires, students, journalists and passionate activists. Trust me, minorities will be fine with an army like this defending them.

And, I would argue, that these minorities aren’t the only ones who need our help. The results of Tuesday night did not expose a red shout of racism but a red shout for help….

The majority of rhetoric going around says that if you’re white, you have an inherent advantage in life. I would argue that, at least for the members of these small impoverished communities, their whiteness only harms them as it keeps their immense struggles out of the public eye.

Rural Americans suffer from a poverty rate that is 3 points higher than the poverty rate found in urban America. In Southern regions, like Appalachia, the poverty rate jumps to 8 points higher than those found in cities. One fifth of the children living in poverty live in rural areas. The children in this “forgotten fifth” are more likely to live in extreme poverty and live in poverty longer than their urban counterparts. 57% of these children are white….

Lauren Gurley, a freelance journalist, wrote a piece that focuses on why politicians, namely liberal ones, have written off rural America completely. In this column she quotes Lisa Pruitt, a law professor at the University of California who focuses many of her studies on life in rural America. Pruitt argues that mainstream America ignores poverty stricken rural America because the majority of America associates rural poverty with whiteness. She attributes America’s lack of empathy towards white poverty to the fact that black poverty is attributed to institutionalized racism, while white people have no reason to be poor, unless poor choices were made….

For arguably the first time since President Kennedy in the 1950’s, Donald Trump reached out to rural America. Trump spoke out often about jobs leaving the US, which has been felt deeply by those living in the more rural parts of the country. Trump campaigned in rural areas, while Clinton mostly campaigned in cities. Even if you do not believe Trump will follow through on his promises, he was still one of the few politicians who focused his vision on rural communities and said “I see you, I hear you and I want to help you.”

Trump was the “change” candidate of the 2016 election. Whether Trump proposed a good change or bad change is up to you, but it can’t be denied that Trump offered change. Hillary Clinton, on the other hand, was the establishment candidate. She ran as an extension of Obama and, even though this appealed to the majority of voters located in cities, those in the country were looking for something else. Obama’s policies did little to help alleviate the many ailments felt by those in rural communities. In response, these voters came out for the candidate who offered to “make America great again.”

I believe that this is why rural, white communities voted for Trump in droves. I do not believe it was purely racism. I believe it is because no one has listened to these communities’ cries for help. The media and our politicians focus on the poverty and deprivation found in cities and, while bringing these issues to light is immensely important, we have neglected another group of people who are suffering. It is not right to brush off all of these rural counties with words like “deplorable” and not look into why they might have voted for Trump with such desperation.

It was not a racist who voted for Trump, but a father who has no possible way of providing a steady income for his family. It was not a misogynist who voted for Trump, but a mother who is feeding her baby mountain dew out of a bottle. It was not a deplorable who voted for Trump, but a young man who has no possibility of getting out of a small town that is steadily growing smaller.

The people America has forgotten about are the ones who voted for Donald Trump. It does not matter if you agree with Trump. It does not matter if you believe that these people voted for a candidate who won’t actually help them. What matters is that the red electoral college map was a scream for help, and we’re screaming racist so loud we don’t hear them. Hatred didn’t elect Donald Trump; People did. [“Hate Didn’t Elect Donald Trump; People Did,” Tori’s Thought Bubble, November 12, 2016]

Wise words. The best way to help the people of whom Tori writes — the people of Charles Murray’s Fishtown — is to ignore the smart-educated-professional-affluent class. It’s a non-problem, as discussed above. The best way to help the forgotten people of America is to unleash the latent economic power of the United States by removing the dead hand of government from the economy.

 

Taleb’s Ruinous Rhetoric

A correspondent sent me some links to writings of Nassim Nicholas Taleb. One of them is “The Intellectual Yet Idiot,” in which Taleb makes some acute observations; for example:

What we have been seeing worldwide, from India to the UK to the US, is the rebellion against the inner circle of no-skin-in-the-game policymaking “clerks” and journalists-insiders, that class of paternalistic semi-intellectual experts with some Ivy league, Oxford-Cambridge, or similar label-driven education who are telling the rest of us 1) what to do, 2) what to eat, 3) how to speak, 4) how to think… and 5) who to vote for.

But the problem is the one-eyed following the blind: these self-described members of the “intelligentsia” can’t find a coconut in Coconut Island, meaning they aren’t intelligent enough to define intelligence hence fall into circularities — but their main skill is capacity to pass exams written by people like them….

The Intellectual Yet Idiot is a production of modernity hence has been accelerating since the mid twentieth century, to reach its local supremum today, along with the broad category of people without skin-in-the-game who have been invading many walks of life. Why? Simply, in most countries, the government’s role is between five and ten times what it was a century ago (expressed in percentage of GDP)….

The IYI pathologizes others for doing things he doesn’t understand without ever realizing it is his understanding that may be limited. He thinks people should act according to their best interests and he knows their interests, particularly if they are “red necks” or English non-crisp-vowel class who voted for Brexit. When plebeians do something that makes sense to them, but not to him, the IYI uses the term “uneducated”. What we generally call participation in the political process, he calls by two distinct designations: “democracy” when it fits the IYI, and “populism” when the plebeians dare voting in a way that contradicts his preferences….

The IYI has been wrong, historically, on Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, transfats, freudianism, portfolio theory, linear regression, Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, selfish gene, Bernie Madoff (pre-blowup) and p-values. But he is convinced that his current position is right.

That’s all yummy red meat to a person like me, especially in the wake of November 8, which Taleb’s piece predates. But the last paragraph quoted above reminded me that I had read something critical about a paper in which Taleb applies the precautionary principle. So I found the paper, which is by Taleb (lead author) and several others. This is from the abstract:

Here we formalize PP [the precautionary principle], placing it within the statistical and probabilistic structure of “ruin” problems, in which a system is at risk of total failure, and in place of risk we use a formal “fragility” based approach. In these problems, what appear to be small and reasonable risks accumulate inevitably to certain irreversible harm….

Our analysis makes clear that the PP is essential for a limited set of contexts and can be used to justify only a limited set of actions. We discuss the implications for nuclear energy and GMOs. GMOs represent a public risk of global harm, while harm from nuclear energy is comparatively limited and better characterized. PP should be used to prescribe severe limits on GMOs. [“The Precautionary Principle (With Application to the Genetic Modification of Organisms),” Extreme Risk Initiative – NYU School of Engineering Working Paper Series]

Jon Entine demurs:

Taleb has recently become the darling of GMO opponents. He and four colleagues–Yaneer Bar-Yam, Rupert Read, Raphael Douady and Joseph Norman–wrote a paper, The Precautionary Principle (with Application to the Genetic Modification of Organisms), released last May and updated last month, in which they claim to bring risk theory and the Precautionary Principle to the issue of whether GMOs might introduce “systemic risk” into the environment….

The crux of his claims: There is no comparison between conventional selective breeding of any kind, including mutagenesis which requires the radiation or chemical dousing of seeds (and has resulted in more than 2500 varieties of fruits, vegetables, and nuts, almost all available in organic varieties) versus what he calls the top-down engineering that occurs when a gene is taken from an organism and transferred to another (ignoring that some forms of genetic engineering, including gene editing, do not involve gene transfers). Taleb goes on to argue that the chance of ecocide, or the destruction of the environment and potentially of humans, increases incrementally with each additional transgenic trait introduced into the environment. In other words, in his mind genetic engineering is a classic “black swan” scenario.

Neither Taleb nor any of the co-authors has any background in genetics or agriculture or food, or even familiarity with the Precautionary Principle as it applies to biotechnology, which they liberally invoke to justify their positions….

One of the paper’s central points displays his clear lack of understanding of modern crop breeding. He claims that the rapidity of the genetic changes using the rDNA technique does not allow the environment to equilibrate. Yet rDNA techniques are actually among the safest crop breeding techniques in use today because each rDNA crop represents only 1-2 genetic changes that are more thoroughly tested than any other crop breeding technique. The number of genetic changes caused by hybridization or mutagenesis techniques are orders of magnitude higher than rDNA methods. And no testing is required before widespread monoculture-style release. Even selective breeding likely represents a more rapid change than rDNA techniques because of the more rapid employment of the method today.

In essence, Taleb’s ecocide argument applies just as much to other agricultural techniques in both conventional and organic agriculture. The only difference between GMOs and other forms of breeding is that genetic engineering is closely evaluated, minimizing the potential for unintended consequences. Most geneticists–experts in this field as opposed to Taleb–believe that genetic engineering is far safer than any other form of breeding.

Moreover, as Maxx Chatsko notes, the natural environment has encountered new traits from unthinkable events (extremely rare occurrences of genetic transplantation across continents, species and even planetary objects, or extremely rare single mutations that gave an incredible competitive advantage to a species or virus) that have led to problems and genetic bottlenecks in the past — yet we’re all still here and the biosphere remains tremendously robust and diverse. So much for Mr. Doomsday. [“Is Nassim Taleb a ‘Dangerous Imbecile’ or on [sic] the Pay of Anti-GMO Activists?” Genetic Literacy Project, November 13, 2014 — see footnote for an explanation of “dangerous imbecile”]

Gregory Conko also demurs:

The paper received a lot of attention in scientific circles, but was roundly dismissed for being long on overblown rhetoric but conspicuously short on any meaningful reference to the scientific literature describing the risks and safety of genetic engineering, and for containing no understanding of how modern genetic engineering fits within the context of centuries of far more crude genetic modification of plants, animals, and microorganisms.

Well, Taleb is back, this time penning a short essay published on The New York Times’s DealB%k blog with co-author Mark Spitznagel. The authors try to draw comparisons between the recent financial crisis and GMOs, claiming the latter represent another “Too Big to Fail” crisis in waiting. Unfortunately, Taleb’s latest contribution is nothing more than the same sort of evidence-free bombast posing as thoughtful analysis. The result is uninformed and/or unintelligible gibberish….

“In nature, errors stay confined and, critically, isolated.” Ebola, anyone? Avian flu? Or, for examples that are not “in nature” but the “small step” changes Spitznagel and Taleb seem to prefer, how about the introduction of hybrid rice plants into parts of Asia that have led to widespread outcrossing to and increased weediness in wild red rices? Or kudzu? Again, this seems like a bold statement designed to impress. But it is completely untethered to any understanding of what actually occurs in nature or the history of non-genetically engineered crop introductions….

“[T]he risk of G.M.O.s are more severe than those of finance. They can lead to complex chains of unpredictable changes in the ecosystem, while the methods of risk management with G.M.O.s — unlike finance, where some effort was made — are not even primitive.” Again, the authors evince no sense that they understand how extensively breeders have been altering the genetic composition of plants and other organisms for the past century, or what types of risk management practices have evolved to coincide.

In fact, compared with the wholly voluntary (and yet quite robust) risk management practices that are relied upon to manage introductions of mutant varieties, somaclonal variants, wide crosses, and the products of cell fusion, the legally obligatory risk management practices used for genetically engineered plant introductions are vastly over-protective.

In the end, Spitznagel and Taleb’s argument boils down to a claim that ecosystems are complex and rDNA modification seems pretty mysterious to them, so nobody could possibly understand it. Until they can offer some arguments that take into consideration what we actually know about genetic modification of organisms (by various methods) and why we should consider rDNA modification uniquely risky when other methods result in even greater genetic changes, the rest of us are entitled to ignore them. [“More Unintelligible Gibberish on GMO Risks from Nicholas Nassim Taleb,” Competitive Enterprise Institute, July 16, 2015]

And despite my enjoyment of Taleb’s red-meat commentary about IYIs, I have to admit that I’ve had my fill of Taleb’s probabilistic gibberish. This is from “Fooled by Non-Randomness,” which I wrote seven years ago about Taleb’s Fooled by Randomness:

The first reason that I am unfooled by Fooled… might be called a meta-reason. Standing back from the book, I am able to perceive its essential defect: According to Taleb, human affairs — especially economic affairs, and particularly the operations of financial markets — are dominated by randomness. But if that is so, only a delusional person can truly claim to understand the conduct of human affairs. Taleb claims to understand the conduct of human affairs. Taleb is therefore either delusional or omniscient.

Given Taleb’s humanity, it is more likely that he is delusional — or simply fooled, but not by randomness. He is fooled because he proceeds from the assumption of randomness instead of exploring the ways and means by which humans are actually capable of shaping events. Taleb gives no more than scant attention to those traits which, in combination, set humans apart from other animals: self-awareness, empathy, forward thinking, imagination, abstraction, intentionality, adaptability, complex communication skills, and sheer brain power. Given those traits (in combination) the world of human affairs cannot be random. Yes, human plans can fail of realization for many reasons, including those attributable to human flaws (conflict, imperfect knowledge, the triumph of hope over experience, etc.). But the failure of human plans is due to those flaws — not to the randomness of human behavior.

What Taleb sees as randomness is something else entirely. The trajectory of human affairs often is unpredictable, but it is not random. For it is possible to find patterns in the conduct of human affairs, as Taleb admits (implicitly) when he discusses such phenomena as survivorship bias, skewness, anchoring, and regression to the mean….

[Random events are] events which are repeatable, convergent on a limiting value, and truly patternless over a large number of repetitions. Evolving economic events (e.g., stock-market trades, economic growth) are not alike (in the way that dice are, for example), they do not converge on limiting values, and they are not patternless, as I will show.

In short, Taleb fails to demonstrate that human affairs in general or financial markets in particular exhibit randomness, properly understood….

A bit of unpredictability (or “luck”) here and there does not make for a random universe, random lives, or random markets. If a bit of unpredictability here and there dominated our actions, we wouldn’t be here to talk about randomness — and Taleb wouldn’t have been able to marshal his thoughts into a published, marketed, and well-sold book.

Human beings are not “designed” for randomness. Human endeavors can yield unpredictable results, but those results do not arise from random processes; they derive from skill or the lack thereof, knowledge or the lack thereof (including the kinds of self-delusions about which Taleb writes), and conflicting objectives….

No one believes that Ty Cobb, Babe Ruth, Ted Williams, Christy Mathewson, Warren Spahn, and the dozens of other baseball players who rank among the truly great were lucky. No one believes that the vast majority of the tens of thousands of minor leaguers who never enjoyed more than the proverbial cup of coffee were unlucky. No one believes that the vast majority of the millions of American males who never made it to the minor leagues were unlucky. Most of them never sought a career in baseball; those who did simply lacked the requisite skills.

In baseball, as in life, “luck” is mainly an excuse and rarely an explanation. We prefer to apply “luck” to outcomes when we don’t like the true explanations for them. In the realm of economic activity and financial markets, one such explanation … is the exogenous imposition of governmental power….

Given what I have said thus far, I find it almost incredible that anyone believes in the randomness of financial markets. It is unclear where Taleb stands on the random-walk hypothesis, but it is clear that he believes financial markets to be driven by randomness. Yet, contradictorily, he seems to attack the efficient-markets hypothesis (see pp. 61-62), which is the foundation of the random-walk hypothesis.

What is the random-walk hypothesis? In brief, it is this: Financial markets are so efficient that they instantaneously reflect all information bearing on the prices of financial instruments that is then available to persons buying and selling those instruments….

When we step back from day-to-day price changes, we are able to see the underlying reality: prices (instead of changes) and price trends (which are the opposite of randomness). This (correct) perspective enables us to see that stock prices (on the whole) are not random, and to identify the factors that influence the broad movements of the stock market. For one thing, if you look at stock prices correctly, you can see that they vary cyclically….

[But] the long-term trend of the stock market (as measured by the S&P 500) is strongly correlated with GDP. And broad swings around that trend can be traced to governmental intervention in the economy….

The wild swings around the trend line began in the uncertain aftermath of World War I, which saw the imposition of production and price controls. The swings continued with the onset of the Great Depression (which can be traced to governmental action), the advent of the anti-business New Deal, and the imposition of production and price controls on a grand scale during World War II. The next downswing was occasioned by the culmination of the Great Society, the “oil shocks” of the early 1970s, and the raging inflation that was touched off by — you guessed it — government policy. The latest downswing is owed mainly to the financial crisis born of yet more government policy: loose money and easy loans to low-income borrowers.

And so it goes, wildly but predictably enough if you have the faintest sense of history. The moral of the story: Keep your eye on government and a hand on your wallet.

There is randomness in economic affairs, but they are not dominated by randomness. They are dominated by intentions, including especially the intentions of the politicians and bureaucrats who run governments. Yet, Taleb has no space in his book for the influence of their deeds on economic activity and financial markets.

Taleb is right to disparage those traders (professional and amateur) who are lucky enough to catch upswings, but are unprepared for downswings. And he is right to scoff at their readiness to believe that the current upswing (uniquely) will not be followed by a downswing (“this time it’s different”).

But Taleb is wrong to suggest that traders are fooled by randomness. They are fooled to some extent by false hope, but more profoundly by their inability to perceive the economic damage wrought by government. They are not alone, of course; most of the rest of humanity shares their perceptual failings.

Taleb, in that respect, is only somewhat different than most of the rest of humanity. He is not fooled by false hope, but he is fooled by non-randomness — the non-randomness of government’s decisive influence on economic activity and financial markets. In overlooking that influence he overlooks the single most powerful explanation for the behavior of markets in the past 90 years.

I followed up a few days later with “Randomness Is Over-Rated”:

What we often call random events in human affairs really are non-random events whose causes we do not and, in some cases, cannot know. Such events are unpredictable, but they are not random….

Randomness … is found in (a) the results of non-intentional actions, where (b) we lack sufficient knowledge to understand the link between actions and results.

It is unreasonable to reduce intentional human behavior to probabilistic formulas. Humans don’t behave like dice, roulette balls, or similar “random” devices. But that is what Taleb (and others) do when they ascribe unusual success in financial markets to “luck.”…

I say it again: The most successful professionals are not successful because of luck, they are successful because of skill. There is no statistically predetermined percentage of skillful traders; the actual percentage depends on the skills of entrants and their willingness (if skillful) to make a career of it….

The outcomes of human endeavor are skewed because the distribution of human talents is skewed. It would be surprising to find as many as one-half of traders beating the long-run average performance of the various markets in which they operate….

[Taleb] sees an a priori distribution of “winners” and “losers,” where “winners” are determined mainly by luck, not skill. Moreover, we — the civilians on the sidelines — labor under the false impression about the relative number of “winners”….

[T]here are no “odds” favoring success — even in financial markets. Financial “players” do what they can do, and most of them — like baseball players — simply don’t have what it takes for great success. Outcomes are skewed, not because of (fictitious) odds but because talent is distributed unevenly.

The real lesson … is not to assume that the “winners” are merely lucky. No, the real lesson is to seek out those “winners” who have proven their skills over a long period of time, through boom and bust and boom and bust.

Those who do well, over the long run, do not do so merely because they have survived. They have survived because they do well.

There’s much more, and you should read the whole thing(s), as they say.

I turn now to Taleb’s version of the precautionary principle, which seems tailored to support the position that Taleb wants to support, namely, that GMOs should be banned. Who gets to decide what “threats” should be included in the “limited set of contexts” where the PP applies? Taleb, of course. Taleb has excreted a circular pile of horse manure; thus:

  • The PP applies only where I (Taleb) say it applies.
  • I (Taleb) say that the PP applies to GMOs.
  • Therefore, the PP applies to GMOs.

I (the proprietor of this blog) say that the PP ought to apply to the works of Nassim Nicholas Taleb. They ought to be banned because they may perniciously influence gullible readers.

I’ll justify my facetious proposal to ban Taleb’s writings by working my way through the “logic” of what Taleb calls the non-naive version of the PP, on which he bases his anti-GMO stance. Here are the main points of Taleb’s argument, extracted from “The Precautionary Principle (With Application to the Genetic Modification of Organisms).” Taleb’s statements (with minor, non-substantive elisions) are in roman type, followed by my comments in bold type.

The purpose of the PP is to avoid a certain class of what, in probability and insurance, is called “ruin” problems. A ruin problem is one where outcomes of risks have a non-zero probability of resulting in unrecoverable losses. An often-cited illustrative case is that of a gambler who loses his entire fortune and so cannot return to the game. In biology, an example would be a species that has gone extinct. For nature, “ruin” is ecocide: an irreversible termination of life at some scale, which could be planetwide.

The extinction of a species is ruinous only if one believes that species shouldn’t become extinct. But they do, because that’s the way nature works. Ruin, as Taleb means it, is avoidable, self-inflicted, and (at some point) irreversibly catastrophic. Let’s stick to that version of it.

Our concern is with public policy. While an individual may be advised to not “bet the farm,” whether or not he does so is generally a matter of individual preferences. Policy makers have a responsibility to avoid catastrophic harm for society as a whole; the focus is on the aggregate, not at the level of single individuals, and on global-systemic, not idiosyncratic, harm. This is the domain of collective “ruin” problems.

This assumes that government can do something about a potentially catastrophic harm — or should do something about it. The Great Depression, for example, began as a potentially catastrophic harm that government made into a real catastrophic harm (for millions of Americans, though not all of them) and prolonged through its actions. Here Taleb commits the nirvana fallacy, by implicitly ascribing to government the power to anticipate harm without making a Type I or Type II error, and then to take appropriate and effective action to prevent or ameliorate that harm.

By the ruin theorems, if you incur a tiny probability of ruin as a “one-off” risk, survive it, then do it again (another “one-off” deal), you will eventually go bust with probability 1. Confusion arises because it may seem that the “one-off” risk is reasonable, but that also means that an additional one is reasonable. This can be quantified by recognizing that the probability of ruin approaches 1 as the number of exposures to individually small risks, say one in ten thousand, increases. For this reason a strategy of risk taking is not sustainable and we must consider any genuine risk of total ruin as if it were inevitable.

But you have to know in advance that a particular type of risk will be ruinous. Which means that — given the uncertainty of such knowledge — the perception of (possible) ruin is in the eye of the assessor. (I’ll have a lot more to say about uncertainty.)
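
The arithmetic behind Taleb’s “ruin theorem” is not in dispute; the knowability of its inputs is. Here is the cumulative-exposure calculation in a few lines, using the illustrative one-in-ten-thousand risk mentioned in the quoted passage. Everything turns on whether such a number can be known, or is even non-zero, for the technology in question.

```python
# If each exposure carries an independent probability p of ruin, the chance
# of surviving n exposures is (1 - p)**n, which shrinks toward zero as n
# grows. The p and n values below are illustrative only.
p = 1e-4  # a "one in ten thousand" risk per exposure
for n in (1, 100, 10_000, 100_000, 1_000_000):
    ruin = 1 - (1 - p) ** n
    print(f"n = {n:>9,}   P(ruin) = {ruin:.4f}")
```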

A way to formalize the ruin problem in terms of the destructive consequences of actions identifies harm as not about the amount of destruction, but rather a measure of the integrated level of destruction over the time it persists. When the impact of harm extends to all future times, i.e. forever, then the harm is infinite. When the harm is infinite, the product of any non-zero probability and the harm is also infinite, and it cannot be balanced against any potential gains, which are necessarily finite.

As discussed below, the concept of probability is inapplicable here. Further, and granting the use of probability for the sake of argument, Taleb’s contention holds only if there’s no doubt that the harm will be infinite, that is, totally ruinous. If there’s room for doubt, there’s room for disagreement as to the extent of the harm (if any) and the value of attempting to counter it (or not). Otherwise, it would be “rational” to devote as much as the entire economic output of the world to combat so-called catastrophic anthropogenic global warming (CAGW) because some “expert” says that there’s a non-zero probability of its occurrence. In practical terms, the logic of such a policy is that if you’re going to die of heat stroke, you might as well do it sooner rather than later — which would be one of the consequences of, say, banning the use of fossil fuels. Other consequences would be freezing to death if you live in a cold climate and starving to death because foodstuffs couldn’t be grown, harvested, processed, or transported. Those are also infinite harms, and they arise from Taleb’s preferred policy of acting on little information about a risk because (in someone’s view) it could lead to infinite harm. There’s a relevant cost-benefit analysis for you.

Because the “cost” of ruin is effectively infinite, cost-benefit analysis (in which the potential harm and potential gain are multiplied by their probabilities and weighed against each other) is no longer a useful paradigm.

Here, Taleb displays profound ignorance in two fields: economics and probability. His ignorance of economics might be excusable, but his ignorance of probability isn’t, inasmuch as he’s made a name for himself (and probably a lot of money) by parading his “sophisticated” understanding of it in books and on the lecture circuit.

Regarding the economics of cost-benefit analysis (CBA), it’s properly an exercise for individual persons and firms, not governments. When a government undertakes CBA, it implicitly (and arrogantly) assumes that the costs of a project (which are defrayed in the end by taxpayers) can be weighed against the monetary benefits of the project (which aren’t distributed in proportion to the costs and are often deliberately distributed so that taxpayers bear most of the costs and non-taxpayers reap most of the benefits).

Regarding probability, Taleb quite wrongly insists on ascribing probabilities to events that might (or might not) occur in the future. A probability is a statement about a very large number of like events, each of which has an unpredictable (random) outcome. A valid probability is based either on a large number of past “trials” or a mathematical certainty (e.g., a fair coin, tossed a large number of times — 100 or more — will come up heads about half the time and tails about half the time). Probability, properly understood, says nothing about the outcome of an individual future event; that is, it says nothing about what will happen next in a truly random trial, such as a coin toss. Probability certainly says nothing about the occurrence of a unique event. Therefore, Taleb cannot validly assign a probability of “ruin” to a speculative event as little understood (by him) as the effect of GMOs on the world’s food supply.
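
A quick sketch of what a valid probability statement does and does not tell you, using the fair-coin example above: the relative frequency of heads settles near one-half only over many tosses, and nothing in the calculation speaks to the next toss, let alone to a one-of-a-kind future event.

```python
import random
random.seed(42)

# Simulate a fair coin and watch the relative frequency of heads converge.
flips = [random.random() < 0.5 for _ in range(100_000)]
for n in (10, 100, 1_000, 10_000, 100_000):
    freq = sum(flips[:n]) / n
    print(f"after {n:>7,} tosses: share of heads = {freq:.3f}")
```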

The non-naive PP bridges the gap between precaution and evidentiary action using the ability to evaluate the difference between local and global risks.

In other words, if there’s a subjective, non-zero probability of CAGW in Taleb’s mind, that probability should outweigh evidence about the wrongness of a belief in CAGW. And such evidence is ample, not only in the various scientific fields that impinge on climatology, but also in the failure of almost all climate models to predict the long pause in what’s called global warming. Ah, but “almost all” — in Taleb’s mind — means that there’s a non-zero probability of CAGW.  It’s the “heads I win, tails you lose” method of gambling on the flip of a coin.

Here’s another way of putting it: Taleb turns the scientific method upside down by rejecting the null hypothesis (e.g., no CAGW) on the basis of evidence that confirms it (no observable rise in temperatures approaching the predictions of CAGW theory) because a few predictions happened to be close to the truth. Taleb, in his guise as the author of Fooled by Randomness, would correctly label such predictions as lucky.

While evidentiary approaches are often considered to reflect adherence to the scientific method in its purest form, it is apparent that these approaches do not apply to ruin problems. In an evidentiary approach to risk (relying on evidence-based methods), the existence of a risk or harm occurs when we experience that risk or harm. In the case of ruin, by the time evidence comes it will by definition be too late to avoid it. Nothing in the past may predict one fatal event. Thus standard evidence-based approaches cannot work.

It’s misleading to say that “by the time the evidence comes it will be by definition too late to avoid it.” Taleb assumes, without proof, that the linkage between GMOs, say, and a worldwide food crisis will occur suddenly and without warning (or sufficient warning), as if GMOs will be everywhere at once and no one will have been paying attention to their effects as their use spread. That’s unlikely given broad disparities in the distribution of GMOs, the state of vigilance about them, and resistance to them in many quarters. What Taleb really says is this: Some people (Taleb among them) believe that GMOs pose an existential risk with a probability greater than zero. (Any such “probability” is fictional, as discussed above.) Therefore, the risk of ruin from GMOs is greater than zero and ruin is inevitable. By that logic, there must be dozens of certain-death scenarios for the planet. Why is Taleb wasting his time on GMOs, which are small potatoes compared with, say, asteroids? And why don’t we just slit our collective throat and get it over with?

Since there are mathematical limitations to predictability of outcomes in a complex system, the central issue to determine is whether the threat of harm is local (hence globally benign) or carries global consequences. Scientific analysis can robustly determine whether a risk is systemic, i.e. by evaluating the connectivity of the system to propagation of harm, without determining the specifics of such a risk. If the consequences are systemic, the associated uncertainty of risks must be treated differently than if it is not. In such cases, precautionary action is not based on direct empirical evidence but on analytical approaches based upon the theoretical understanding of the nature of harm. It relies on probability theory without computing probabilities. The essential question is whether or not global harm is possible or not.

More of the same.

Everything that has survived is necessarily non-linear to harm. If I fall from a height of 10 meters I am injured more than 10 times than if I fell from a height of 1 meter, or more than 1000 times than if I fell from a height of 1 centimeter, hence I am fragile. In general, every additional meter, up to the point of my destruction, hurts me more than the previous one.

This explains the necessity of considering scale when invoking the PP. Polluting in a small way does not warrant the PP because it is essentially less harmful than polluting in large quantities, since harm is non-linear.

This is just a way of saying that there’s a threshold of harm, and harm becomes ruinous when the threshold is surpassed. Which is true in some cases, but there’s a wide variety of cases and a wide range of thresholds. This is just a framing device meant to set the reader up for the sucker punch, which is that the widespread use of GMOs will be ruinous, at some undefined point. Well, we’ve been hearing that about CAGW for twenty years, and the undefined point keeps receding into the indefinite future.
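
Taleb’s non-linearity point, taken on its own terms, is just convexity. A toy damage function makes it concrete; the quadratic form and the numbers are placeholders, not anyone’s estimate of actual harm.

```python
# A convex "harm" function of dose: ten small doses do far less damage than
# one large dose of the same total size, because harm grows faster than
# linearly. Purely illustrative.
def harm(dose):
    return dose ** 2

print("ten doses of 1:", 10 * harm(1))   # 10
print("one dose of 10:", harm(10))       # 100
```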

Thus, when impacts extend to the size of the system, harm is severely exacerbated by non-linear effects. Small impacts, below a threshold of recovery, do not accumulate for systems that retain their structure. Larger impacts cause irreversible damage. We should be careful, however, of actions that may seem small and local but then lead to systemic consequences.

“When impacts extend to the size of the system” means “when ruin is upon us there is ruin.” It’s a tautology without empirical content.

An increase in uncertainty leads to an increase in the probability of ruin, hence “skepticism” is that its impact on decisions should lead to increased, not decreased conservatism in the presence of ruin. Hence skepticism about climate models should lead to more precautionary policies.

This is through the looking glass and into the wild blue yonder. More below.

The rest of the paper is devoted to two things. One of them is making the case against GMOs because they supposedly exemplify the kind of risk that’s covered by the non-naive PP. I’ll let Jon Entine and Gregory Conko (quoted above) speak for me on that issue.

The other thing that the rest of the paper does is to spell out and debunk ten supposedly fallacious arguments against PP. I won’t go into them here because Taleb’s version of PP is self-evidently fallacious. The fallacy can be found in figure 6 of the paper:

[Taleb et al., figure 6: two normal distributions, one with fatter tails (greater uncertainty) and one with thinner tails (lesser uncertainty)]

Taleb pulls an interesting trick here — or perhaps he exposes his fundamental ignorance about probability. Let’s take it a step at a time:

  1. Figure 6 depicts two normal distributions. But what are they normal distributions of? Let’s say that they’re supposed to be normal distributions of the probability of the occurrence of CAGW (however that might be defined) by a certain date, in the absence of further steps to mitigate it (e.g., banning the use of fossil fuels forthwith). There’s no known normal distribution of the probability of CAGW because, as discussed above, CAGW is a unique, hypothesized (future) event which cannot have a probability. It’s not 100 tosses of a fair coin.
  2. The curves must therefore represent something about models that predict the arrival of CAGW by a certain date. Perhaps those predictions are normally distributed, though that has nothing to do with the “probability” of CAGW if all of the predictions are wrong.
  3. The two curves shown in Taleb’s figure 6 are meant (by Taleb) to represent greater and lesser certainty about the arrival of CAGW (or the ruinous scenario of his choice), as depicted by climate models.
  4. But if models are adjusted or built anew in the face of evidence about their shortcomings (i.e., their gross overprediction of temperatures since 1998), the newer models (those with presumably greater certainty) will have two characteristics: (a) The tails will be thinner, as Taleb suggests. (b) The mean will shift to the left or right; that is, they won’t have the same mean.
  5. In the case of CAGW, the mean will shift to the right because it’s already known that extant models overstate the risk of “ruin.” The left tail of the distribution of the new models will therefore shift to the right, further reducing the “probability” of CAGW.
  6. Taleb’s trick is to ignore that shift and, further, to implicitly assume that the two distributions coexist. By doing that he can suggest that there’s an “increase in uncertainty [that] leads to an increase in the probability of ruin.” In fact, there’s a decrease in uncertainty, and therefore a decrease in the probability of ruin. (A numerical sketch of this point follows the list.)
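Here is a minimal numerical sketch of points 4 through 6, with invented parameters (none of the numbers comes from Taleb’s figure or from any climate model): if the revised projection both thins its tails and shifts its mean away from the “ruin” region, the tail probability collapses; holding the means equal, as figure 6 implicitly does, is what makes greater spread look like greater danger.

```python
from math import erfc, sqrt

def prob_arrival_before(year: float, mean: float, sd: float) -> float:
    """P(arrival year < year) for a normal(mean, sd) projection."""
    return 0.5 * erfc((mean - year) / (sd * sqrt(2)))

RUIN_BY = 2050.0  # arbitrary "ruin arrives by this year" cutoff (illustrative)

# Invented parameters, purely for illustration:
old_model = {"mean": 2060.0, "sd": 20.0}  # early projection: wide tails
new_model = {"mean": 2100.0, "sd": 10.0}  # revised: thinner tails AND mean shifted right

for name, m in (("old (high uncertainty)", old_model),
                ("new (lower uncertainty)", new_model)):
    p = prob_arrival_before(RUIN_BY, m["mean"], m["sd"])
    print(f"{name:<24} P(arrival before {RUIN_BY:.0f}) = {p:.2e}")

# Taleb's comparison, in effect, holds the mean fixed and varies only the
# spread, so the wider curve always puts more mass in the "ruin" tail:
print(f"same mean, wide sd:   {prob_arrival_before(RUIN_BY, 2060.0, 20.0):.2e}")
print(f"same mean, narrow sd: {prob_arrival_before(RUIN_BY, 2060.0, 10.0):.2e}")
```

Whether revised projections actually shift that way is an empirical question; the point is only that comparing spreads while silently holding the means equal builds the conclusion into the picture.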

I’ll say it again: As evidence is gathered, there is less uncertainty; that is, the high-uncertainty condition precedes the low-uncertainty one. The movement from high uncertainty to low uncertainty would result in the assignment of a lower probability to a catastrophic outcome (assuming, for the moment, that such a probability is meaningful). And that would be a good reason to worry less about the eventuality of the catastrophic outcome. Taleb wants to compare the two distributions, as if the earlier one (based on little evidence) were as valid as the later one (based on additional evidence).

That’s why Taleb counsels against “evidentiary approaches.” In Taleb’s worldview, knowing little about a potential risk to health, welfare, and existence is a good reason to take action with respect to that risk. Therefore, if you know little about the risk, you should act immediately and with all of the resources at your disposal. Why? Because the risk might suddenly cause an irreversible calamity. But that’s not true of CAGW or GMOs. There’s time to gather evidence as to whether a calamity truly looms, and then — if necessary — to take steps to avert it, steps that are more likely to be effective because they’re evidence-based. Further, if no calamity looms, a tremendous waste of resources will be averted.

It follows from the non-naive PP — as interpreted by Taleb — that all human beings should be sterilized and therefore prevented from procreating. This is so because sometimes just a few human beings — Hitler, Mussolini, and Tojo, for example — can cause wars. And some of those wars have harmed human beings on a nearly global scale. Global sterilization is therefore necessary, to ensure against the birth of new Hitlers, Mussolinis, and Tojos — even if it prevents the birth of new Schweitzers, Salks, Watsons, Cricks, and Mother Teresas.

In other words, the non-naive PP (or Taleb’s version of it) is pseudo-scientific claptrap. It can be used to justify any extreme and nonsensical position that its user wishes to advance. It can be summed up in an Orwellian sentence: There is certainty in uncertainty.

Perhaps this is better: You shouldn’t get out of bed in the morning because you don’t know with certainty everything that will happen to you in the course of the day.

*     *     *

NOTE: The title of Jon Entine’s blog post, quoted above, refers to Taleb as a “dangerous imbecile.” Here’s Entine’s explanation of that characterization:

If you think the headline of this blog [post] is unnecessarily inflammatory, you are right. It’s an ad hominem way to deal with public discourse, and it’s unfair to Nassim Taleb, the New York University statistician and risk analyst. I’m using it to make a point–because it’s Taleb himself who regularly invokes such ugly characterizations of others….

…Taleb portrays GMOs as a ‘catastrophe in waiting’–and has taken to personally lashing out at those who challenge his conclusions–and yes, calling them “imbeciles” or paid shills.

He recently accused Anne Glover, the European Union’s Chief Scientist, and one of the most respected scientists in the world, of being a “dangerous imbecile” for arguing that GM crops and foods are safe and that Europe should apply science based risk analysis to the GMO approval process–views reflected in summary statements by every major independent science organization in the world.

Taleb’s ugly comment was gleefully and widely circulated by anti-GMO activist web sites. GMO Free USA designed a particularly repugnant visual to accompany their post.

Taleb is known for his disagreeable personality–as Keith Kloor at Discover noted, the economist Noah Smith had called Taleb a “vulgar bombastic windbag”, adding, “and I like him a lot”. He has a right to flaunt an ego bigger than the Goodyear blimp. But that doesn’t make his argument any more persuasive.

*     *     *

Related posts:
“Warmism”: The Myth of Anthropogenic Global Warming
More Evidence against Anthropogenic Global Warming
Yet More Evidence against Anthropogenic Global Warming
Pascal’s Wager, Morality, and the State
Modeling Is Not Science
Fooled by Non-Randomness
Randomness Is Over-Rated
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
Beware the Rare Event
Demystifying Science
Pinker Commits Scientism
AGW: The Death Knell
The Pretence of Knowledge
“The Science Is Settled”
“Settled Science” and the Monty Hall Problem
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II)
Is Science Self-Correcting?