Science and Understanding

Nature, Nurture, and Leniency

I recently came across an article by Brian Boutwell, “Why Parenting May not Matter and Why Most Social Science Research Is Probably Wrong” (Quillette, December 1, 2015). Boutwell is an associate professor of criminology and criminal justice at Saint Louis University. Here’s some of what he has to say about nature, nurture, and behavior:

Despite how it feels, your mother and father (or whoever raised you) likely imprinted almost nothing on your personality that has persisted into adulthood…. I do have evidence, though, and by the time we’ve strolled through the menagerie of reasons to doubt parenting effects, I think another point will also become evident: the problems with parenting research are just a symptom of a larger malady plaguing the social and health sciences. A malady that needs to be dealt with….

[L]et’s start with a study published recently in the prestigious journal Nature Genetics.1 Tinca Polderman and colleagues just completed the Herculean task of reviewing nearly all twin studies published by behavior geneticists over the past 50 years….

Genetic factors were consistently relevant, differentiating humans on a range of health and psychological outcomes (in technical parlance, human differences are heritable). The environment, not surprisingly, was also clearly and convincingly implicated….

[B]ehavioral geneticists make a finer grain distinction than most about the environment, subdividing it into shared and non-shared components. Not much is really complicated about this. The shared environment makes children raised together similar to each other. The term encompasses the typical parenting effects that we normally envision when we think about environmental variables. Non-shared influences capture the unique experiences of siblings raised in the same home; they make siblings different from one another….

Based on the results of classical twin studies, it just doesn’t appear that parenting—whether mom and dad are permissive or not, read to their kid or not, or whatever else—impacts development as much as we might like to think. Regarding the cross-validation that I mentioned, studies examining identical twins separated at birth and reared apart have repeatedly revealed (in shocking ways) the same thing: these individuals are remarkably similar when in fact they should be utterly different (they have completely different environments, but the same genes).3 Alternatively, non-biologically related adopted children (who have no genetic commonalities) raised together are utterly dissimilar to each other—despite in many cases having decades of exposure to the same parents and home environments.

One logical explanation for this is a lack of parenting influence for psychological development. Judith Rich Harris made this point forcefully in her book The Nurture Assumption…. As Harris notes, parents are not to blame for their children’s neuroses (beyond the genes they contribute to the manufacturing of that child), nor can they take much credit for their successful psychological adjustment. To put a finer point on what Harris argued, children do not transport the effects of parenting (whatever they might be) outside the home. The socialization of children certainly matters (remember, neither personality nor temperament is 100 percent heritable), but it is not the parents who are the primary “socializers”, that honor goes to the child’s peer group….

Is it possible that parents really do shape children in deep and meaningful ways? Sure it is…. The trouble is that most research on parenting will not help you in the slightest because it doesn’t control for genetic factors….

Natural selection has wired into us a sense of attachment for our offspring. There is no need to graft on beliefs about “the power of parenting” in order to justify our instinct that being a good parent is important. Consider this: what if parenting really doesn’t matter? Then what? The evidence for pervasive parenting effects, after all, looks like a foundation of sand likely to slide out from under us at any second. If your moral constitution requires that you exert god-like control over your kid’s psychological development in order to treat them with the dignity afforded any other human being, then perhaps it is time to recalibrate your moral compass…. If you want happy children, and you desire a relationship with them that lasts beyond when they’re old enough to fly the nest, then be good to your kids. Just know that it probably will have little effect on the person they will grow into.

Color me unconvinced. There’s a lot of hand-waving in Boutwell’s piece, but little in the way of crucial facts, such as:

  • How is behavior quantified?
  • Does the quantification account for all aspects of behavior (unlikely), or only those aspects that are routinely quantified (e.g., criminal convictions)?
  • Is it meaningful to say that about 50 percent of behavior is genetically determined, 45 percent is peer-driven, and 0-5 percent is due to “parenting” (as Judith Rich Harris does)? Which 50 percent, 45 percent, and 0-5 percent? And how does one add various types of behavior?
  • How does one determine (outside an unrealistic experiment) the extent to which “children do not transport the effects of parenting (whatever they might be) outside the home”?

The measurement of behavior can’t possibly be as rigorous and comprehensive as the measurement of intelligence. And even those researchers who are willing to countenance and estimate the heritability of intelligence give varying estimates of its magnitude, ranging from 50 to 80 percent.

I wonder if Boutwell, Harris, et al. would like to live in a world in which parents quit teaching their children to obey the law; refrain from lying, stealing, and hurting others; honor their obligations; respect old people; treat babies with care; and work for a living (“money doesn’t grow on trees”).

Unfortunately, the world in which we live — even in the United States — seems more and more to resemble the kind of world in which parents have failed in their duty to inculcate in their children the values of honesty, respect, and hard work. This is from a post at Dyspepsia Generation, “The Spoiled Children of Capitalism” (no longer online):

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it. And, sadly, they passed their principles, if one may use the term so loosely, down the generations to the point where young people today are scarcely worth using for fertilizer.

In 1919, or 1929, or especially 1939, the adolescents of 1969 would have had neither the leisure nor the money to create the Woodstock Nation. But mommy and daddy shelled out because they didn’t want their little darlings to be caught short, and consequently their little darlings became the worthless whiners who voted for people like Bill Clinton and Barack Obama [and who were people like Bill Clinton and Barack Obama: ED.], with results as you see them. Now that history is catching up to them, a third generation of losers can think of nothing better to do than camp out on Wall Street in hopes that the Cargo will suddenly begin to arrive again.

Good luck with that.

I subscribe to the view that the rot set in after World War II. That rot, in the form of slackerism, is more prevalent now than it ever was. It is not for nothing that Gen Y is also known as the Boomerang Generation.

Nor is it surprising that campuses have become hotbeds of petulant and violent behavior. And it’s not just students, but also faculty and administrators — many of whom are boomers. Where were these people before the 1960s, when the boomers came of age? Do you suppose that their sudden emergence was the result of a massive genetic mutation that swept across the nation in the late 1940s? I doubt it very much.

Their sudden emergence was due to the failure of too many members of the so-called Greatest Generation to inculcate in their children the values of honesty, respect, and hard work. How does one do that? By being clear about expectations and by setting limits on behavior — limits that are enforced swiftly, unequivocally, and sometimes with the palm of a hand. When children learn that they can “get away” with dishonesty, disrespect, and sloth, guess what? They become dishonest, disrespectful, and slothful. They give vent to their disrespect through whining, tantrum-like behavior, and even violence.

The leniency that’s being shown toward campus jerks — students, faculty, and administrators — is especially disgusting to this pre-boomer. University presidents need to grow backbones. Campus and municipal police should be out in force, maintaining order and arresting whoever fails to provide a “safe space” for a speaker who might offend their delicate sensibilities. Disruptive and violent behavior should be met with expulsions, firings, and criminal charges.

“My genes made me do it” is neither a valid explanation nor an acceptable excuse.


Related reading: There is a page on Judith Rich Harris’s website with a long list of links to reviews, broadcast commentary, and other discussions of The Nurture Assumption. It is to Harris’s credit that she links to negative as well as positive views of her work.

Special Relativity III: The Velocity Conundrum

Einstein’s special theory of relativity (STR) depends on two postulates:

  1. The laws of physics are invariant (i.e. identical) in all inertial systems (non-accelerating frames of reference).
  2. The speed of light in a vacuum is the same for all observers, regardless of the motion of the light source.

In sum, the speed of light is the same for all observers, regardless of their relative motion and regardless of the motion of the source of light. There seems to be strong experimental and observational evidence for these propositions.

As discussed in the first and second entries in this series, the postulates of STR purportedly negate Galilei-Newton relativity, where time is the same in all frames of reference (i.e., there is absolute time) and spatial differences between frames of reference depend simply on their relative velocity (i.e., there is absolute space). In mathematical terms:

t’ = t ,
x’ = x – vt ,
y’ = y ,
z’ = z ,

where t’, x’, y’, and z’ denote the time and position of a frame of reference S’ that is in motion along the x-axis of frame of reference S, the coordinates of which are denoted by t, x, y, and z; where v is the velocity of S’ relative to S (in the x-direction); and where x’ = x at t = 0 .

Graphically:


Source: Susskind Lectures, Special Relativity, Lecture 1 (Galilean relativity).

(It would seem that x’ = x  + vt , but x’ = x – vt because the x’ axis is represented by the vector τ .)
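For readers who want to check the arithmetic, here is a minimal sketch of the Galilean transformation in Python (the function name and the sample numbers are illustrative, not drawn from the sources above):

def galilean_transform(t, x, y, z, v):
    # Map event coordinates (t, x, y, z) in frame S to frame S',
    # which moves along the x-axis of S at velocity v; x' = x at t = 0.
    return t, x - v * t, y, z

# Example: S' moves at 30 m/s; an event at x = 1,000 m occurs at t = 10 s.
print(galilean_transform(10.0, 1000.0, 0.0, 0.0, 30.0))
# -> (10.0, 700.0, 0.0, 0.0): time is unchanged, x is shifted by vt.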

Einstein introduced STR in his paper, “On The Electrodynamics of Moving Bodies” (1905). He later explained STR in a somewhat less technical book, Relativity: The Special and General Theory (1916, translated 1920). The “kinematical” part of STR is summarized in the equations of the Lorentz transformation:

[T]he Lorentz transformation … relates the coordinates used by one observer to coordinates used by another in uniform relative motion with respect to the first.

Assume that the first observer uses coordinates labeled t, x, y, and z, while the second observer uses coordinates labeled t’, x’, y’, and z’. Now suppose that the first observer sees the second moving in the x-direction at a velocity v. And suppose that the observers’ coordinate axes are parallel and that they have the same origin. Then the Lorentz transformation expresses how the coordinates are related:

t’ = (t – vx/c²) / √(1 – v²/c²) ,
x’ = (x – vt) / √(1 – v²/c²) ,
y’ = y ,
z’ = z ,

where c is the speed of light.
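Here is a corresponding sketch of the Lorentz transformation (again, the function name, units, and sample velocities are illustrative, not taken from the sources above). At everyday speeds it is indistinguishable from the Galilean transformation; at an appreciable fraction of c it is not:

import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_transform(t, x, y, z, v):
    # Transform event coordinates from frame S to frame S', which moves
    # along the x-axis of S at velocity v (parallel axes, common origin).
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C**2), gamma * (x - v * t), y, z

# At 30 m/s (a car), the result is effectively Galilean:
print(lorentz_transform(10.0, 1000.0, 0.0, 0.0, 30.0))
# At 0.6c, both the time and the x-coordinate change noticeably:
print(lorentz_transform(10.0, 1000.0, 0.0, 0.0, 0.6 * C))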

Here is a graph of the relationship between the velocity of a body (frame of reference) and the distance it travels in, say, a year as measured in a “stationary” body (that body’s proper time):

Space-time tradeoff vs. velocity

This graph is based on Lewis Carroll Epstein’s Relativity Visualized, specifically figures 5-6 through 5-10 and the accompanying discussion on pages 79-85. The graph implies that an object moving at the speed of light would use no time; it would move only along the space-used axis, at a constant time value of zero. Here is Epstein’s explanation:

The object does not age at all. The object has the maximum speed through space, the speed of light. Its speed through time is zero. It is stationary in time. “Right now is forever….”

This seems inconsistent with the fact that light has a finite velocity. Even an object that moves at the speed of light would take some amount of time to go any distance. Perhaps I will explore this seeming contradiction in a future post.

Returning to the graph, a “stationary” body (an abstract point) would use no space; it would move only along the time-used axis, at a constant space value of zero. The curve depicts the intermediate relationships between velocity (and space used) and proper time used. For example, a body that moves at 0.7c would travel 0.7 light-year in a year, and would age only a bit more than 0.7 year.

The relationship between v and t’ can be computed as follows:

t’ = √(1 – v²) ,

where t’ is the elapsed (proper) time in S’, as perceived by an observer in S, during one year in S, and v is the velocity of S’ expressed as a fraction of c .
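To put numbers on the curve (an illustrative sketch using the formula just given; the sampled velocities are arbitrary):

import math

def proper_time_used(v):
    # Proper time (years) elapsing aboard a body moving at v (fraction of c)
    # during one year of "stationary" time: t' = sqrt(1 - v^2).
    return math.sqrt(1.0 - v**2)

for v in (0.0, 0.5, 0.7, 0.9, 0.99):
    print(f"v = {v:.2f}c: space used = {v:.2f} light-year, "
          f"proper time used = {proper_time_used(v):.3f} year")
# At v = 0.7c the body covers 0.7 light-year and ages about 0.714 year,
# which is the point on the curve described above.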

The slower aging rate of a moving body is an important and strange aspect of STR. It is called time dilation. The faster a body moves, the slower it ages relative to an observer in a “stationary” reference frame; that is, a clock in the moving body is seen by the “stationary” observer to advance at a slower rate than his own, identically constructed clock. (This is a reciprocal relationship, which I will explore in a future post.)

The aging rate is determined by the first of the four Lorentz equations given above, where t’ refers to the length of a time interval in S’ relative to the length of a time interval in S . For t > 0 and x = 0 (i.e., a “stationary” body), t’ is always greater than t ; that is, a clock that ticks every second in S would be found to tick at intervals of more than one second in S’ . The greater the interval, the slower the clock runs, which is why, according to STR, time slows as velocity increases.
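To spell out that step (my own working, using the first Lorentz equation given above): for a clock at rest in S, set x = 0, so that

t’ = (t – v·0/c²) / √(1 – v²/c²) = t / √(1 – v²/c²) .

At v = 0.6c, for example, √(1 – v²/c²) = 0.8, so a one-second tick in S corresponds to an interval of 1/0.8 = 1.25 seconds in S’.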

This is so — according to STR — because of the inextricable relationship between space and time, which is really a unitary four-dimensional “thing” called space-time (or spacetime). Reverting to the usual conceptions of space and time as separate entities, we are all moving through time, whether or not we are moving through space. (STR is limited to inertial movement on a hypothetical plane, and does not address the gravitational movement of a body or of Earth, the solar system, and the Milky Way galaxy.) Some of the time involved in moving through space would have been used anyway, just by standing still. So the time required to move through space is reduced by some amount. The amount depends on how fast a body is moving; the faster it is moving, the greater the reduction in the amount of time required to travel a given distance.

Here is a rough analogy: I can go downstream without paddling my canoe, but I can only go as fast as the current will take me. If I paddle, it will take me less time to travel a given distance. The faster I paddle, the less time it will take. A plot of the time required to go a given distance at various paddling rates would resemble the graph above, though it is impossible to reduce the travel time to zero, and the mathematical relationship between travel time and paddling rate isn’t the same as that represented by the graph.

Putting aside time, which I will address in a future post, STR hinges on the meaning and measurement of velocity. In fact, v is assumed to have an absolute and determinate value for a “moving” frame of reference. Otherwise, the equations of STR are meaningless; that is, if v simply represented the relative velocity of two bodies (both in motion), the solutions for a given body would vary with the body with which it is being compared.

Accordingly, STR assumes that all comparisons are made with a hypothetical stationary body with no velocity — a “fixed point” in the universe, if you will. But the “fixed point” would have to be an object around which the universe revolves, and there is no such body. It is therefore impossible to compute the absolute velocity of a body, which STR requires.

And if one can’t determine the velocity of a body, the Lorentz transformation is really meaningless. The values of t’ and x’ are indeterminate — unless one reverts to Galilei-Newton relativity, which assumes, in effect, that every part of the universe is comprised in a unitary frame of reference, with absolute space and absolute time (and, therefore, absolute measures of velocity).

I will end here with two thoughts. The first is that Thomas E. Phipps Jr. (1925-2016) may well have resolved the internal contradictions of STR. Near the end of Old Physics for New: A Worldview Alternative to Einstein’s Relativity Theory, Phipps summarizes with this:

By means of CT [collective time, which is not the same as Einsteinian time], the science of mechanics simplifies formally to its nineteenth-century canonical forms … ; … three-space geometry reverts to Euclidean … and … absoluteness of distant simultaneity … [is] restored. [2012 edition, p. 306]

(Regarding the absoluteness of simultaneity, see “Special Relativity II: A Fatal Flaw?”)

I will have more to say about Phipps’s work in future posts.

More speculatively, it seems to me that there might be a physical phenomenon which serves as the unitary frame of reference implicit in Galilei-Newton relativity: the Higgs field, the lattice of Higgs bosons which is believed to permeate the universe. But this is only a preliminary thought.

Institutional Bias

Arnold Kling:

On the question of whether Federal workers are overpaid relative to private sector workers, [Justin Fox] writes,

The Federal Salary Council, a government advisory body composed of labor experts and government-employee representatives, regularly finds that federal employees make about a third less than people doing similar work in the private sector. The conservative American Enterprise Institute and Heritage Foundation, on the other hand, have estimated that federal employees make 14 percent and 22 percent more, respectively, than comparable private-sector workers….

… Could you have predicted ahead of time which organization’s “research” would find a result favorable to Federal workers and which organization would find unfavorable results? Of course you could. So how do you sustain the belief that normative economics and positive economics are distinct from one another, that economic research cleanly separates facts from values?

I saw institutional bias at work many times in my career as an analyst at a tax-funded think-tank. My first experience with it came in the first project to which I was assigned. The issue at hand was a hot one in those days: whether the defense budget should be altered to increase the size of the Air Force’s land-based tactical air (tacair) forces while reducing the size of the Navy’s carrier-based counterpart. The Air Force’s think-tank had issued a report favorable to land-based tacair (surprise!), so the Navy turned to its think-tank (where I worked). Our report favored carrier-based tacair (surprise!).

How could two supposedly objective institutions study the same issue and come to opposite conclusions? Analytical fraud abetted by overt bias? No, that would be too obvious to the “neutral” referees in the Office of the Secretary of Defense. (Why “neutral”? Read this.)

Subtle bias is easily introduced when the issue is complex, as the tacair issue was. Where would tacair forces be required? What payloads would fighters and bombers carry? How easy would it be to set up land bases? How vulnerable would they be to an enemy’s land and air forces? How vulnerable would carriers be to enemy submarines and long-range bombers? How close to shore could carriers approach? How much would new aircraft, bases, and carriers cost to buy and maintain? What kinds of logistical support would they need, and how much would it cost? And on and on.

Hundreds, if not thousands, of assumptions underlay the results of the studies. Analysts at the Air Force’s think-tank chose those assumptions that favored the Air Force; analysts at the Navy’s think-tank chose those assumptions that favored the Navy.

Why? Not because analysts’ jobs were at stake; they weren’t. Not because the Air Force and Navy directed the outcomes of the studies; they didn’t. They didn’t have to because “objective” analysts are human beings who want “their side” to win. When you work for an institution you tend to identify with it; its success becomes your success, and its failure becomes your failure.

The same was true of the “neutral” analysts in the Office of the Secretary of Defense. They knew which way Mr. McNamara leaned on any issue, and they found themselves drawn to the assumptions that would justify his biases.

And so it goes. Bias is a rampant and ineradicable aspect of human striving. It’s ever-present in the political arena. The current state of affairs in Washington, D.C., is just the tip of the proverbial iceberg.

The prevalence and influence of bias in matters that affect hundreds of millions of Americans is yet another good reason to limit the power of government.

Not-So-Random Thoughts (XX)

An occasional survey of web material that’s related to subjects about which I’ve posted. Links to the other posts in this series may be found at “Favorite Posts,” just below the list of topics.

In “The Capitalist Paradox Meets the Interest-Group Paradox,” I quote from Frédéric Bastiat’s “What Is Seen and What Is Not Seen”:

[A] law produces not only one effect, but a series of effects. Of these effects, the first alone is immediate; it appears simultaneously with its cause; it is seen. The other effects emerge only subsequently; they are not seen; we are fortunate if we foresee them.

This might also be called the law of unintended consequences. It explains why so much “liberal” legislation is passed: the benefits are focused on a particular group and obvious (if overestimated); the costs are borne by taxpayers in general, many of whom fail to see that the sum of “liberal” legislation is a huge tax bill.

Ross Douthat understands:

[A] new paper, just released through the National Bureau of Economic Research, that tries to look at the Affordable Care Act in full. Its authors find, as you would expect, a substantial increase in insurance coverage across the country. What they don’t find is a clear relationship between that expansion and, again, public health. The paper shows no change in unhealthy behaviors (in terms of obesity, drinking and smoking) under Obamacare, and no statistically significant improvement in self-reported health since the law went into effect….

[T]he health and mortality data [are] still important information for policy makers, because [they] indicate[] that subsidies for health insurance are not a uniquely death-defying and therefore sacrosanct form of social spending. Instead, they’re more like other forms of redistribution, with costs and benefits that have to be weighed against one another, and against other ways to design a safety net. Subsidies for employer-provided coverage crowd out wages, Medicaid coverage creates benefit cliffs and work disincentives…. [“Is Obamacare a Lifesaver?” The New York Times, March 29, 2017]

So does Roy Spencer:

In a theoretical sense, we can always work to make the environment “cleaner”, that is, reduce human pollution. So, any attempts to reduce the EPA’s efforts will be viewed by some as just cozying up to big, polluting corporate interests. As I heard one EPA official state at a conference years ago, “We can’t stop making the environment ever cleaner”.

The question no one is asking, though, is “But at what cost?”

It was relatively inexpensive to design and install scrubbers on smokestacks at coal-fired power plants to greatly reduce sulfur emissions. The cost was easily absorbed, and electricity rates were not increased that much.

The same is not true of carbon dioxide emissions. Efforts to remove CO2 from combustion byproducts have been extremely difficult, expensive, and with little hope of large-scale success.

There is a saying: don’t let perfect be the enemy of good enough.

In the case of reducing CO2 emissions to fight global warming, I could discuss the science which says it’s not the huge problem it’s portrayed to be — how warming is only progressing at half the rate forecast by those computerized climate models which are guiding our energy policy; how there have been no obvious long-term changes in severe weather; and how nature actually enjoys the extra CO2, with satellites now showing a “global greening” phenomenon with its contribution to increases in agricultural yields.

But it’s the economics which should kill the Clean Power Plan and the alleged Social “Cost” of Carbon. Not the science.

There is no reasonable pathway by which we can meet more than about 20% of global energy demand with renewable energy…the rest must come mostly from fossil fuels. Yes, renewable energy sources are increasing each year, usually because rate payers or taxpayers are forced to subsidize them by the government or by public service commissions. But global energy demand is rising much faster than renewable energy sources can supply. So, for decades to come, we are stuck with fossil fuels as our main energy source.

The fact is, the more we impose high-priced energy on the masses, the more it will hurt the poor. And poverty is arguably the biggest threat to human health and welfare on the planet. [“Trump’s Rollback of EPA Overreach: What No One Is Talking About,” Roy Spencer, Ph.D. [blog], March 29, 2017]

*     *     *

I mentioned the Benedict Option in “Independence Day 2016: The Way Ahead,” quoting Bruce Frohnen in tacit agreement:

[Rod] Dreher has been writing a good deal, of late, about what he calls the Benedict Option, by which he means a tactical withdrawal by people of faith from the mainstream culture into religious communities where they will seek to nurture and strengthen the faithful for reemergence and reengagement at a later date….

The problem with this view is that it underestimates the hostility of the new, non-Christian society [e.g., this and this]….

Leaders of this [new, non-Christian] society will not leave Christians alone if we simply surrender the public square to them. And they will deny they are persecuting anyone for simply applying the law to revoke tax exemptions, force the hiring of nonbelievers, and even jail those who fail to abide by laws they consider eminently reasonable, fair, and just.

Exactly. John Horvat II makes the same point:

For [Dreher], the only response that still remains is to form intentional communities amid the neo-barbarians to “provide an unintentional political witness to secular culture,” which will overwhelm the barbarian by the “sheer humanity of Christian compassion, and the image of human dignity it honors.” He believes that setting up parallel structures inside society will serve to protect and preserve Christian communities under the new neo-barbarian dispensation. We are told we should work with the political establishment to “secure and expand the space within which we can be ourselves and our own institutions” inside an umbrella of religious liberty.

However, barbarians don’t like parallel structures; they don’t like structures at all. They don’t co-exist well with anyone. They don’t keep their agreements or respect religious liberty. They are not impressed by the holy lives of the monks whose monastery they are plundering. You can trust barbarians to always be barbarians. [“Is the Benedict Option the Answer to Neo-Barbarianism?” Crisis Magazine, March 29, 2017]

As I say in “The Authoritarianism of Modern Liberalism, and the Conservative Antidote,”

Modern liberalism attracts persons who wish to exert control over others. The stated reasons for exerting control amount to “because I know better” or “because it’s good for you (the person being controlled)” or “because ‘social justice’ demands it.”

Leftists will not countenance a political arrangement that allows anyone to escape the state’s grasp — unless, of course, the state is controlled by the “wrong” party, in which case leftists (or many of them) would like to exercise their own version of the Benedict Option. See “Polarization and De Facto Partition.”

*     *     *

Theodore Dalrymple understands the difference between terrorism and accidents:

Statistically speaking, I am much more at risk of being killed when I get into my car than when I walk in the streets of the capital cities that I visit. Yet this fact, no matter how often I repeat it, does not reassure me much; the truth is that one terrorist attack affects a society more deeply than a thousand road accidents….

Statistics tell me that I am still safe from it, as are all my fellow citizens, individually considered. But it is precisely the object of terrorism to create fear, dismay, and reaction out of all proportion to its volume and frequency, to change everyone’s way of thinking and behavior. Little by little, it is succeeding. [“How Serious Is the Terrorist Threat?” City Journal, March 26, 2017]

Which reminds me of several things I’ve written, beginning with this entry from “Not-So-Random Thoughts (VI)”:

Cato’s loony libertarians (on matters of defense) once again trot out Herr Doktor Professor John Mueller. He writes:

We have calculated that, for the 12-year period from 1999 through 2010 (which includes 9/11, of course), there was one chance in 22 million that an airplane flight would be hijacked or otherwise attacked by terrorists. (“Serial Innumeracy on Homeland Security,” Cato@Liberty, July 24, 2012)

Mueller’s “calculation” consists of a recitation of known terrorist attacks pre-Benghazi and speculation about the status of Al-Qaeda. Note to Mueller: It is the unknown unknowns that kill you. I refer Herr Doktor Professor to “Riots, Culture, and the Final Showdown” and “Mission Not Accomplished.”

See also my posts “Getting It All Wrong about the Risk of Terrorism” and “A Skewed Perspective on Terrorism.”

*     *     *

This is from my post, “A Reflection on the Greatest Generation”:

The Greatest tried to compensate for their own privations by giving their children what they, the parents, had never had in the way of material possessions and “fun”. And that is where the Greatest Generation failed its children — especially the Baby Boomers — in large degree. A large proportion of Boomers grew up believing that they should have whatever they want, when they want it, with no strings attached. Thus many of them divorced, drank, and used drugs almost wantonly….

The Greatest Generation — having grown up believing that FDR was a secular messiah, and having learned comradeship in World War II — also bequeathed us governmental self-indulgence in the form of the welfare-regulatory state. Meddling in others’ affairs seems to be a predilection of the Greatest Generation, a predilection that the Millennials may be shrugging off.

We owe the Greatest Generation a great debt for its service during World War II. We also owe the Greatest Generation a reprimand for the way it raised its children and kowtowed to government. Respect forbids me from delivering the reprimand, but I record it here, for the benefit of anyone who has unduly romanticized the Greatest Generation.

There’s more in “The Spoiled Children of Capitalism”:

This is from Tim [of Angle’s] “The Spoiled Children of Capitalism”:

The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.

As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it….

I have long shared Tim’s assessment of the Boomer generation. Among the corroborating data are my sister and my wife’s sister and brother — Boomers all….

Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs.”…

Now comes this:

According to writer and venture capitalist Bruce Gibney, baby boomers are a “generation of sociopaths.”

In his new book, he argues that their “reckless self-indulgence” is in fact what set the example for millennials.

Gibney describes boomers as “acting without empathy, prudence, or respect for facts – acting, in other words, as sociopaths.”

And he’s not the first person to suggest this.

Back in 1976, journalist Tom Wolfe dubbed the young adults then coming of age the “Me Generation” in the New York Times, which is a term now widely used to describe millennials.

But the baby boomers grew up in a very different climate to today’s young adults.

When the generation born after World War Two were starting to make their way in the world, it was a time of economic prosperity.

“For the first half of the boomers particularly, they came of age in a time of fairly effortless prosperity, and they were conditioned to think that everything gets better each year without any real effort,” Gibney explained to The Huffington Post.

“So they really just assume that things are going to work out, no matter what. That’s unhelpful conditioning.

“You have 25 years where everything just seems to be getting better, so you tend not to try as hard, and you have much greater expectations about what society can do for you, and what it owes you.”…

Gibney puts forward the argument that boomers – specifically white, middle-class ones – tend to have genuine sociopathic traits.

He backs up his argument with mental health data which appears to show that this generation have more anti-social characteristics than others – lack of empathy, disregard for others, egotism and impulsivity, for example. [Rachel Hosie, “Baby Boomers Are a Generation of Sociopaths,” Independent, March 23, 2017]

That’s what I said.

More about Intelligence

Do genes matter? You betcha! See geneticist Gregory Cochran’s “Everything Is Different but the Same” and “Missing Heritability — Found?” (Useful Wikipedia articles for explanations of terms used by Cochran: “Genome-wide association study,” “Genetic load,” and “Allele.”) Snippets:

Another new paper finds that the GWAS hits for IQ – largely determined in Europeans – don’t work in people of African descent.

*     *     *

There is an interesting new paper out on genetics and IQ. The claim is that they have found the missing heritability – in rare variants, generally different in each family.

Cochran, in typical fashion, ends the second item with a bombastic put-down of the purported dysgenic trend, about which I’ve written here.

Psychologist James Thompson seems to put stock in the dysgenic trend. See, for example, his post “The Woodley Effect”:

[W]e could say that the Flynn Effect is about adding fertilizer to the soil, whereas the Woodley Effect is about noting the genetic quality of the plants. In my last post I described the current situation thus: The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

But Thompson joins Cochran in his willingness to accept what the data show, namely, that there are strong linkages between race and intelligence. See, for example, “County IQs and Their Consequences” (and my related post). Thompson writes:

[I]n social interaction it is not always either possible or desirable to make intelligence estimates. More relevant is to look at technical innovation rates, patents, science publications and the like…. If there were no differences [in such] measures, then the associations between mental ability and social outcomes would be weakened, and eventually disconfirmed. However, the general link between national IQs and economic outcomes holds up pretty well….

… Smart fraction research suggests that the impact of the brightest persons in a national economy has a disproportionately positive effect on GDP. Rindermann and I have argued, following others, that the brightest 5% of every country make the greatest contribution by far, though of course many others of lower ability are required to implement the discoveries and strategies of the brightest.

Though Thompson doesn’t directly address race and intelligence in “10 Replicants in Search of Fame,” he leaves no doubt about the dominance of genes over environment in the determination of traits; for example:

[A] review of the world’s literature on intelligence that included 10,000 pairs of twins showed identical twins to be significantly more similar than fraternal twins (twin correlations of about .85 and .60, respectively), with corroborating results from family and adoption studies, implying significant genetic influence….

Some traits, such as individual differences in height, yield heritability as high as 90%. Behavioural traits are less reliably measured than physical traits such as height, and error of measurement contributes to nonheritable variance….

[A] review of 23 twin studies and 12 family studies confirmed that anxiety and depression are correlated entirely for genetic reasons. In other words, the same genes affect both disorders, meaning that from a genetic perspective they are the same disorder. [I have personally witnessed this effect: TEA.]…

The heritability of intelligence increases throughout development. This is a strange and counter-intuitive finding: one would expect the effects of learning to accumulate with experience, increasing the strength of the environmental factor, but the opposite is true….

[M]easures of the environment widely used in psychological science—such as parenting, social support, and life events—can be treated as dependent measures in genetic analyses….

In sum, environments are partly genetically-influenced niches….

People to some extent make their own environments….

[F]or most behavioral dimensions and disorders, it is genetics that accounts for similarity among siblings.

In several of the snippets quoted above, Thompson is referring to a phenomenon known as genetic confounding, which is to say that genetic effects are often mistaken for environmental effects. Brian Boutwell and JC Barnes address an aspect of genetic confounding in “Is Crime Genetic? Scientists Don’t Know Because They’re Afraid to Ask.” A small sample:

The effects of genetic differences make some people more impulsive and shortsighted than others, some people more healthy or infirm than others, and, despite how uncomfortable it might be to admit, genes also make some folks more likely to break the law than others.

John Ray addresses another aspect of genetic confounding in “Blacks, Whites, Genes, and Disease,” where he comments about a recent article in the Journal of the American Medical Association:

It says things that the Left do not want to hear. But it says those things in verbose academic language that hides the point. So let me translate into plain English:

* The poor get more illness and die younger
* Blacks get more illness than whites and die younger
* Part of that difference is traceable to genetic differences between blacks and whites.
* But environmental differences — such as education — explain more than genetic differences do
* Researchers often ignore genetics for ideological reasons
* You don’t fully understand what is going on in an illness unless you know about any genetic factors that may be at work.
* Genetics research should pay more attention to blacks

Most of those things I have been saying for years — with one exception:

They find that environmental factors have greater effect than genetics. But they do that by making one huge and false assumption. They assume that education is an environmental factor. It is not. Educational success is hugely correlated with IQ, which is about two thirds genetic. High IQ people stay in the educational system for longer because they are better at it, whereas low IQ people (many of whom are blacks) just can’t do it at all. So if we treated education as a genetic factor, environmental differences would fade away as causes of disease. As Hans Eysenck once said to me in a casual comment: “It’s ALL genetic”. That’s not wholly true but it comes close.

So the recommendation of the study — that we work on improving environmental factors that affect disease — is unlikely to achieve much. They are aiming their gun towards where the rabbit is not. If it were an actual rabbit, it would probably say: “What’s up Doc?”

Some problems are unfixable but knowing which problems they are can help us to avoid wasting resources on them. The black/white gap probably has no medical solution.

I return to James Thompson for a pair of less incendiary items. “The Secret in Your Eyes” points to a link between intelligence and pupil size. In “Group IQ Doesn’t Exist,” Thompson points out the fatuousness of the belief that a group is somehow more intelligent than the smartest member of the group. As Thompson puts it:

So, if you want a problem solved, don’t form a team. Find the brightest person and let [him] work on it. Placing [him] in a team will, on average, reduce [his] productivity. My advice would be: never form a team if there is one person who can sort out the problem.

Forcing the brightest person to act as a member of a team often results in the suppression of that person’s ideas by the (usually) more extroverted and therefore less-intelligent members of the team.

Added 04/05/17: James Thompson issues a challenge to IQ-deniers in “IQ Does Not Exist (Lead Poisoning Aside)”:

[T]his study shows how a neuro-toxin can have an effect on intelligence, of similar magnitude to low birth weight….

[I]f someone tells you they do not believe in intelligence reply that you wish them well, but that if they have children they should keep them well away from neuro-toxins because, among other things, they reduce social mobility.

*     *     *

Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Round Up the Usual Suspects
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race
Affirmative Action Comes Home to Roost
The IQ of Nations
Race and Social Engineering

Mugged by Non-Reality

A wise man said that a conservative is a liberal who has been mugged by reality. Thanks to Malcolm Pollock, I’ve just learned that a liberal is a conservative whose grasp of reality has been erased, literally.

Actually, this is unsurprising news (to me). I have pointed out many times that the various manifestations of liberalism — from stifling regulation to untrammeled immigration — arise from the cosseted beneficiaries of capitalism (e.g., pundits, politicians, academicians, students) who are far removed from the actualities of producing real things for real people. This has turned their brains into a kind of mush that is fit only for hatching unrealistic but costly schemes which rest upon a skewed vision of human nature.

Special Relativity II: A Fatal Flaw?

This post revisits some of the arguments in “Special Relativity: Answers and Questions,” and introduces some additional considerations. I quote extensively from Einstein’s Relativity: The Special and General Theory (1916, translated 1920).

Einstein begins with a discussion of the coordinates of space in Euclidean geometry, then turns to space and time in classical mechanics. In the following passage he refers to an expository scenario that recurs throughout the book, the passage of a railway carriage (train car) along an embankment:

In order to have a complete description of the motion [of a body], we must specify how the body alters its position with time; i.e. for every point on the trajectory it must be stated at what time the body is situated there. These data must be supplemented by such a definition of time that, in virtue of this definition, these time-values can be regarded essentially as magnitudes (results of measurements) capable of observation. If we take our stand on the ground of classical mechanics, we can satisfy this requirement for our illustration in the following manner. We imagine two clocks of identical construction; the man at the railway-carriage window is holding one of them, and the man on the footpath [of the embankment] the other. Each of the observers determines the position on his own reference-body occupied by the stone at each tick of the clock he is holding in his hand. In this connection we have not taken account of the inaccuracy involved by the finiteness of the velocity of propagation of light.

To get to that inaccuracy, Einstein begins with this:

Let us suppose our old friend the railway carriage to be travelling along the rails with a constant velocity v, and that a man traverses the length of the carriage in the direction of travel with a velocity w. How quickly, or, in other words, with what velocity W does the man advance relative to the embankment [on which the rails rest] during the process? The only possible answer seems to result from the following consideration: If the man were to stand still for a second, he would advance relative to the embankment through a distance v equal numerically to the velocity of the carriage. As a consequence of his walking, however, he traverses an additional distance w relative to the carriage, and hence also relative to the embankment, in this second, the distance w being numerically equal to the velocity with which he is walking. Thus in total he covers the distance W = v + w relative to the embankment in the second considered.

This is the theorem of the addition of velocities from classical physics. Why doesn’t it apply to light? Einstein continues:

If a ray of light be sent along the embankment [in an assumed vacuum], … the tip of the ray will be transmitted with the velocity c relative to the embankment. Now let us suppose that our railway carriage is again travelling along the railway lines with the velocity v, and that its direction is the same as that of the ray of light, but its velocity of course much less. Let us inquire about the velocity of propagation of the ray of light relative to the carriage. It is obvious that we can here apply the consideration of the previous section, since the ray of light plays the part of the man walking along relatively to the carriage. The velocity W of the man relative to the embankment is here replaced by the velocity of light relative to the embankment. w is the required velocity of light with respect to the carriage, and we have

w = c − v.

The velocity of propagation of a ray of light relative to the carriage thus comes out smaller than c.

Let’s take that part a bit more slowly than Einstein does. The question is the velocity with which the ray of light is traveling relative to the carriage. The man in the previous example was walking in the carriage with velocity w relative to the body of the carriage, and therefore with velocity W relative to the embankment (v + w). If the ray of light is traveling at c relative to the embankment, and the carriage is traveling at v relative to the embankment, then by analogy it would seem that the velocity of the ray of light relative to the carriage should be the velocity of light minus the velocity of the carriage, that is c – v. (Einstein introduces some confusion by using w to denote this hypothetical velocity, having already used w to denote the velocity of the man walking in the carriage, relative to the carriage itself.)

It would thus seem that the velocity of a ray of light emitted from the railway carriage should be c + v relative to a person standing still on the embankment. That is, light would travel faster than c when it’s emitted from an object moving in a forward direction relative to an observer. But all objects are in relative motion, even hypothetically stationary ones such as the railway embankment of Einstein’s thought experiment. Light would therefore move at many different velocities, all of them varying from c according to the motion of each observer relative to the source of light; that is, some observers would detect velocities greater than c, while others would detect velocities less than c.

But this can’t happen (supposedly). Einstein puts it this way:

In view of this dilemma there appears to be nothing else for it than to abandon either the [old] principle of relativity [the addition of velocities] or the simple law of the propagation of light in vacuo [that c is the same for all observers, regardless of their relative motion]. Those of you who have carefully followed the preceding discussion are almost sure to expect that we should retain the [old] principle of relativity, which appeals so convincingly to the intellect because it is so natural and simple. The law of the propagation of light in vacuo would then have to be replaced by a more complicated law conformable to the [old] principle of relativity. The development of theoretical physics shows, however, that we cannot pursue this course. The epoch-making theoretical investigations of H. A. Lorentz on the electrodynamical and optical phenomena connected with moving bodies show that experience in this domain leads conclusively to a theory of electromagnetic phenomena, of which the law of the constancy of the velocity of light in vacuo is a necessary consequence….

[I]n reality there is not the least incompatibility between the principle of relativity and the law of propagation of light, and that by systematically holding fast to both these laws a logically rigid theory could be arrived at. This theory has been called the special theory of relativity….
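What “holding fast to both” implies, once the Lorentz transformation is adopted, is that velocities no longer simply add. The standard STR composition rule, which is not quoted in the passage above and is sketched here with my own function name and illustrative numbers, returns c whenever one of the combined velocities is c:

C = 299_792_458.0  # speed of light in m/s

def combine_velocities(v, w):
    # Relativistic composition of two collinear velocities,
    # replacing the classical W = v + w.
    return (v + w) / (1.0 + v * w / C**2)

# Classically, light emitted from a carriage moving at 0.5c would pass the
# embankment at 1.5c; the relativistic rule returns exactly c:
print(combine_velocities(0.5 * C, C) / C)   # -> 1.0
# At everyday speeds the correction is negligible:
print(combine_velocities(30.0, 40.0))       # -> roughly 70 m/s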

Einstein gets to the special theory of relativity (STR) by next considering the problem of simultaneity:

Lightning has struck the rails on our railway embankment at two places A and B far distant from each other. I make the additional assertion that these two lightning flashes occurred simultaneously. If now I ask you whether there is sense in this statement, you will answer my question with a decided “Yes.” But if I now approach you with the request to explain to me the sense of the statement more precisely, you find after some consideration that the answer to this question is not so easy as it appears at first sight.

After thinking the matter over for some time you then offer the following suggestion with which to test simultaneity. By measuring along the rails, the connecting line AB should be measured up and an observer placed at the mid-point M of the distance AB. This observer should be supplied with an arrangement (e.g. two mirrors inclined at 90°) which allows him visually to observe both places A and B at the same time. If the observer perceives the two flashes of lightning at the same time, then they are simultaneous.

I am very pleased with this suggestion, but for all that I cannot regard the matter as quite settled, because I feel constrained to raise the following objection: “Your definition would certainly be right, if I only knew that the light by means of which the observer at M perceives the lightning flashes travels along the length A → M with the same velocity as along the length B → M. But an examination of this supposition would only be possible if we already had at our disposal the means of measuring time. It would thus appear as though we were moving here in a logical circle.”

After further consideration you cast a somewhat disdainful glance at me— and rightly so— and you declare: “I maintain my previous definition nevertheless, because in reality it assumes absolutely nothing about light. There is only one demand to be made of the definition of simultaneity, namely, that in every real case it must supply us with an empirical decision as to whether or not the conception that has to be defined is fulfilled. That my definition satisfies this demand is indisputable. That light requires the same time to traverse the path A → M as for the path B → M is in reality neither a supposition nor a hypothesis about the physical nature of light, but a stipulation which I can make of my own freewill in order to arrive at a definition of simultaneity.”

It is clear that this definition can be used to give an exact meaning not only to two events, but to as many events as we care to choose, and independently of the positions of the scenes of the events with respect to the body of reference (here the railway embankment). We are thus led also to a definition of “time” in physics. For this purpose we suppose that clocks of identical construction are placed at the points A, B and C of the railway line (co-ordinate system), and that they are set in such a manner that the positions of their pointers are simultaneously (in the above sense) the same. Under these conditions we understand by the “time” of an event the reading (position of the hands) of that one of these clocks which is in the immediate vicinity (in space) of the event. In this manner a time-value is associated with every event which is essentially capable of observation.

This stipulation contains a further physical hypothesis, the validity of which will hardly be doubted without empirical evidence to the contrary. It has been assumed that all these clocks go at the same rate if they are of identical construction. Stated more exactly: When two clocks arranged at rest in different places of a reference-body are set in such a manner that a particular position of the pointers of the one clock is simultaneous (in the above sense) with the same position of the pointers of the other clock, then identical “settings” are always simultaneous (in the sense of the above definition).

In other words, time is the same for every point in a frame of reference, which can be thought of as a group of points that remain in a fixed spatial relationship. Every such point in that frame of reference can have a clock associated with it; every clock can be set to the same time; and every clock (assuming great precision) will run at the same rate. When it is noon at one point in the frame of reference, it will be noon at all points in the frame of reference. And when the clock at one point has advanced from noon to 1 p.m., the clocks at all points in the same frame of reference will have advanced from noon to 1 p.m., and ad infinitum.

As Einstein puts it later,

Events which are simultaneous with reference to the embankment are not simultaneous with respect to the train, and vice versa (relativity of simultaneity). Every reference body (co-ordinate system) has its own particular time; unless we are told the reference-body to which the statement of time refers, there is no meaning in a statement of the time of an event.

Returning to the question of simultaneity, Einstein poses his famous thought experiment:

Up to now our considerations have been referred to a particular body of reference, which we have styled a “railway embankment.” We suppose a very long train travelling along the rails with the constant velocity v and in the direction indicated in Fig. 1. People travelling in this train will with advantage use the train as a rigid reference-body (co-ordinate system); they regard all events in reference to the train. Then every event which takes place along the line also takes place at a particular point of the train. Also the definition of simultaneity can be given relative to the train in exactly the same way as with respect to the embankment. As a natural consequence, however, the following question arises:

Are two events ( e.g. the two strokes of lightning A and B) which are simultaneous with reference to the railway embankment also simultaneous relatively to the train? We shall show directly that the answer must be in the negative.


FIG. 1.

When we say that the lightning strokes A and B are simultaneous with respect to the embankment, we mean: the rays of light emitted at the places A and B, where the lightning occurs, meet each other at the mid-point M of the length A → B of the embankment. But the events A and B also correspond to positions A and B on the train. Let M’ be the mid-point of the distance A → B on the travelling train. Just when the flashes of lightning occur, this point M’ naturally coincides with the point M, but it moves towards the right in the diagram with the velocity v of the train. If an observer sitting in the position M’ in the train did not possess this velocity, then he would remain permanently at M, and the light rays emitted by the flashes of lightning A and B would reach him simultaneously, i.e. they would meet just where he is situated. Now in reality (considered with reference to the railway embankment) he is hastening towards the beam of light coming from B, whilst he is riding on ahead of the beam of light coming from A. Hence the observer will see the beam of light emitted from B earlier than he will see that emitted from A. Observers who take the railway train as their reference-body must therefore come to the conclusion that the lightning flash B took place earlier than the lightning flash A. We thus arrive at the important result:

Events which are simultaneous with reference to the embankment are not simultaneous with respect to the train, and vice versa (relativity of simultaneity). Every reference body (co-ordinate system) has its own particular time; unless we are told the reference-body to which the statement of time refers, there is no meaning in a statement of the time of an event.

It’s important to note that there is a time delay, however minuscule, between the instant that the flashes of light are emitted at A and B and the instant when they reach the observer at M.

Because of the minuscule time delay, the flashes of light wouldn’t reach the observer at M’ in the carriage at the same time that they reach the observer at M on the embankment. The observer at M’ is directly opposite the observer at M when the flashes of light are emitted, not when they are received simultaneously at M. During the minuscule delay between the emission of the flashes at A and B and their simultaneous receipt by the observer at M, the observer at M’ moves toward B and away from A. The observer at M’ therefore sees the flash emitted from B a tiny fraction of a second before he sees the flash emitted from A. Neither event corresponds in time with the time at which the flashes reach M. (There are variations on Einstein’s thought experiment — here, for example — but they trade on the same subtlety: a time delay between the flashes of light and their reception by observers.)
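
To put rough numbers on that delay, here is a minimal sketch (in Python) of the timing as reckoned in the embankment frame. The one-mile separation between A and B and the 75-mph train speed are hypothetical values chosen for illustration; the speed of light is the real one, expressed in feet per second.

```python
# Timing in the embankment frame: lightning strikes A (x = 0) and B (x = L) at
# t = 0; M sits at L/2 and stays put; M' starts at L/2 and moves toward B at the
# train's speed v. The separation and train speed are illustrative values.
C = 983_571_056.0   # speed of light, feet per second (approx.)
L = 5_280.0         # hypothetical A-to-B distance: one mile, in feet
V = 110.0           # hypothetical train speed: 75 mph = 110 ft/s

t_M = (L / 2) / C                    # both flashes reach the stationary M here
t_B_at_Mprime = (L / 2) / (C + V)    # flash from B meets the approaching M'
t_A_at_Mprime = (L / 2) / (C - V)    # flash from A must catch up to M'

print(f"flashes reach M at         t = {t_M:.12f} s")
print(f"flash from B reaches M' at t = {t_B_at_Mprime:.12f} s")
print(f"flash from A reaches M' at t = {t_A_at_Mprime:.12f} s")
print(f"M' sees B before A by {t_A_at_Mprime - t_B_at_Mprime:.3e} s")
```

With these numbers the gap is well under a trillionth of a second, which is why it goes unnoticed in ordinary experience.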

Returning to Einstein’s diagram, suppose that A and B are copper wires, tautly strung from posts and closely overhanging the track and embankment at 90-degree angles to both. The track is slightly depressed below the level of the embankment, so that observers on the train and embankment are the same distance below the wires. The wires are shielded so that they can be seen only by observers directly below them. Because the shielding doesn’t deflect lightning, when lightning hits the wires they will glow instantaneously. If lightning strikes A and B at the same time, the glow  will be seen simultaneously by observers positioned directly under the wires at the instant of the lightning strikes. Therefore, observers on the embankment at A and B and observers directly opposite them on the train at A’ and B’ will see the wires glow at the same time. (The observers would be equipped with synchronized clocks, the readings of which they can compare to verify their simultaneous viewing of the lightning strikes. I will leave to a later post the question whether the clocks at A and B show the same time as the clocks at A’ and B’.)

Because of the configuration of the wires in relation to the  track and the embankment, A and B must be the same distance apart as A’ and B’. That is to say, the simultaneity of observation isn’t an artifact of the distortion of horizontal measurements, or length contraction, which is another aspect of STR.

Einstein’s version of the thought experiment was designed — unintentionally, I assume — to create an illusion of non-simultaneity. Of course an observer on the train at M’ would not see the lightning flashes at the same time as an observer on the embankment at M: The observer at M’ would no longer be opposite M when the lightning flashes arrive at M. But, as shown by my variation on Einstein’s thought experiment, that doesn’t rule out the simultaneity of observations on the train and on the embankment. It just requires a setup that isn’t designed to exclude simultaneity. My setup involving copper wires is one possible way of ensuring simultaneity. It also seems to rule out the possibility of length contraction.

Einstein’s defense of his thought experiment (in the fifth block quotation above) is also an apt defense of my version of his thought experiment. I have described a situation in which there is indubitable simultaneity. The question is whether it forecloses a subsequent proof of non-simultaneity. Einstein’s thought experiment didn’t, because Einstein left a loophole — intentionally or not — which discredits his proof of non-simultaneity. (I am not the first person who claims to have discovered the loophole.) My thought experiment leaves no loophole, as far as I can tell.

If my thought experiment has merit, it points to an invariant time, that is, a time which is the same for all frames of reference. A kind of Newtonian absolute time, if you will.

To be continued.

Daylight Saving Time Doesn’t Kill…

…it’s “springing forward” in March that kills.

There’s a hue and cry about daylight saving time (that’s “saving” not “savings”). The main complaint seems to be the stress that results from moving clocks ahead in March:

Springing forward may be hazardous to your health. The Monday following the start of daylight saving time (DST) is a particularly bad one for heart attacks, traffic accidents, workplace injuries and accidental deaths. Now that most Americans have switched their clocks an hour ahead, studies show many will suffer for it.

Most Americans slept about 40 minutes less than normal on Sunday night, according to a 2009 study published in the Journal of Applied Psychology…. Since sleep is important for maintaining the body’s daily performance levels, much of society is broadly feeling the impact of less rest, which can include forgetfulness, impaired memory and a lower sex drive, according to WebMD.

One of the most striking effects of this annual shift: Last year, Colorado researchers reported finding a 25 percent increase in the number of heart attacks that occur on the Monday after DST starts, as compared with a normal Monday…. A cardiologist in Croatia recorded about twice as many heart attacks than expected during that same day, and researchers in Sweden have also witnessed a spike in heart attacks in the week following the time adjustment, particularly among those who were already at risk.

Workplace injuries are more likely to occur on that Monday, too, possibly because workers are more susceptible to a loss of focus due to too little sleep. Researchers at Michigan State University used over 20 years of data from the Mine Safety and Health Administration to determine that three to four more miners than average sustain a work-related injury on the Monday following the start of DST. Those injuries resulted in 2,649 lost days of work, which is a 68 percent increase over the hours lost from injuries on an average day. The team found no effects following the nation’s one-hour shift back to standard time in the fall….

There’s even more bad news: Drivers are more likely to be in a fatal traffic accident on DST’s first Monday, according to a 2001 study in Sleep Medicine. The authors analyzed 21 years of data on fatal traffic accidents in the U.S. and found that, following the start of DST, drivers are in 83.5 accidents as compared with 78.2 on the average Monday. This phenomenon has also been recorded in Canadian drivers and British motorists.

If all that wasn’t enough, a researcher from the University of British Columbia who analyzed three years of data on U.S. fatalities reported that accidental deaths of any kind are more likely in the days following a spring forward. Their 1996 analysis showed a 6.5 percent increase, which meant that about 200 more accidental deaths occurred immediately after the start of DST than would typically occur in a given period of the same length.

I’m convinced. But the solution to the problem isn’t to get rid of DST. No, the solution is to get rid of standard time and use DST year around.

I’m not arguing for year-around DST from an economic standpoint. The evidence about the economic advantages of DST is inconclusive.

I’m arguing for year-around DST as a way to eliminate “spring forward” distress and enjoy an extra hour of daylight in the winter.

Don’t you enjoy those late summer sunsets? I sure do, and a lot of other people seem to enjoy them, too. That’s why daylight saving time won’t be abolished.

But if you love those late summer sunsets, you should also enjoy an extra hour of daylight at the end of a drab winter day. I know that I would. And it’s not as if you’d miss anything if the sun rises an hour later in the winter. Even with standard time, most working people and students have to be up and about before sunrise in winter, even though sunrise comes an hour earlier than it would with DST.

How would year-around DST affect you? The following table gives the times of sunrise and sunset on the longest and shortest days of 2017 for nine major cities, north to south and west to east:

I report, you decide. If it were up to me, the decision would be year-around DST.

Thoughts for the Day

Excerpts of recent correspondence.

Robots, and their functional equivalents in specialized AI systems, can either replace people or make people more productive. I suspect that the latter has been true in the realm of medicine — so far, at least. But I have seen reportage of robotic units that are beginning to perform routine, low-level work in hospitals. So, as usual, the first people to be replaced will be those with rudimentary skills, not highly specialized training. Will it go on from there? Maybe, but the crystal ball is as cloudy as an old-time London fog.

In any event, I don’t believe that automation is inherently a job-killer. The real job-killer consists of government programs that subsidize non-work — early retirement under Social Security, food stamps and other forms of welfare, etc. Automation has been in progress for eons, and with a vengeance since the second industrial revolution. But, on balance, it hasn’t killed jobs. It just pushes people toward new and different jobs that fit the skills they have to offer. I expect nothing different in the future, barring government programs aimed at subsidizing the “victims” of technological displacement.

*      *      *

It’s civil war by other means (so far): David Wasserman, “Purple America Has All but Disappeared” (The New York Times, March 8, 2017).

*      *      *

I know that most of what I write (even the non-political stuff) has a combative edge, and that I’m therefore unlikely to persuade people who disagree with me. I do it my way for two reasons. First, I’m too old to change my ways, and I’m not going to try. Second, in a world that’s seemingly dominated by left-wing ideas, it’s just plain fun to attack them. If what I write happens to help someone else fight the war on leftism — or if it happens to make a young person re-think a mindless commitment to leftism — that’s a plus.

*     *     *

I am pessimistic about the likelihood of cultural renewal in America. The populace is too deeply saturated with left-wing propaganda, which is injected from kindergarten through graduate school, with constant reinforcement via the media and popular culture. There are broad swaths of people — especially in low-income brackets — whose lives revolve around mindless escape from the mundane via drugs, alcohol, promiscuous sex, etc. Broad swaths of the educated classes have abandoned erudition and contemplation and taken up gadgets and entertainment.

The only hope for conservatives is to build their own “bubbles,” like those of effete liberals, and live within them. Even that will prove difficult as long as government (especially the Supreme Court) persists in storming the ramparts in the name of “equality” and “self-creation.”

*     *     *

I correlated Austin’s average temperatures in February and August. Here are the correlation coefficients for the following periods:

1854-2016 = 0.001
1875-2016 = -0.007
1900-2016 = 0.178
1925-2016 = 0.161
1950-2016 = 0.191
1975-2016 = 0.126

Of these correlations, only the one for 1900-2016 is statistically significant at the 0.05 level (less than a 5-percent probability that a correlation that large would arise by chance if there were no real relationship). The correlations for 1925-2016 and 1950-2016 are fairly robust, and almost significant at the 0.05 level. The relationship for 1975-2016 is statistically insignificant. I conclude that there’s a positive relationship between February and August temperatures, but a weak one. A warm winter doesn’t necessarily presage an extra-hot summer in Austin.
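
For anyone who wants to reproduce this kind of check, here is a minimal sketch of the computation. The file name and column names are hypothetical placeholders; the temperature data themselves are not reproduced here.

```python
# Pearson correlation between February and August mean temperatures, with the
# two-sided p-value used to judge significance at the 0.05 level.
# "austin_temps.csv" and its column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("austin_temps.csv")   # columns: year, feb_avg, aug_avg

for start in (1854, 1875, 1900, 1925, 1950, 1975):
    subset = df[df["year"] >= start]
    r, p = pearsonr(subset["feb_avg"], subset["aug_avg"])
    flag = "  (significant at 0.05)" if p < 0.05 else ""
    print(f"{start}-2016: r = {r:+.3f}, p = {p:.3f}{flag}")
```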

Is Consciousness an Illusion?

Scientists seem to have pinpointed the physical source of consciousness. But the execrable Daniel C. Dennett, for whom science is God, hasn’t read the memo. Dennett argues in his latest book, From Bacteria to Bach and Back: The Evolution of Minds, that consciousness is an illusion.

Another philosopher, Thomas Nagel, weighs in with a dissenting review of Dennett’s book. (Nagel is better than Dennett, but that’s faint praise.) Nagel’s review, “Is Consciousness an Illusion?,” appears in The New York Review of Books (March 9, 2017). Here are some excerpts:

According to the manifest image, Dennett writes, the world is

full of other people, plants, and animals, furniture and houses and cars…and colors and rainbows and sunsets, and voices and haircuts, and home runs and dollars, and problems and opportunities and mistakes, among many other such things. These are the myriad “things” that are easy for us to recognize, point to, love or hate, and, in many cases, manipulate or even create…. It’s the world according to us.

According to the scientific image, on the other hand, the world

is populated with molecules, atoms, electrons, gravity, quarks, and who knows what else (dark energy, strings? branes?)….

In an illuminating metaphor, Dennett asserts that the manifest image that depicts the world in which we live our everyday lives is composed of a set of user-illusions,

like the ingenious user-illusion of click-and-drag icons, little tan folders into which files may be dropped, and the rest of the ever more familiar items on your computer’s desktop. What is actually going on behind the desktop is mind-numbingly complicated, but users don’t need to know about it, so intelligent interface designers have simplified the affordances, making them particularly salient for human eyes, and adding sound effects to help direct attention. Nothing compact and salient inside the computer corresponds to that little tan file-folder on the desktop screen.

He says that the manifest image of each species is “a user-illusion brilliantly designed by evolution to fit the needs of its users.” In spite of the word “illusion” he doesn’t wish simply to deny the reality of the things that compose the manifest image; the things we see and hear and interact with are “not mere fictions but different versions of what actually exists: real patterns.” The underlying reality, however, what exists in itself and not just for us or for other creatures, is accurately represented only by the scientific image—ultimately in the language of physics, chemistry, molecular biology, and neurophysiology….

You may well ask how consciousness can be an illusion, since every illusion is itself a conscious experience—an appearance that doesn’t correspond to reality. So it cannot appear to me that I am conscious though I am not: as Descartes famously observed, the reality of my own consciousness is the one thing I cannot be deluded about….

According to Dennett, however, the reality is that the representations that underlie human behavior are found in neural structures of which we know very little. And the same is true of the similar conception we have of our own minds. That conception does not capture an inner reality, but has arisen as a consequence of our need to communicate to others in rough and graspable fashion our various competencies and dispositions (and also, sometimes, to conceal them)….

The trouble is that Dennett concludes not only that there is much more behind our behavioral competencies than is revealed to the first-person point of view—which is certainly true—but that nothing whatever is revealed to the first-person point of view but a “version” of the neural machinery….

I am reminded of the Marx Brothers line: “Who are you going to believe, me or your lying eyes?” Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”

Nagel’s counterargument would have been more compelling if he had relied on a simple metaphor like this one: Most drivers can’t describe in any detail the process by which an automobile converts the potential energy of gasoline to the kinetic energy that’s produced by the engine and then transmitted eventually to the automobile’s drive wheels. Instead, most drivers simply rely on the knowledge that pushing the start button will start the car. That knowledge may be shallow, but it isn’t illusory. If it were, an automobile would be a useless hulk sitting in the driver’s garage.

Some tough questions are in order, too. If consciousness is an illusion, where does it come from? Dennett is an out-and-out physicalist and strident atheist. It therefore follows that Dennett can’t believe in consciousness (the manifest image) as a free-floating spiritual entity that’s disconnected from physical reality (the scientific image). It must, in fact, be a representation of physical reality, even if a weak and flawed one.

Looked at another way, consciousness is the gateway to the scientific image. It is only through the deliberate, reasoned, fact-based application of consciousness that scientists have been able to roll back the mysteries of the physical world and improve the manifest image so that it more nearly resembles the scientific image. The gap will never be closed, of course. Even the most learned of human beings have only a tenuous grasp of physical reality in all of its myriad aspects. Nor will anyone ever understand what physical reality “really is” — it’s beyond apprehension and description. But that doesn’t negate the symbiosis of physical reality and consciousness.

*     *     *

Related posts:
Debunking “Scientific Objectivity”
A Non-Believer Defends Religion
Evolution as God?
The Greatest Mystery
What Is Truth?
The Improbability of Us
The Atheism of the Gaps
Demystifying Science
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
The Glory of the Human Mind
Mind, Cosmos, and Consciousness
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Words Fail Us
Hayek’s Anticipatory Account of Consciousness

Special Relativity I: Answers and Questions

SEE THE ADDENDUM OF 02/26/17 AT THE END OF THIS POST

The speed of light in a vacuum is 186,282 miles per second. It is a central tenet of the special theory of relativity (STR) that the speed of light is the same for every observer, regardless of the motion of an observer relative to the source of the light being observed. The meaning of the latter statement is not obvious to a non-physicist (like me). In an effort to understand it, I concocted the following thought experiment (TE), which I will call TE 1:

1. There is a long train car running smoothly on a level track, at a constant speed of 75 miles per hour (mph) relative to an observer who is standing close to the track. One side of the car is a one-way mirror, arranged so that the outside observer (Ozzie) can see what is happening inside the car but an observer inside the car cannot see what is happening outside. For all the observer inside the car knows, the train car is stationary with respect to the surface of the Earth. (This is not a special condition; persons standing on the ground do not sense that they are rotating with the Earth at a speed of about 1,000 mph.)

2. The train car is commodious enough for a pitcher (Pete) and catcher (Charlie, the inside observer) to play a game of catch over a distance of 110 feet, from the pitcher’s release point to the catcher’s glove. Pete throws a baseball to Charlie at a speed of 75 mph (110 feet per second, or fps), relative to Charlie, so that the ball reaches his glove 1 second after Pete has released it. This is true regardless of the direction of the car or the positions of Pete and Charlie with respect to the direction of the car.

3. How fast the ball is thrown, relative to Ozzie, does depend on the movement of the car and positions of Pete and Charlie, relative to Ozzie. For example, when the car is moving toward Ozzie, and Pete is throwing in Ozzie’s direction, Ozzie sees the ball moving toward him at 150 mph. To understand why this is so, assume that Pete releases the ball when his release point is 220 feet from Ozzie and, accordingly, Charlie’s glove is 110 feet from Ozzie. The ball traverses the 110 feet between Pete and Charlie in 1 second, during which time the train moves 110 feet toward Ozzie. Therefore, when Charlie catches the ball, his glove is adjacent to Ozzie, and the ball has traveled 220 feet, from Ozzie’s point of view. Thus Ozzie reckons that the ball has traveled 220 feet in 1 second, or at a speed of 150 mph. This result is consistent with the formula of classical physics: To a stationary observer, the apparent speed of an emitted object is the speed of that object (the baseball) plus the speed of whatever emits it (Pete on a moving train car).
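
The arithmetic in item 3 is ordinary (Galilean) velocity addition; here is a minimal sketch of it, using the numbers given there.

```python
# Classical (Galilean) picture of TE 1, in the embankment (Ozzie) frame.
FPS_PER_MPH = 5280 / 3600          # 1 mph = 22/15 ft/s, so 75 mph = 110 ft/s

train_speed = 75 * FPS_PER_MPH     # car moving toward Ozzie, ft/s
pitch_speed = 75 * FPS_PER_MPH     # ball's speed relative to Pete, ft/s

ball_speed_ground = train_speed + pitch_speed   # classical addition: 220 ft/s
distance_in_one_second = ball_speed_ground * 1.0

print(f"ball speed relative to Ozzie: {ball_speed_ground:.0f} ft/s "
      f"({ball_speed_ground / FPS_PER_MPH:.0f} mph)")
print(f"distance covered in 1 second: {distance_in_one_second:.0f} ft")
```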

*     *     *

So far, so good, from the standpoint of classical physics. Classical physics “works” at low speeds (relative to the speed of light) because relativistic effects are imperceptible at low speeds. (See this post, for example.)
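
To see just how imperceptible those effects are at everyday speeds, here is a minimal sketch that computes the Lorentz factor (the factor by which, according to STR, moving clocks run slow and moving lengths contract) for a range of speeds, using the real speed of light.

```python
import math

# Lorentz factor: gamma = 1 / sqrt(1 - v^2 / c^2).
C_MPH = 670_616_629.0   # speed of light, miles per hour (approx.)

for v_mph in (75.0, 1_000.0, 67_000.0, 0.5 * C_MPH, 0.9 * C_MPH):
    gamma = 1.0 / math.sqrt(1.0 - (v_mph / C_MPH) ** 2)
    print(f"v = {v_mph:>14,.0f} mph   gamma = {gamma:.15f}")
```

For the 75-mph train of TE 1, gamma differs from 1 only in about the fifteenth decimal place, which is why classical addition appears exact in everyday experience.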

But consider what happens if Pete “throws” light instead of a baseball, according to STR. This is TE 2:

1. The perceived speed of light is not affected by the speed at which an emitting object (e.g., a flashlight) is traveling relative to an observer. Accordingly, if the speed of light were 75 mph and Pete were to “throw” light instead of a baseball, it would take 1 second for the light to reach Charlie’s glove. Charlie would therefore measure the speed of light as 75 mph.

2. As before, Charlie would have moved 110 feet toward Ozzie in that 1 second, so that Charlie’s glove would be abreast of Ozzie at the instant of the arrival of light. It would seem that Ozzie should calculate the speed of light as 150 mph.

3. But this cannot be so if the speed of light is the same for all observers. That is, both Charlie and Ozzie should measure the speed of light as 75 mph.

4. How can Ozzie’s measurement be brought into line with Charlie’s? Generalizing from the relationship between distance (d), time (t), and speed (v):

  • d = tv (i.e., t x v, in case you are unfamiliar with algebraic expressions);
  • therefore, v = d/t;
  • which is satisfied by any feasible combination of d and t that yields v = 110 fps (75 mph).

(Key point: The relevant measurements of t and d are those made by Ozzie, from his perspective as an observer standing by the track while the train car moves toward him. In other words, Ozzie will obtain measures of t and/or d that differ from those made by Charlie.)

5. Thus there are two limiting possibilities that satisfy the condition v = 110 fps (75 mph), which is the fixed speed of light in this example:

A. If t = 2 seconds and d = 220 feet, then v = 110 fps.

B. If t = 1 second and d = 110 ft, then v = 110 fps.

6. Regarding possibility A: t stretches to 2 seconds while d remains 220 feet. The stretching of t is a relativistic phenomenon known as time dilation. From Ozzie’s perspective, the train car slows down. More exactly, a clock mounted in the train car would seem (to Ozzie) to run at half-speed from the moment Pete releases the ball of light.

7. Regarding possibility B: d contracts to 110 feet while t remains 1 second. The contraction of d is a relativistic phenomenon known as length contraction. From Ozzie’s perspective, it appears that the distance from Pete’s release point to Charlie’s catch (which occurs when Charlie is adjacent to Ozzie) shrinks when Pete releases the ball of light, so that Ozzie sees it as 110 feet.

8. There is no reason to favor one phenomenon over the other; therefore, what Ozzie sees is a combination of the two, such that the observed speed of the ball of light is 75 mph.
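
Here is a minimal sketch of that bookkeeping. The only constraint is that Ozzie’s measurement come out to 110 fps; the “mix” row is an illustrative intermediate value, not something derived from the formal apparatus of STR.

```python
# Ozzie must measure the "ball of light" at v = d / t = 110 fps. Possibility A
# stretches t, possibility B shrinks d, and any mix that preserves the ratio
# also satisfies the constraint. The "mix" row is an illustrative value only.
C_FPS = 110.0

cases = [
    ("A: time dilation only",      220.0, 2.0),
    ("B: length contraction only", 110.0, 1.0),
    ("a mix of the two",           165.0, 1.5),
]

for label, d_ft, t_s in cases:
    v = d_ft / t_s
    ok = "consistent" if abs(v - C_FPS) < 1e-9 else "inconsistent"
    print(f"{label:28s} d = {d_ft:5.1f} ft, t = {t_s:3.1f} s, v = {v:5.1f} fps ({ok})")
```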

*     *     *

Here is TE 3, which is a variation on TE 2:

1. The train car is now traveling leftward at 110 fps, as seen by Ozzie. The car  is a caboose, and Pete is standing on the rear platform, whence he throws the baseball rightward (relative to Ozzie) at 110 fps (relative to Pete).

2. Ozzie is directly opposite Pete when Pete releases the ball at t = 0. According to classical physics, Ozzie would perceive the ball as stationary; that is, the sum of the speed of the train car relative to Ozzie (- 110 fps) and the speed of the baseball relative to Pete (110 fps) is zero. In other words, Ozzie should see the ball hanging in mid-air for at least 1 second.

3. Do you really expect the ball to stand still (relative to Ozzie) in mid-air for 1 second? No, you don’t. You really expect, quite reasonably, that the ball will move to Ozzie’s right, just as a beam from a flashlight switched on at t = 0 would move to Ozzie’s right.

4. Now suppose that Charlie is stationary relative to Pete, as before. This time, however, Charlie is standing at the front of a train car that is following Pete’s train car at a constant distance of 110 feet. According to the setup of TE 1, Charlie will be directly opposite Pete at t = 1, and Charlie will catch the ball at that instant. How can that be if the ball actually moves to Ozzie’s right, as stipulated in the preceding paragraph?

5. If Pete had thrown a ball of light at t = 0 — a very slow ball that goes only 110 fps — it would hit Charlie’s glove at t = 1, as seen by Charlie. If Ozzie is to see Charlie catch the ball of light, even though it moves to Ozzie’s right, Charlie cannot be directly opposite Ozzie at t = 1, but must be somewhere to Ozzie’s right.

6. As in TE 2, this situation requires Pete and Charlie’s train cars to slow down (as seen by Ozzie), the distance between Pete and Charlie to stretch (as seen by Ozzie), or a combination of the two. Whatever the combination, Ozzie will measure the speed of the ball of light as 110 fps (75 mph). At one extreme, the distance between Pete and Charlie would seem to stretch from 110 feet to 220 feet when Pete releases the ball, so that Ozzie sees Charlie catch the ball 2 seconds after Pete releases it, and 110 feet to Ozzie’s right. At the other extreme (or near it), the distance between Pete and Charlie would seem to stretch from 110 feet to, say, 111 feet when Pete releases the ball, so that Ozzie sees Charlie catch the ball just over 1 second after Pete releases it, and 1 foot to Ozzie’s right. The outcome is slightly different than that of TE 2 because Pete and Charlie are moving to the left instead of the right, while the ball is moving to the right, as before.

7. In the case of a real ball moving at 75 mph, the clocks would slow imperceptibly and/or the distance would shrink imperceptibly, maintaining the illusion that the formula of classical physics is valid — but it is not. It only seems to be because the changes are too small to be detected by ordinary means.

*     *     *

TE 2 and TE 3 are rough expositions of how perceptions of space and time are affected by the relative motion of disparate objects, according to STR. I set the speed of light at the absurdly low figure of 75 mph to simplify the examples, but there is no essential difference between my expositions and what is supposed to happen to Ozzie’s perceptions of time and distance, according to STR.

If Pete and Charlie actually could move at the speed of light, some rather strange things would happen, according to STR, but I won’t go into them here. It is enough to note that STR implies that light has “weird” properties, which lead to “weird” perceptions about the relative speeds and sizes of objects that are moving relative to an observer. (I am borrowing “weird” from pages 23 and 24 of physicist Lewis Carroll Epstein’s Relativity Visualized, an excellent primer on STR, replete with insightful illustrations.)

The purpose of my explanation is not to demonstrate my grasp of STR (which is rudimentary but skeptical), or to venture an explanation of the “weird” nature of light. My purpose is to set the stage for some probing questions about STR. The questions are prompted by the “fact” that occasioned STR: the postulate that the speed of light is the same in free space for all observers, regardless of their motion relative to the light source. If that postulate is true, then the preceding discussion is valid in its essentials; if it is false, much that physicists now claim to know is wrong.

*     *     *

My first question is about the effect of a change in Charlie’s perception of movement:

a. Recall that in TE 1 and TE 2 Charlie (the observer in the train car) is unaware that the car is moving relative to the surface of the Earth. Let us remedy that ignorance by replacing the one-way mirror on the side of the car with clear glass. Charlie then sees that the car is moving, at a speed that he calculates with the aid of a stopwatch and distance markers along the track. Does Charlie’s new perception affect his estimate of the speed of a baseball thrown by Pete?

b. The answer is “yes” and “no.” The “yes” comes from the fact that Charlie now appreciates that the forward speed of the baseball, relative to the ground or a stationary observer next to the track, is not 75 mph but 150 mph. The “no” comes from the fact that the baseball’s speed, relative to Charlie, remains 75 mph. Although this new knowledge gives Charlie information about how others may perceive the speed of a baseball thrown by Pete, it does not change Charlie’s original perception.

c. Charlie may nevertheless ask if there is any way of assigning an absolute value to the speed of the thrown baseball. He understands that such a speed may have no practical relevance (e.g., to a batter who is stationary with respect to Pete and Charlie). But if there is no such thing as absolute speed, because all motion is relative, then how can light be assigned an absolute speed of 186,282 miles per second in a vacuum? I say “absolute” because that is what the speed of light seems to be, for practical purposes.

*     *     *

That leads to my second question:

a. Do the methods of determining the speed of light betray an error in thinking about how that speed is affected by the speed of objects that emit light?

b. Suppose, for example, that observers (or their electronic equivalent) are positioned along the track at intervals of 55 feet, and that their clocks are synchronized to record the number of seconds after Pete releases a baseball. The first observer, who is abreast of Pete’s release point, records the time of release as 0. The ball leaves Pete’s hand at a speed of 110 fps, relative to Pete, who is moving at a speed of 110 fps, for a combined speed of 220 fps. Accordingly, the baseball will be abreast of the second observer at 0.25 seconds, the third observer at 0.5 seconds, the fourth observer at 0.75 seconds, and the fifth observer at 1 second. The fifth observer, of course, is Ozzie, who is adjacent to Charlie’s glove when Charlie catches the ball.
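
As a check on that timing, here is a minimal sketch using the classical addition of TE 1 and the 55-foot spacing just described.

```python
# Classical timing of the baseball past observers spaced 55 feet apart along the
# track, with the first observer abreast of Pete's release point.
BALL_SPEED_GROUND = 220.0   # ft/s: 110 (throw) + 110 (train), added classically
SPACING = 55.0              # feet between adjacent observers

for n in range(5):          # observers 1 through 5; observer 5 is Ozzie
    distance = n * SPACING
    t = distance / BALL_SPEED_GROUND
    print(f"observer {n + 1}: {distance:5.1f} ft from the release point, "
          f"abreast of the ball at t = {t:.2f} s")
```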

c. Change “baseball” to “light” and the result changes as described in TE 2 and TE 3, following the tenets of STR. It changes because the speed of light is supposed to be a limiting speed. But is it? For example, a Wikipedia article about faster-than-light phenomena includes a long section that gives several reasons (advanced by physicists) for doubting that the speed of light is a limiting speed.

d. It is therefore possible that the conduct and interpretation of experiments corroborating the constant nature of the speed of light have been influenced (subconsciously) by the crucial place of STR in physics. For example, an observer may see two objects approach (close with) each other at a combined speed greater than the speed of light. Accordingly, it would be possible for the Ozzie of my thought experiment to measure the velocity of a ball of light thrown by Pete as the sum of the speed of light and the speed of the train car. But that is not the standard way of explaining things in the literature with which I am familiar. Instead, the reader is told (by Epstein and other physicists) that Ozzie cannot simply add the two speeds because the speed of light is a limiting speed.
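
For comparison, here is a minimal sketch of the composition rule that the standard literature (Epstein included) invokes in place of simple addition. It is the textbook STR formula, shown with the example’s stipulated 110-fps “speed of light” and with the real one.

```python
# Textbook STR composition of velocities: u = (u' + v) / (1 + u'v / c^2), where
# v is the train's speed over the ground and u' is the projectile's speed
# relative to the train. Setting u' = c yields u = c for any v, which is the
# "limiting speed" claim referred to above.
def compose(u_prime: float, v: float, c: float) -> float:
    return (u_prime + v) / (1.0 + u_prime * v / c ** 2)

C_EXAMPLE = 110.0           # the stipulated "speed of light" in these TEs, ft/s
C_REAL_FPS = 983_571_056.0  # the real speed of light, ft/s (approx.)

print(compose(110.0, 110.0, C_EXAMPLE))   # light "thrown" from the train: 110.0
print(compose(55.0, 55.0, C_EXAMPLE))     # two half-"c" speeds compose to 88.0, not 110.0
print(compose(110.0, 110.0, C_REAL_FPS))  # an ordinary baseball: ~220.0, as in TE 1
```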

*     *     *

My rudimentary understanding of STR leaves me in doubt about its tenets, its implications, and the validity of experiments that seem to confirm those tenets and implications. I need to know a lot more about the nature of light and the nature of space-time (as a non-mathematical entity) before accepting the “scientific consensus” that STR has been verified beyond a reasonable doubt. The more recent rush to “scientific consensus” about “global warming” should be taken as a cautionary tale, one that applies retrospectively as well.

ADDENDUM, 02/26/17:

I’ve just learned of the work of Thomas E. Phipps Jr. (1925-2016), a physicist who happens to have been a member of a World War II operations research unit that evolved into the think-tank where I worked for 30 years. Phipps long challenged the basic tenets of STR. A paper by Robert J. Buenker, “Commentary on the Work of Thomas E. Phipps Jr. (1925-2016),” gives a detailed, technical summary of Phipps’s objections to STR. I will spend some time reviewing Buenker’s paper and a book by Phipps that I’ve ordered. Meanwhile, consider this passage from Buenker’s paper:

[T]he supposed inextricable relationship between space and time is shown to be simply the result of an erroneous (and undeclared) assumption made by Einstein in his original work. Newton was right and Einstein was wrong. Instead, one can return to the ancient principle of the objectivity of measurement. The only reason two observers can legitimately disagree about the value of a measurement is because they base their results on a different set of units…. Galileo’s Relativity Principle needs to be amended to read: The laws of physics are the same in all inertial systems but the units in which their results are expressed can and do vary from one rest frame to another.

Einstein’s train thought-experiment (and its variant) may be wrong.

I have long thought that the Lorentz transformation, which is central to STR, actually undercuts the idea of non-simultaneity because it reconciles the observations of observers in different frames of reference:

[T]he Lorentz transformation … relates the coordinates used by one observer to coordinates used by another in uniform relative motion with respect to the first.

Assume that the first observer uses coordinates labeled t, x, y, and z, while the second observer uses coordinates labeled t’, x’, y’, and z’. Now suppose that the first observer sees the second moving in the x-direction at a velocity v. And suppose that the observers’ coordinate axes are parallel and that they have the same origin. Then the Lorentz transformation expresses how the coordinates are related:

t’ = (t − vx/c²) / √(1 − v²/c²),
x’ = (x − vt) / √(1 − v²/c²),
y’ = y,
z’ = z,

where c is the speed of light.
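
To see what the quoted transformation does with the train example, here is a minimal sketch that applies it to two events which are simultaneous in the unprimed (embankment) frame. The one-mile separation and the half-light-speed relative velocity are hypothetical values chosen only to make the effect visible in the output.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz(t: float, x: float, v: float) -> tuple[float, float]:
    """Transform an event (t, x) into the frame moving at velocity v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C ** 2), gamma * (x - v * t)

V = 0.5 * C
event_a = (0.0, 0.0)        # (t, x): strike A, in seconds and metres
event_b = (0.0, 1_609.344)  # strike B, one mile down the track, also at t = 0

t_a, x_a = lorentz(*event_a, v=V)
t_b, x_b = lorentz(*event_b, v=V)
print(f"t'_A = {t_a:.3e} s, t'_B = {t_b:.3e} s, difference = {t_b - t_a:.3e} s")
```

The two strikes share t = 0 in the unprimed frame but map to different t’ values in the primed frame.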

More to come.

Fine-Tuning in a Wacky Wrapper

The Unz Review hosts columnists who hold a wide range of views, including whacko-bizarro-conspiracy-theory-nut-job ones. Case in point: Kevin Barrett, who recently posted a review of David Ray Griffin’s God Exists But Gawd Does Not: From Evil to the New Atheism to Fine Tuning. Some things said by Barrett in the course of his review suggest that Griffin, too, holds whacko-bizarro-conspiracy-theory-nut-job views; for example:

In 2004 he published The New Pearl Harbor — which still stands as the single most important work on 9/11 — and followed it up with more than ten books expanding on his analysis of the false flag obscenity that shaped the 21st century.

Further investigation — a trip to Wikipedia — tells me that Griffin believes there is

a prima facie case for the contention that there must have been complicity from individuals within the United States and joined the 9/11 Truth Movement in calling for an extensive investigation from the United States media, Congress and the 9/11 Commission. At this time, he set about writing his first book on the subject, which he called The New Pearl Harbor: Disturbing Questions About the Bush Administration and 9/11 (2004).

Part One of the book looks at the events of 9/11, discussing each flight in turn and also the behaviour of President George W. Bush and his Secret Service protection. Part Two examines 9/11 in a wider context, in the form of four “disturbing questions.” David Ray Griffin discussed this book and the claims within it in an interview with Nick Welsh, reported under the headline Thinking Unthinkable Thoughts: Theologian Charges White House Complicity in 9/11 Attack….

Griffin’s second book on the subject was a direct critique of the 9/11 Commission Report, called The 9/11 Commission Report: Omissions And Distortions (2005). Griffin’s article The 9/11 Commission Report: A 571-page Lie summarizes this book, presenting 115 instances of either omissions or distortions of evidence he claims are in the report, stating that “the entire Report is constructed in support of one big lie: that the official story about 9/11 is true.”

In his next book, Christian Faith and the Truth Behind 9/11: A Call to Reflection and Action (2006), he summarizes some of what he believes is evidence for government complicity and reflects on its implications for Christians. The Presbyterian Publishing Corporation, publishers of the book, noted that Griffin is a distinguished theologian and praised the book’s religious content, but said, “The board believes the conspiracy theory is spurious and based on questionable research.”

And on and on and on. The moral of which is this: If you already “know” the “truth,” it’s easy to weave together factual tidbits that seem to corroborate it. It’s an old game that any number of persons can play; for example: Mrs. Lincoln hired John Wilkes Booth to kill Abe; Woodrow Wilson was behind the sinking of the Lusitania, which “forced” him to ask for a declaration of war against Germany; FDR knew about Japan’s plans to bomb Pearl Harbor but did nothing so that he could then have a roundly applauded excuse to ask for a declaration of war on Japan; LBJ ordered the assassination of JFK; etc. Some of those bizarre plots have been “proved” by recourse to factual tidbits. I’ve no doubt that all of them could be “proved” in that way.

If that is so, you may well ask why I am writing about Barrett’s review of Griffin’s book. Because in the midst of Barrett’s off-kilter observations (e.g., “the Nazi holocaust, while terrible, wasn’t as incomparably horrible as it has been made out to be”) there’s a tantalizing passage:

Griffin’s Chapter 14, “Teleological Order,” provides the strongest stand-alone rational-empirical argument for God’s existence, one that should convince any open-minded person who is willing to invest some time in thinking about it and investigating the cited sources. This argument rests on the observation that at least 26 of the fundamental constants discovered by physicists appear to have been “fine tuned” to produce a universe in which complex, intelligent life forms could exist. A very slight variation in any one of these 26 numbers (including the strong force, electromagnetism, gravity, the mass difference between protons and neutrons, and many others) would produce a vastly less complex, rich, interesting universe, and destroy any possibility of complex life forms or intelligent observers. In short, the universe is indeed a miracle, in the sense of something indescribably wonderful and almost infinitely improbable. The claim that it could arise by chance (as opposed to intelligent design) is ludicrous.

Even the most dogmatic atheists who are familiar with the scientific facts admit this. Their only recourse is to embrace the multiple-universes interpretation of quantum physics, claim that there are almost infinitely many actual universes (virtually all of them uninteresting and unfit for life), and assert that we just happen to have gotten unbelievably lucky by finding ourselves in the one-universe-out-of-infinity-minus-one with all of the constants perfectly fine-tuned for our existence. But, they argue, we should not be grateful for this almost unbelievable luck — which is far more improbable than winning hundreds of multi-million-dollar lottery jackpots in a row. For our existence in an amazingly, improbably-wonderful-for-us universe is just a tautology, since we couldn’t possibly be in any of the vast, vast, vast majority of universes that we couldn’t possibly be in.

Griffin gently and persuasively points out that the multiple-universes defense of atheism is riddled with absurdities and inconsistencies. Occam’s razor definitively indicates that by far the best explanation of the facts is that the universe was created not just by an intelligent designer, but by one that must be considered almost supremely intelligent as well as almost supremely creative: a creative intelligence as far beyond Einstein-times-Leonardo-to-the-Nth-power as those great minds were beyond that of a common slug.

Fine-tuning is not a good argument for God’s existence. Here is a good argument for God’s existence:

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

Barrett (Griffin?) goes on:

Occam’s razor definitively indicates that by far the best explanation of the facts is that the universe was created not just by an intelligent designer, but by one that must be considered almost supremely intelligent as well as almost supremely creative: a creative intelligence as far beyond Einstein-times-Leonardo-to-the-Nth-power as those great minds were beyond that of a common slug.

Whoa! Occam’s razor indicates nothing of the kind:

Occam’s razor is used as a heuristic technique (discovery tool) to guide scientists in the development of theoretical models, rather than as an arbiter between published models. In the scientific method, Occam’s razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.

Barrett’s (Griffin’s?) hypothesis about the nature of the supremely intelligent being is unduly complicated. Not that the existence of God is a testable (falsifiable) hypothesis. It’s just a logical necessity, and should be left at that.

Scott Adams Understands Probability

A probability expresses the observed frequency of the occurrence of a well-defined event for a large number of repetitions of the event, where each repetition is independent of the others (i.e., random). Thus the probability that a fair coin will come up heads in, say, 100 tosses is approximately 0.5; that is, it will come up heads approximately 50 percent of the time. (In the penultimate paragraph of this post, I explain why I emphasize approximately.)

If a coin is tossed 100 times, what is the probability that it will come up heads on the 101st toss? There is no probability for that event because it hasn’t occurred yet. The coin will come up heads or tails, and that’s all that can be said about it.

Scott Adams, writing about the probability of being killed by an immigrant, puts it this way:

The idea that we can predict the future based on the past is one of our most persistent illusions. It isn’t rational (for the vast majority of situations) and it doesn’t match our observations. But we think it does.

The big problem is that we have lots of history from which to cherry-pick our predictions about the future. The only reason history repeats is because there is so much of it. Everything that happens today is bound to remind us of something that happened before, simply because lots of stuff happened before, and our minds are drawn to analogies.

…If you can rigorously control the variables of your experiment, you can expect the same outcomes almost every time [emphasis added].

You can expect a given outcome (e.g., heads) to occur approximately 50 percent of the time if you toss a coin a lot of times. But you won’t know the actual frequency (probability) until you measure it; that is, after the fact.

Here’s why. The statement that heads has a probability of 50 percent is a mathematical approximation, given that there are only two possible outcomes of a coin toss: heads or tails. While writing this post I used the RANDBETWEEN function of Excel 2016 to simulate ten 100-toss games of heads or tails, with the following results (number of heads per game): 55, 49, 49, 43, 43, 54, 47, 47, 53, 52. Not a single game yielded exactly 50 heads, and heads came up 492 times (not 500) in 1,000 tosses.
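
Here is a minimal sketch of the same kind of simulation in Python; being random, a fresh run will produce counts that differ from the Excel results quoted above.

```python
import random

# Simulate ten 100-toss games of heads or tails and count heads per game,
# mirroring the RANDBETWEEN exercise described above.
GAMES, TOSSES = 10, 100

heads_per_game = [sum(random.randint(0, 1) for _ in range(TOSSES))
                  for _ in range(GAMES)]

print("heads per game:", heads_per_game)
print("total heads in", GAMES * TOSSES, "tosses:", sum(heads_per_game))
```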

What is the point of a probability statement? What is it good for? It lets you know what to expect over the long run, for a large number of repetitions of a strictly defined event. Change the definition of the event, even slightly, and you can “probably” kiss its probability goodbye.

*     *     *

Related posts:
Fooled by Non-Randomness
Randomness Is Over-Rated
Beware the Rare Event
Some Thoughts about Probability
My War on the Misuse of Probability
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming

Not Just for Baseball Fans

I have substantially revised “Bigger, Stronger, and Faster — But Not Quicker?” I set out to test Dr. Michael Woodley’s hypothesis that reaction times have slowed since the Victorian era:

It seems to me that if Woodley’s hypothesis has merit, it ought to be confirmed by the course of major-league batting averages over the decades. Other things being equal, quicker reaction times ought to produce higher batting averages. Of course, there’s a lot to hold equal, given the many changes in equipment, playing conditions, player conditioning, “style” of the game (e.g., greater emphasis on home runs), and other key variables over the course of more than a century.

I conclude that my analysis

says nothing definitive about reaction times, even though it sheds a lot of light on the relative hitting prowess of American League batters over the past 116 years. (I’ll have more to say about that in a future post.)

It’s been great fun but it was just one of those things.

Sandwiched between those statements you’ll find much statistical meat (about baseball) to chew on.

Not-So Random Thoughts (XIX)

ITEM ADDED 12/18/16

Manhattan Contrarian takes on the partisan analysis of economic growth offered by Alan Blinder and Mark Watson, and endorsed (predictably) by Paul Krugman. Eight years ago, I took on an earlier analysis along the same lines by Dani Rodrik, which Krugman (predictably) endorsed. In fact, bigger government, which is the growth mantra of economists like Blinder, Watson, Rodrik, and (predictably) Krugman, is anti-growth. The combination of spending, which robs the private sector of resources, and regulations, which rob the private sector of options and initiative, is killing economic growth. You can read about it here.

*     *     *

Rania Gihleb and Kevin Lang say that assortative mating hasn’t increased. But even if it had, so what?

Is there a potential social problem that will  have to be dealt with by government because it poses a severe threat to the nation’s political stability or economic well-being? Or is it just a step in the voluntary social evolution of the United States — perhaps even a beneficial one?

In fact,

The best way to help the people … of Charles Murray’s Fishtown [of Coming Apart] — is to ignore the smart-educated-professional-affluent class. It’s a non-problem…. The best way to help the forgotten people of America is to unleash the latent economic power of the United States by removing the dead hand of government from the economy.

*     *     *

Anthropogenic global warming (AGW) is a zombie-like creature of pseudo-science. I’ve rung its death knell, as have many actual scientists. But it keeps coming back. Perhaps President Trump will drive a stake through its heart — or whatever is done to extinguish zombies. In the meantime, here’s more evidence that AGW is a pseudo-scientific hoax:

In conclusion, this synthesis of empirical data reveals that increases in the CO2 concentration has not caused temperature change over the past 38 years across the Tropics-Land area of the Globe. However, the rate of change in CO2 concentration may have been influenced to a statistically significant degree by the temperature level.

And still more:

[B]ased on [Patrick] Frank’s work, when considering the errors in clouds and CO2 levels only, the error bars around that prediction are ±15˚C. This does not mean — thankfully — that it could be 19˚ warmer in 2100. Rather, it means the models are looking for a signal of a few degrees when they can’t differentiate within 15˚ in either direction; their internal errors and uncertainties are too large. This means that the models are unable to validate even the existence of a CO2 fingerprint because of their poor resolution, just as you wouldn’t claim to see DNA with a household magnifying glass.

And more yet:

[P]oliticians using global warming as a policy tool to solve a perceived problem is indeed a hoax. The energy needs of humanity are so large that Bjorn Lomborg has estimated that in the coming decades it is unlikely that more than about 20% of those needs can be met with renewable energy sources.

Whether you like it or not, we are stuck with fossil fuels as our primary energy source for decades to come. Deal with it. And to the extent that we eventually need more renewables, let the private sector figure it out. Energy companies are in the business of providing energy, and they really do not care where that energy comes from….

Scientists need to stop mischaracterizing global warming as settled science.

I like to say that global warming research isn’t rocket science — it is actually much more difficult. At best it is dodgy science, because there are so many uncertainties that you can get just about any answer you want out of climate models just by using those uncertainties as a tuning knob.

*     *     *

Well, that didn’t take long. Lawprof Geoffrey Stone said something reasonable a few months ago. Now he’s back to his old, whiny, “liberal” self. Because the Senate failed to take up the nomination of Merrick Garland to fill Antonin Scalia’s seat on the Supreme Court (which is the Senate’s constitutional prerogative), Stone is characterizing the action (or lack of it) as a “constitutional coup d’etat” and claiming that the eventual Trump nominee will be an “illegitimate interloper.” Ed Whelan explains why Stone is wrong here, and adds a few cents’ worth here.

*     *     *

BHO stereotypes Muslims by asserting that

Trump’s proposal to bar immigration by Muslims would make Americans less safe. How? Because more Muslims would become radicalized and acts of terrorism would therefore become more prevalent. Why would there be more radicalized Muslims? Because the Islamic State (IS) would claim that America has declared war on Islam, and this would not only anger otherwise peaceful Muslims but draw them to IS. Therefore, there shouldn’t be any talk of barring immigration by Muslims, nor any action in that direction….

Because Obama is a semi-black leftist — and “therefore” not a racist — he can stereotype Muslims with impunity. To put it another way, Obama can speak the truth about Muslims without being accused of racism (though he’d never admit to the truth about blacks and violence).

It turns out, unsurprisingly, that there’s a lot of truth in stereotypes:

A stereotype is a preliminary insight. A stereotype can be true, the first step in noticing differences. For conceptual economy, stereotypes encapsulate the characteristics most people have noticed. Not all heuristics are false.

Here is a relevant paper from Denmark.

Emil O. W. Kirkegaard and Julius Daugbjerg Bjerrekær. Country of origin and use of social benefits: A large, preregistered study of stereotype accuracy in Denmark. Open Differential Psychology….

The high accuracy of aggregate stereotypes is confirmed. If anything, the stereotypes held by Danish people about immigrants underestimates those immigrants’ reliance on Danish benefits.

Regarding stereotypes about the criminality of immigrants:

Here is a relevant paper from the United Kingdom.

Noah Carl. Net Opposition to Immigrants of Different Nationalities Correlates Strongly with Their Arrest Rates in the UK. Open Quantitative Sociology and Political Science. 10th November, 2016….

Public beliefs about immigrants and immigration are widely regarded as erroneous. Yet popular stereotypes about the respective characteristics of different groups are generally found to be quite accurate. The present study has shown that, in the UK, net opposition to immigrants of different nationalities correlates strongly with the log of immigrant arrest rates and the log of their arrest rates for violent crime.

The immigrants in question, in both papers, are Muslims — for what it’s worth.

*     *     *

ADDED 12/18/16:

I explained the phoniness of the Keynesian multiplier here, derived a true (strongly negative) multiplier here, and added some thoughts about the multiplier here. Economist Scott Sumner draws on the Japanese experience to throw more cold water on Keynesianism.

Hayek’s Anticipatory Account of Consciousness

I have almost finished reading F.A. Hayek’s The Sensory Order, which was originally published in 1952. Chapter VI is “Consciousness and Conceptual Thought.” In the section headed “The Functions of Consciousness,” Hayek writes:

6.29.  …[I]t will be the pre-existing excitatory state of the higher centres [of the central nervous system] which will decide whether the evaluation of the new impulses [arising from stimuli external to the higher centres] will be of the kind characteristic of attention or consciousness. It will depend on the predisposition (or set) how fully the newly arriving impulses will be evaluated or whether they will be consciously perceived, and what the responses to them will be.

6.30.  It is probable that the processes in the highest centres which become conscious require the continuous support from nervous impulses originating at some source within the nervous system itself, such as the ‘wakefulness center’ for whose existence a considerable amount of physiological evidence has been found. If this is so, it would seem probable also that it is these reinforcing impulses which, guided by the expectations evoked by pre-existing conditions, prepare the ground and decide on which of the new impulses the searchlight beam of full consciousness and attention will be focused. The stream of impulses which is thus strengthened becomes capable of dominating the processes in the highest centre, and of overruling and shutting out from full consciousness all the sensory signals which do not belong to the object on which attention is fixed, and which are not themselves strong enough (or perhaps not sufficiently in conflict with the underlying outline picture of the environment) to attract attention.

6.31.  There would thus appear to exist within the central nervous system a highest and most comprehensive center at which at any one time only a limited group of coherent processes can be fully evaluated; where all these processes are related to the same spatial and temporal framework; where the ‘abstract’ or generic relations form a closely knit order in which individual objects are placed; and where, in addition, a close connexion with the instruments of communication has not only contributed a further and very powerful means of classification, but has also made it possible for the individual to participate in a social or conventional representation of the world which he shares with his fellows.

Now, 64 years later, comes a report which I first saw in an online article by Fiona MacDonald, “Harvard Scientists Think They’ve Pinpointed the Physical Source of Consciousness” (Science Alert, November 8, 2016):

Scientists have struggled for millennia to understand human consciousness – the awareness of one’s existence. Despite advances in neuroscience, we still don’t really know where it comes from, and how it arises.

But researchers think they might have finally figured out its physical origins, after pinpointing a network of three specific regions in the brain that appear to be crucial to consciousness.

It’s a pretty huge deal for our understanding of what it means to be human, and it could also help researchers find new treatments for patients in vegetative states.

“For the first time, we have found a connection between the brainstem region involved in arousal and regions involved in awareness, two prerequisites for consciousness,” said lead researcher Michael Fox from the Beth Israel Deaconess Medical Centre at Harvard Medical School.

“A lot of pieces of evidence all came together to point to this network playing a role in human consciousness.”

Consciousness is generally thought of as being comprised of two critical components – arousal and awareness.

Researchers had already shown that arousal is likely regulated by the brainstem – the portion of the brain that links up with the spinal cord – seeing as it regulates when we sleep and wake, and our heart rate and breathing.

Awareness has been more elusive. Researchers have long thought that it resides somewhere in the cortex – the outer layer of the brain – but no one has been able to pinpoint where.

Now the Harvard team has identified not only the specific brainstem region linked to arousal, but also two cortex regions, that all appear to work together to form consciousness.

A full account of the research is given by David B. Fischer, M.D., et al. in “A Human Brain Network Derived from Coma-Causing Brainstem Lesions” (Neurology, published online November 4, 2016; ungated version available here).

Hayek isn’t credited in the research paper. But he should be, for pointing the way to a physiological explanation of consciousness that finds it centered in the brain and not in that mysterious emanation called “mind.”

The IQ of Nations

In a twelve-year-old post, “The Main Causes of Prosperity,” I drew on statistics (sourced and described in the post) to find a statistically significant relationship between a nation’s real, per-capita GDP and three variables:

Y = –23,518 + 2,316L – 259T + 253I

Where,
Y = real, per-capita GDP in 1998 dollars (U.S.)
L = Index for rule of law
T = Index for mean tariff rate
I = Verbal IQ

The r-squared of the regression equation is 0.89, and the p-values for the intercept, L, T, and I are 8.52E-07, 4.70E-10, 1.72E-04, and 3.96E-05, respectively.
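To see how the equation cashes out, here is a minimal sketch in Python: nothing more than the fitted coefficients above applied to made-up inputs. The index scales and the two country profiles are my own assumptions, chosen only for illustration; they are not values from the underlying dataset.

def predicted_gdp(law_index, tariff_index, verbal_iq):
    """Predicted real, per-capita GDP (1998 U.S. dollars) from the fitted equation."""
    return -23_518 + 2_316 * law_index - 259 * tariff_index + 253 * verbal_iq

# Hypothetical country profiles (L, T, I), invented for illustration only.
examples = {
    "strong rule of law, low tariffs, verbal IQ 100": (10, 2, 100),
    "weak rule of law, high tariffs, verbal IQ 85": (4, 8, 85),
}

for label, (L, T, I) in examples.items():
    print(f"{label}: ${predicted_gdp(L, T, I):,.0f}")

With those inputs the equation yields roughly $24,400 and $5,200, respectively. The point is only to show the direction and rough size of each effect, not to reproduce the original estimates.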

The effect of IQ, by itself, is strong enough to merit a place of honor:

[Figure: per-capita GDP vs. average verbal IQ]

Another relationship struck me when I revisited the IQ numbers. There seems to be a strong correlation between IQ and distance from the equator. That correlation, however, may be an artifact of the strong (negative) correlation between blackness and IQ: The countries whose citizens are predominantly black are generally closer to the equator than the countries whose citizens are predominantly of other races.

Because of the strong (negative) correlation between blackness and IQ, and the geographic grouping of predominantly black countries, it’s not possible to estimate a regression equation in which both distance from the equator and dominant racial composition are statistically significant predictors of national IQ.

The most significant regression equation omits distance from the equator and admits race:

I = 84.0 – 13.2B + 12.4W + 20.7EA

Where,
I = national average IQ
B = 1 if the country is predominantly black, 0 otherwise
W = 1 if predominantly white (i.e., residents are European or of European origin), 0 otherwise
EA = 1 if predominantly East Asian (China, Hong Kong, Japan, Mongolia, South Korea, Taiwan, and Singapore, which is largely populated by persons of Chinese descent), 0 otherwise

The r-squared of the equation is 0.78, and the p-values of the intercept and coefficients are all less than 1E-17. The significance (p-value) of the equation’s F-statistic is 8.24E-51. The standard error of the estimate is 5.6, which implies a 95-percent confidence interval of roughly plus or minus 11 IQ points — smaller than any of the coefficients.

The intercept applies to all “other” countries that aren’t predominantly black, white, or East Asian in their racial composition. There are 66 such countries in the sample, which comprises 159 countries. The 66 “other” countries span the Middle East; North Africa; South Asia; Southeast Asia; island-states in the Indian, Pacific, and Atlantic Oceans; and most of the nations of Central and South America and the Caribbean. Despite the range of racial and ethnic mixtures in those 66 countries, their average IQs cluster fairly tightly around 84. By the same token, there’s a definite clustering of the black countries around 71 (84.0 – 13.2), of the white countries around 96 (84.0 + 12.4), and of the East Asian countries around 105 (84.0 + 20.7).
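The dummy-variable arithmetic behind those cluster averages is easy to check. The following few lines of Python are only a sketch of that bookkeeping, not the regression itself: each category switches one indicator on, “other” countries get the intercept alone, and rounding reproduces the averages just mentioned.

# Coefficients as reported above; "other" countries get only the intercept.
INTERCEPT = 84.0
COEFFICIENTS = {"black": -13.2, "other": 0.0, "white": 12.4, "east_asian": 20.7}

def estimated_iq(category):
    """Estimated national-average IQ for a country's dominant category."""
    return INTERCEPT + COEFFICIENTS[category]

for category in ("black", "other", "white", "east_asian"):
    print(f"{category:>10}: {estimated_iq(category):.0f}")
# Prints (rounded): black 71, other 84, white 96, east_asian 105.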

Thus this graph, where each “row” (from bottom to top) corresponds to black, “other,” white, and East Asian:

[Figure: estimated vs. actual national IQ]

The dotted line represents a perfect correlation. The regression yields a less-than-perfect relationship between race and IQ, but a strong one. That strong relationship is also seen in the following graph:

[Figure: national IQ vs. distance from the equator]

There’s a definite pattern — if a somewhat loose one — that goes from low-IQ black countries near the equator to higher-IQ white countries farther from the equator. The position of East Asian countries, which is toward the middle latitudes rather than the highest ones, points to something special in the relationship between East Asian genetic heritage and IQ.

*     *     *

Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race

Words Fail Us

Regular readers of this blog know that I seldom use “us” and “we.” Those words are too often appropriated by writers who say such things as “we the people,” and who characterize as “society” the geopolitical entity known as the United States. There is no such thing as “we the people,” and the United States is about as far from being a “society” as Hillary Clinton is from being president (I hope).

There are nevertheless some things that are so close to being universal that it’s fair to refer to them as characteristics of “us” and “we.” The inadequacy of language is one of those things.

Why is that the case? Try to describe in words a person who is beautiful or handsome to you, and why. It’s hard to do, if not impossible. There’s something about the combination of that person’s features, coloring, expression, etc., that defies anything like a complete description. You may have an image of that person in your mind, and you may know that — to you — the person is beautiful or handsome. But you just can’t capture in words all of those attributes. Why? Because the person’s beauty or handsomeness is a whole thing. It’s everything taken together, including subtle things that nestle in your subconscious mind but don’t readily swim to the surface. One such thing could be the relative size of the person’s upper and lower lips in the context of that particular person’s face; whereas, the same lips on another face might convey plainness or ugliness.

Words are inadequate because they describe one thing at a time — the shape of a nose, the slant of a brow, the prominence of a cheekbone. And the sum of those words isn’t the same thing as your image of the beautiful or handsome person. In fact, the sum of those words may be meaningless to a third party, who can’t begin to translate your words into an image of the person you think of as beautiful or handsome.

Yes, there are (supposedly) general rules about beauty and handsomeness. One of them is the symmetry of a person’s features. But that leaves a lot of ground uncovered. And it focuses on one aspect of a person’s face, rather than all of its aspects, which are what you take into account when you judge a person beautiful or handsome.

And, of course, there are many disagreements about who is beautiful or handsome. It’s a matter of taste. Where does the taste come from? Who knows? I have a theory about why I prefer dark-haired women to women whose hair is blonde, red, or medium-to-light brown: My mother was dark-haired, and photographs of her show that she was beautiful (in my opinion) as a young woman. (Despite that, I never thought of her as beautiful because she was just Mom to me.) You can come up with your own theories — and I expect that no two of them will be the same.

What about facts? Isn’t it possible to put facts into words? Not really, and for much the same reason that it’s impossible to describe beauty, handsomeness, love, hate, or anything “subjective” or “emotional.” Facts, at bottom, are subjective, and sometimes even emotional.

Let’s take a “fact” at random: the color red. We can all agree as to whether something looks red, can’t we? Even putting aside people who are color-blind, the answer is: not necessarily. For one thing red is defined as having a “predominant light wavelength of roughly 620–740 nanometers.” “Predominant” and “roughly” are weasel-words. Clearly, there’s no definite point on the visible spectrum where light changes from orange to red. If you think there is, just look at this chart and tell me where it happens. So red comes in shades, which various people describe variously: orange-red and reddish-orange, for example.

Not only that, but the visible spectrum

does not … contain all the colors that the human eyes and brain can distinguish. Unsaturated colors such as pink, or purple variations such as magenta, are absent, for example, because they can be made only by a mix of multiple wavelengths.

Thus we have magenta, fuchsia, blood-red, scarlet, crimson, vermillion, maroon, ruby, and even the many shades of pink — some are blends, some are represented by narrow segments of the light spectrum. Do all of those kinds of red have a clear definition, or are they defined by the beholder? Well, some may be easy to distinguish from others, but the distinctions between them remain arbitrary. Where does scarlet or magenta become vermillion?

In any event, how do you describe a color (whatever you call it) in words? Referring to its wavelength or composition in terms of other colors or its relation to other colors is no help. Wavelength really is meaningless unless you can show an image of the visible spectrum to someone who perceives colors exactly as you do, and point to red — or what you call red. In doing so, you will have pointed to a range of colors, not to red, because there is no red red and no definite boundary between orange and red (or yellow and orange, or green and yellow, etc.).

Further, you won’t have described red in words. And you can’t — without descending into tautologies — because red (as you visualize it) is what’s in your mind. It’s not an objective fact.

My point is that description isn’t the same as definition. You can define red (however vaguely) as a color which has a predominant light wavelength of roughly 620–740 nanometers. But you can’t describe it. Why? Because red is just a concept.

A concept isn’t a real thing that you can see, hear, taste, touch, smell, eat, drink from, drive, etc. How do you describe a concept? You define it in terms of other concepts.

Moving on from color, I’ll take gross domestic product (GDP) as another example. GDP is an estimate of the dollar value of the output of finished goods and services produced in the United States during a particular period of time. Wow, what a string of concepts. And every one of them must be defined, in turn. Some of them can be illustrated by referring to real things; a haircut is a kind of service, for example. But it’s impossible to describe GDP and its underlying concepts because they’re all abstractions, or representations of indescribable conglomerations of real things.

All right, you say, it’s impossible to describe concepts, but surely it’s possible to describe things. People do it all the time. See that ugly, dark-haired, tall guy standing over there? I’ve already dealt with ugly, indirectly, in my discussion of beauty or handsomeness. Ugliness, like beauty, is just a concept, the idea of which differs from person to person. What about tall? It’s a relative term, isn’t it? You can measure a person’s height, but whether or not you consider him tall depends on where and when you live and the range of heights you’re used to encountering. A person who seems tall to you may not seem tall to your taller brother. Dark-haired will evoke different pictures in different minds — ranging from jet-black to dark brown and even auburn.

But if you point to the guy you call the ugly, dark-haired, tall guy, I may agree with you that he’s ugly, dark-haired, and tall. Or I may disagree with you, but gain some understanding of what you mean by ugly, dark-haired, and tall.

And therein lies the tale of how people are able to communicate with each other, despite their inability to describe concepts or to define them without going in endless circles and chains of definitions. First, human beings possess central nervous systems and sensory organs that are much alike, though within a wide range of variations (e.g., many people must wear glasses with an almost-infinite variety of corrections, hearing aids are programmed to an almost-infinite variety of settings, sensitivity to touch varies widely, reaction times vary widely). Nevertheless, most people seem to perceive the same color when light with a wavelength of, say, 700 nanometers strikes the retina. The same goes for sounds, tastes, smells, etc., as various external stimuli are detected by various receptors. Those perceptions then acquire agreed definitions through acculturation. For example, an object that reflects light with a wavelength of 700 nanometers becomes known as red; a sound with a certain frequency becomes known as middle C; a certain taste is characterized as bitter, sweet, or sour.

Objects acquire names in the same way. For example, a square piece of cloth that’s wrapped around a person’s head or neck becomes a bandana, and a longish, curved, yellow-skinned fruit with a soft interior becomes a banana. And so I can visualize a woman wearing a red bandana and eating a banana.

There is less agreement about “soft” concepts (e.g., beauty) because they’re based not just on “hard” facts (e.g., the wavelength of light), but on judgments that vary from person to person. A face that’s cute to one person may be beautiful to another person, but there’s no rigorous division between cute and beautiful. Both convey a sense of physical attractiveness that many persons will agree upon, but which won’t yield a consistent image. A very large percentage of Caucasian males (of a certain age) would agree that Ingrid Bergman and Hedy Lamarr were beautiful, but there’s nothing like a consensus about Katharine Hepburn (perhaps striking but not beautiful) or Jean Arthur (perhaps cute but not beautiful).

Other concepts, like GDP, acquire seemingly rigorous definitions, but they’re based on strings of seemingly rigorous definitions, the underpinnings of which may be as squishy as the flesh of a banana (e.g., the omission of housework and the effects of pollution from GDP). So if you’re familiar with the definitions of the definitions, you have a good grasp of the concepts. If you aren’t, you don’t. But if you have a good grasp of the numbers underlying the definitions of definitions, you know that the top-level concept is actually vague and hard to pin down. The numbers not only omit important things but are only estimates, and often are estimates of disparate things that are grouped because they’re judged to be “alike enough.”

Acculturation in the form of education is a way of getting people to grasp concepts that have widely agreed definitions. Mathematics, for example, is nothing but concepts, all the way down. And to venture beyond arithmetic is to venture into a world of ideas that’s held together by definitions that rest upon definitions and end in nothing real. Unless you’re one of those people who insists that mathematics is the “real” stuff of which the universe is made, which is nothing more than a leap of faith. (Math, by the way, is nothing but words in shorthand.)

And so, human beings are able to communicate and (usually) understand each other because of their physical and cultural similarities, which include education in various and sundry subjects. Those similarities also enable people of different cultures and languages to translate their concepts (and the words that define them) from one language to another.

Those similarities also enable people to “feel” what another person is feeling when he says that he’s happy, sad, drunk, or whatever. There’s the physical similarity — the physiological changes that usually occur when a person becomes what he thinks of as happy, etc. And there’s acculturation — the acquired knowledge that people feel happy (or whatever) for certain reasons (e.g., a marriage, the birth of a child) and display their happiness in certain ways (e.g., a broad smile, a “jump for joy”).

A good novelist, in my view, is one who knows how to use words that evoke vivid mental images of the thoughts, feelings, and actions of characters, and the settings in which the characters act out the plot of a novel. A novelist who can do that and also tell a good story — one with an engaging or suspenseful plot — is thereby a great novelist. I submit that a good or great novelist (an admittedly vague concept) is worth almost any number of psychologists and psychiatrists, whose vision of the human mind is too rigid to grasp the subtleties that give it life.

But good and great novelists are thin on the ground. That is to say, there are relatively few persons among us who are able to grasp and communicate effectively a broad range of the kinds of thoughts and feelings that lurk in the minds of human beings. And even those few have their blind spots. Most of them, it seems to me, are persons of the left, and are therefore unable to empathize with the thoughts and feelings of working-class people who seethe with resentment at the favoritism toward and fawning over blacks, illegal immigrants, gender-confused persons, and other so-called victims. In fact, those few otherwise perceptive and articulate writers make it a point to write off working-class people as racists, bigots, and ignoramuses.

There are exceptions, of course. A contemporary exception is Tom Wolfe. But his approach to class issues is top-down rather than bottom-up.

Which just underscores my point that we human beings find it hard to formulate and organize our own thoughts and feelings about the world around us and the other people in it. And we’re practically tongue-tied when it comes to expressing those thoughts and feelings to others. We just don’t know ourselves well enough to explain ourselves to others. And our feelings — such as our political preferences, which probably are based more on temperament than on facts — get in the way.

Love, to take a leading example, is a feeling that just is. The why and wherefore of it is beyond our ability to understand and explain. Some of the feelings attached to it can be expressed in prose, poetry, and song, but those are superficial expressions that don’t capture the depth of love and why it exists.

The world of science is of no real help. Even if feelings of love could be expressed in scientific terms — the action of hormone A on brain region X — that would be worse than useless. It would reduce love to chemistry, when we know that there’s more to it than that. Why, for example, is hormone A activated by the presence or thought of person M but not person N, even when they’re identical twins?

The world of science is of no real help in “getting to the bottom of things.” Science is an infinite regress. S is explained in terms of T, which is explained in terms of U, which is explained in terms of V, and on and on. For example, there was the “indivisible” atom, which turned out to consist of electrons, protons, and neutrons. But electrons have turned out to be more complicated than originally believed, and protons and neutrons have been found to be made of smaller particles with distinctive characteristics. So it’s reasonable to ask whether all of the particles now considered elementary are really indivisible. Perhaps there are other, more-elementary particles yet to be hypothesized and discovered. And even if all of the truly elementary particles are discovered, scientists will still be unable to explain what those particles really “are.”

Words fail us.

*      *      *

Related reading:
Modeling Is Not Science
Physics Envy
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
We, the Children of the Enlightenment
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Mysteries: Sacred and Profane
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
“Settled Science” and the Monty Hall Problem
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Taleb’s Ruinous Rhetoric

Intelligence, Assortative Mating, and Social Engineering

UPDATED 11/18/16 (AT THE END)

What is intelligence? Why does it matter in “real life”? Are intelligence-driven “real life” outcomes — disparities in education and income — driving Americans apart? In particular, is the intermarriage of smart, educated professionals giving rise to a new hereditary class whose members have nothing in common with less-intelligent, poorly educated Americans, who will fall farther and farther behind economically? And if so, what should be done about it, if anything?

INTELLIGENCE AND WHY IT MATTERS IN “REAL LIFE”

Thanks to a post at Dr. James Thompson’s blog, Psychological comments, I found Dr. Linda Gottfredson’s paper, “Why g Matters: The Complexity of Everyday Life” (Intelligence 24:1, 79-132, 1997). The g factor — or just plain g — is general intelligence. I quote Gottfredson’s article at length because it makes several key points about intelligence and why it matters in “real life.” For ease of reading, I’ve skipped over the many citations and supporting tables that lend authority to the article.

[W]hy does g have such pervasive practical utility? For example, why is a higher level of g a substantial advantage in carpentry, managing people, and navigating vehicles of all kinds? And, very importantly, why do those advantages vary in the ways they do? Why is g more helpful in repairing trucks than in driving them for a living? Or more for doing well in school than staying out of trouble?…

Also, can we presume that similar activities in other venues might be similarly affected by intelligence? For example, if differences in intelligence change the odds of effectively managing and motivating people on the job, do they also change the odds of successfully dealing with one’s own children? If so, why, and how much?

The heart of the argument I develop here is this: For practical purposes, g is the ability to deal with cognitive complexity — in particular, with complex information processing. All tasks in life involve some complexity, that is, some information processing. Life tasks, like job duties, vary greatly in their complexity (g loadedness). This means that the advantages of higher g are large in some situations and small in others, but probably never zero….

Although researchers disagree on how they define intelligence, there is virtual unanimity that it reflects the ability to reason, solve problems, think abstractly, and acquire knowledge. Intelligence is not the amount of information people know, but their ability to recognize, acquire, organize, update, select, and apply it effectively. In educational contexts, these complex mental behaviors are referred to as higher order thinking skills.

Stated at a more molecular level, g is the ability to mentally manipulate information — “to fill a gap, turn something over in one’s mind, make comparisons, transform the input to arrive at the output”….

[T]he active ingredient in test items seems to reside in their complexity. Any kind of item content (words, numbers, figures, pictures, symbols, blocks, mazes, and so on) can be used to create less to more g-loaded tests and test items. Differences in g loading seem to arise from variations in items’ cognitive complexity and thus the amount of mental manipulation they require….

Life is replete with uncertainty, change, confusion, and misinformation, sometimes minor and at times massive. From birth to death, life continually requires us to master abstractions, solve problems, draw inferences, and make judgments on the basis of inadequate information. Such demands may be especially intense in school, but they hardly cease when one walks out the school door. A close look at job duties in the workplace shows why….

When job analysis data for any large set of jobs are factor analyzed, they always reveal the major distinction among jobs to be the mental complexity of the work they require workers to perform. Arvey’s job analysis is particularly informative in showing that job complexity is quintessentially a demand for g….

Not surprisingly, jobs high in overall complexity require more education (.86 and .88), training (.76 and .51), and experience (.62) — and are viewed as the most prestigious (.82). These correlations have sometimes been cited in support of the training hypothesis discussed earlier, namely, that sufficient training can render differences in g moot.

However, prior training and experience in a job never fully prepare workers for all contingencies. This is especially so for complex jobs, partly because they require workers to continually update job knowledge (.85). As already suggested, complex tasks often involve not only the appropriate application of old knowledge, but also the quick apprehension and use of new information in changing environments….

Many of the duties that correlate highly with overall job complexity suffuse our lives: advising, planning, negotiating, persuading, supervising others, to name just a few….

The National Adult Literacy Survey (NALS) of 26,000 persons aged 16 and older is one in a series of national literacy assessments developed by the Educational Testing Service (ETS) for the U.S. Department of Education. It is a direct descendent, both conceptually and methodologically, of the National Assessment of Educational Progress (NAEP) studies of reading among school-aged children and literacy among adults aged 21 to 25.

NALS, like its NAEP predecessors, is extremely valuable in understanding the complexity of everyday life and the advantages that higher g provides. In particular, NALS provides estimates of the proportion of adults who are able to perform everyday tasks of different complexity levels….

A look at the items in Figure 2 reveals their general relevance to social life. These are not obscure skills or bits of knowledge whose value is limited to academic pursuits. They are skills needed to carry out routine transactions with banks, social welfare agencies, restaurants, the post office, and credit card agencies; to understand contrasting views on public issues (fuel efficiency, parental involvement in schools); and to comprehend the events of the day (sports stories, trends in oil exports) and one’s personal options (welfare benefits, discount for early payment of bills, relative merits between two credit cards)….

[A]lthough the NALS items represent skills that are valuable in themselves, they are merely samples from broad domains of such skill. As already suggested, scores on the NALS reflect people’s more general ability (the latent trait) to master on a routine basis skills of different information-processing complexity….

[I]ndeed, the five levels of NALS literacy are associated with very different odds of economic well-being….

Each higher level of proficiency substantially improves the odds of economic well-being, generally halving the percentage living in poverty and doubling the percentage employed in the professions or management….

The effects of intelligence, like other psychological traits, are probabilistic, not deterministic. Higher intelligence improves the odds of success in school and work. It is an advantage, not a guarantee. Many other things matter.

However, the odds disfavor low-IQ people just about everywhere they turn. The differences in odds are relatively small in some aspects of life (law-abidingness), moderate in some (income), and large in others (educational, occupational attainment). But they are consistent. At a minimum (say, under conditions of simple tasks and equal prior knowledge), higher levels of intelligence act like the small percentage (2.7%) favoring the house in roulette at Monte Carlo — it yields enormous gains over the long run. Similarly, all of us make stupid mistakes from time to time, but higher intelligence helps protect us from accumulating a long, debilitating record of them.

To mitigate unfavorable odds attributable to low IQ, an individual must have some equally pervasive compensatory advantage (family wealth, winning personality, enormous resolve, strength of character, an advocate or benefactor, and the like). Such compensatory advantages may frequently soften but probably never eliminate the cumulative impact of low IQ. Conversely, high IQ acts like a cushion against some of life’s adverse circumstances, perhaps partly accounting for why some children are more resilient than others in the face of deprivation and abuse….

For the top 5% of the population (over IQ 125), success is really “yours to lose.” These people meet the minimum intelligence requirements of all occupations, are highly sought after for their extreme trainability, and have a relatively easy time with the normal cognitive demands of life. Their jobs are often high pressure, emotionally draining, and socially demanding …, but these jobs are prestigious and generally pay well. Although very high IQ individuals share many of the vicissitudes of life, such as divorce, illness, and occasional unemployment, they rarely become trapped in poverty or social pathology. They may be saints or sinners, healthy or unhealthy, content or emotionally troubled. They may or may not work hard and apply their talents to get ahead, and some will fail miserably. But their lot in life and their prospects for living comfortably are comparatively rosy.

There are, of course, multiple causes of different social and economic outcomes in life. However, g seems to be at the center of the causal nexus for many. Indeed, g is more important than social class background in predicting whether White adults obtain college degrees, live in poverty, are unemployed, go on welfare temporarily, divorce, bear children out of wedlock, and commit crimes.

There are many other valued human traits besides g, but none seems to affect individuals’ life chances so systematically and so powerfully in modern life as does g. To the extent that one is concerned about inequality in life chances, one must be concerned about differences in g….

Society has become more complex (and more g loaded) as we have entered the information age and postindustrial economy. Major reports on the U.S. schools, workforce, and economy routinely argue, in particular, that the complexity of work is rising.

Where the old industrial economy rewarded mass production of standardized products for large markets, the new postindustrial economy rewards the timely customization and delivery of high-quality, convenient products for increasingly specialized markets. Where the old economy broke work into narrow, routinized, and closely supervised tasks, the new economy increasingly requires workers to work in cross-functional teams, gather information, make decisions, and undertake diverse, changing, and challenging sets of tasks in a fast-changing and dynamic global market….

Such reports emphasize that the new workplace puts a premium on higher order thinking, learning, and information-processing skills — in other words, on intelligence. Gone are the many simple farm and factory jobs where a strong back and willing disposition were sufficient to sustain a respected livelihood, regardless of IQ. Fading too is the need for highly developed perceptual-motor skills, which were once critical for operating and monitoring machines, as technology advances.

Daily life also seems to have become considerably more complex. For instance, we now have a largely moneyless economy (checkbooks, credit cards, and charge accounts) that requires more abstract thought, foresight, and complex management. More self-service, whether in banks or hardware stores, throws individuals back onto their own capabilities. We struggle today with a truly vast array of continually evolving complexities: the changing welter of social services across diverse, large bureaucracies; increasing options for health insurance, cable, and phone service; the steady flow of debate over health hazards in our food and environment; the maze of transportation systems and schedules; the mushrooming array of over-the-counter medicines in the typical drugstore; new technologies (computers) and forms of communication (cyberspace) for home as well as office.

Brighter individuals, families, and communities will be better able to capitalize on the new opportunities this increased complexity brings. The least bright will use them less effectively, if at all, and so fail to reap in comparable measure any benefits they offer. There is evidence that increasing proportions of individuals with below-average IQs are having trouble adapting to our increasingly complex modern life and that social inequality along IQ lines is increasing.

CHARLES MURRAY AND FISHTOWN VS. BELMONT

At the end of the last sentence, Gottfredson refers to Richard J. Herrnstein and Charles Murray’s The Bell Curve: Intelligence and Class Structure in American Life (1994). In a later book, Coming Apart: The State of White America, 1960-2010 (2012), Murray tackles the issue of social (and economic) inequality. Kay S. Hymowitz summarizes Murray’s thesis:

According to Murray, the last 50 years have seen the emergence of a “new upper class.” By this he means something quite different from the 1 percent that makes the Occupy Wall Streeters shake their pitchforks. He refers, rather, to the cognitive elite that he and his coauthor Richard Herrnstein warned about in The Bell Curve. This elite is blessed with diplomas from top colleges and with jobs that allow them to afford homes in Nassau County, New York and Fairfax County, Virginia. They’ve earned these things not through trust funds, Murray explains, but because of the high IQs that the postindustrial economy so richly rewards.

Murray creates a fictional town, Belmont, to illustrate the demographics and culture of the new upper class. Belmont looks nothing like the well-heeled but corrupt, godless enclave of the populist imagination. On the contrary: the top 20 percent of citizens in income and education exemplify the core founding virtues Murray defines as industriousness, honesty, marriage, and religious observance….

The American virtues are not doing so well in Fishtown, Murray’s fictional working-class counterpart to Belmont. In fact, Fishtown is home to a “new lower class” whose lifestyle resembles The Wire more than Roseanne. Murray uncovers a five-fold increase in the percentage of white male workers on disability insurance since 1960, a tripling of prime-age men out of the labor force—almost all with a high school degree or less—and a doubling in the percentage of Fishtown men working less than full-time…..

Most disastrous for Fishtown residents has been the collapse of the family, which Murray believes is now “approaching a point of no return.” For a while after the 1960s, the working class hung on to its traditional ways. That changed dramatically by the 1990s. Today, under 50 percent of Fishtown 30- to 49-year-olds are married; in Belmont, the number is 84 percent. About a third of Fishtowners of that age are divorced, compared with 10 percent of Belmonters. Murray estimates that 45 percent of Fishtown babies are born to unmarried mothers, versus 6 to 8 percent of those in Belmont.

And so it follows: Fishtown kids are far less likely to be living with their two biological parents. One survey of mothers who turned 40 in the late nineties and early 2000s suggests the number to be only about 30 percent in Fishtown. In Belmont? Ninety percent—yes, ninety—were living with both mother and father….

For all their degrees, the upper class in Belmont is pretty ignorant about what’s happening in places like Fishtown. In the past, though the well-to-do had bigger houses and servants, they lived in towns and neighborhoods close to the working class and shared many of their habits and values. Most had never gone to college, and even if they had, they probably married someone who hadn’t. Today’s upper class, on the other hand, has segregated itself into tony ghettos where they can go to Pilates classes with their own kind. They marry each other and pool their incomes so that they can move to “Superzips”—the highest percentiles in income and education, where their children will grow up knowing only kids like themselves and go to college with kids who grew up the same way.

In short, America has become a segregated, caste society, with a born elite and an equally hereditary underclass. A libertarian, Murray believes these facts add up to an argument for limited government. The welfare state has sapped America’s civic energy in places like Fishtown, leaving a population of disengaged, untrusting slackers….

But might Murray lay the groundwork for fatalism of a different sort? “The reason that upper-middle-class children dominate the population of elite schools,” he writes, “is that the parents of the upper-middle class now produce a disproportionate number of the smartest children.” Murray doesn’t pursue this logic to its next step, and no wonder. If rich, smart people marry other smart people and produce smart children, then it follows that the poor marry—or rather, reproduce with—the less intelligent and produce less intelligent children. [“White Blight,” City Journal, January 25, 2012]

In the last sentence of that quotation, Hymowitz alludes to assortative mating.

ADDING 2 AND 2 TO GET ?

So intelligence is real; it’s not confined to “book learning”; it has a strong influence on one’s education, work, and income (i.e., class); and because of those things it leads to assortative mating, which (on balance) reinforces class differences. Or so the story goes.

But assortative mating is nothing new. What might be new, or more prevalent than in the past, is a greater tendency for intermarriage within the smart-educated-professional class instead of across class lines, and for the smart-educated-professional class to live in “enclaves” with their like, and to produce (generally) bright children who’ll (mostly) follow the lead of their parents.

How great are those tendencies? And in any event, so what? Is there a potential social problem that will  have to be dealt with by government because it poses a severe threat to the nation’s political stability or economic well-being? Or is it just a step in the voluntary social evolution of the United States — perhaps even a beneficial one?

Is there a growing tendency toward intermarriage among the smart-educated-professional class? It depends on how you look at it. Here, for example, are excerpts of commentaries about a paper by Jeremy Greenwood et al., “Marry Your Like: Assortative Mating and Income Inequality” (American Economic Review, 104:5, 348-53, May 2014 — also published as NBER Working Paper 19289):

[T]he abstract is this:

Has there been an increase in positive assortative mating? Does assortative mating contribute to household income inequality? Data from the United States Census Bureau suggests there has been a rise in assortative mating. Additionally, assortative mating affects household income inequality. In particular, if matching in 2005 between husbands and wives had been random, instead of the pattern observed in the data, then the Gini coefficient would have fallen from the observed 0.43 to 0.34, so that income inequality would be smaller. Thus, assortative mating is important for income inequality. The high level of married female labor-force participation in 2005 is important for this result.

That is quite a significant effect. [Tyler Cowen, “Assortative Mating and Income Inequality,” Marginal Revolution, January 27, 2014]

__________

The wage gap between highly and barely educated workers has grown, but that could in theory have been offset by the fact that more women now go to college and get good jobs. Had spouses chosen each other at random, many well-paid women would have married ill-paid men and vice versa. Workers would have become more unequal, but households would not. With such “random” matching, the authors estimate that the Gini co-efficient, which is zero at total equality and one at total inequality, would have remained roughly unchanged, at 0.33 in 1960 and 0.34 in 2005.

But in reality the highly educated increasingly married each other. In 1960 25% of men with university degrees married women with degrees; in 2005, 48% did. As a result, the Gini rose from 0.34 in 1960 to 0.43 in 2005.

Assortative mating is hardly mysterious. People with similar education tend to work in similar places and often find each other attractive. On top of this, the economic incentive to marry your peers has increased. A woman with a graduate degree whose husband dropped out of high school in 1960 could still enjoy household income 40% above the national average; by 2005, such a couple would earn 8% below it. In 1960 a household composed of two people with graduate degrees earned 76% above the average; by 2005, they earned 119% more. Women have far more choices than before, and that is one reason why inequality will be hard to reverse. [The Economist, “Sex, Brains, and Inequality,” February 8, 2014]

__________

I’d offer a few caveats:

  • Comparing observed GINI with a hypothetical world in which marriage patterns are completely random is a bit misleading. Marriage patterns weren’t random in 1960 either, and the past popularity of “Cinderella marriages” is more myth than reality. In fact, if you look at the red diagonals [in the accompanying figures], you’ll notice that assortative mating has actually increased only modestly since 1960.
  • So why bother with a comparison to a random counterfactual? That’s a little complicated, but the authors mainly use it to figure out why 1960 is so different from 2005. As it turns out, they conclude that rising income inequality isn’t really due to a rise in assortative mating per se. It’s mostly due to the simple fact that more women work outside the home these days. After all, who a man marries doesn’t affect his household income much if his wife doesn’t have an outside job. But when women with college degrees all started working, it caused a big increase in upper class household incomes regardless of whether assortative mating had increased.
  • This can get to sound like a broken record, but whenever you think about rising income inequality, you always need to keep in mind that over the past three decades it’s mostly been a phenomenon of the top one percent. It’s unlikely that either assortative mating or the rise of working women has had a huge impact at those income levels, and therefore it probably hasn’t had a huge impact on increasing income inequality either. (However, that’s an empirical question. I might be wrong about it.)

[Kevin Drum, “No, the Decline of Cinderella Marriages Probably Hasn’t Played a Big Role in Rising Income Inequality,” Mother Jones, January 27, 2014]

In sum:

  • The rate of intermarriage at every level of education rose slightly between 1960 and 2005.
  • But the real change between 1960 and 2005 was that more and more women worked outside the home — a state of affairs that “progressives” applaud. It is that change which has led to a greater disparity between the household incomes of poorly educated couples and those of highly educated couples. (Hereinafter, I omit the “sneer quotes” around “progressives,” “progressive,” and “Progressivism,” but only to eliminate clutter.)
  • While that was going on, the measure of inequality in the incomes of individuals didn’t change. (Go to “In Which We’re Vindicated. Again,” Political Calculations, January 28, 2014, and scroll down to the figure titled “GINI Ratios for U.S. Households, Families, and Individuals, 1947-2010.”)
  • Further, as Kevin Drum notes, the rise in income inequality probably has almost nothing to do with a rise in the rate of assortative mating and much to do with the much higher incomes commanded by executives, athletes, entrepreneurs, financiers, and “techies” — a development that shouldn’t bother anyone, even though it does bother a lot of people. (See my post “Mass (Economic) Hysteria: Income Inequality and Related Themes,” and follow the many links therein to other posts of mine and to the long list of related readings.)
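To make the mechanism summarized above concrete, here is a toy simulation in Python. It is a sketch under invented assumptions (lognormal individual incomes, every spouse working, perfect sorting by income rank in the assortative case), not a re-implementation of Greenwood et al.; its only point is that the same individual incomes produce a higher household Gini when spouses are matched by rank than when they are matched at random.

import random

def gini(values):
    """Gini coefficient via the sorted-rank formula."""
    values = sorted(values)
    n = len(values)
    weighted_sum = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * weighted_sum) / (n * sum(values)) - (n + 1) / n

random.seed(0)
N = 20_000

# Invented individual income distributions for husbands and wives.
men = sorted(random.lognormvariate(10.5, 0.6) for _ in range(N))
women = sorted(random.lognormvariate(10.3, 0.6) for _ in range(N))

# Assortative matching: pair incomes by rank.
assortative_households = [m + w for m, w in zip(men, women)]

# Random matching: shuffle one side before pairing.
shuffled_women = women[:]
random.shuffle(shuffled_women)
random_households = [m + w for m, w in zip(men, shuffled_women)]

print(f"Household Gini, assortative matching: {gini(assortative_households):.2f}")
print(f"Household Gini, random matching:      {gini(random_households):.2f}")

The gap between the two numbers is the sorting effect. As Drum notes above, how much of the real-world rise in household inequality that effect explains depends heavily on how many spouses work and on what is happening at the very top of the distribution.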

Moreover, intergenerational mobility in the United States hasn’t changed in the past several decades:

Our analysis of new administrative records on income shows that children entering the labor market today have the same chances of moving up in the income distribution relative to their parents as children born in the 1970s. Putting together our results with evidence from Hertz (2007) and Lee and Solon (2009) that intergenerational elasticities of income did not change significantly between the 1950 and 1970 birth cohorts, we conclude that rank-based measures of social mobility have remained remarkably stable over the second half of the twentieth century in the United States….

The lack of a trend in intergenerational mobility contrasts with the increase in income inequality in recent decades. This contrast may be surprising given the well-known negative correlation between inequality and mobility across countries (Corak 2013). Based on this “Great Gatsby curve,” Krueger (2012) predicted that recent increases in inequality would increase the intergenerational persistence of income by 20% in the U.S. One explanation for why this prediction was not borne out is that much of the increase in inequality has been driven by the extreme upper tail (Piketty and Saez 2003, U.S. Census Bureau 2013). In [Chetty et al. 2014, we show that there is little or no correlation between mobility and extreme upper tail inequality – as measured e.g. by top 1% income shares – both across countries and across areas within the U.S….

The stability of intergenerational mobility is perhaps more surprising in light of evidence that socio-economic gaps in early indicators of success such as test scores (Reardon 2011), parental inputs (Ramey and Ramey 2010), and social connectedness (Putnam, Frederick, and Snellman 2012) have grown over time. Indeed, based on such evidence, Putnam, Frederick, and Snellman predicted that the “adolescents of the 1990s and 2000s are yet to show up in standard studies of intergenerational mobility, but the fact that working class youth are relatively more disconnected from social institutions, and increasingly so, suggests that mobility is poised to plunge dramatically.” An important question for future research is why such a plunge in mobility has not occurred. [Raj Chetty et al., “Is the United States Still a Land of Opportunity? Recent Trends in Intergenerational Mobility,” NBER Working Paper 19844, January 2014]

Figure 3 of the paper by Chetty et al. nails it down:

[Figure 3 from Chetty et al.]

The results for ages 29-30 are close to the results for age 26.

What does it all mean? For one thing, it means that the children of top-quintile parents reach the top quintile about 30 percent of the time. For another thing, it means that, unsurprisingly, the children of top-quintile parents reach the top quintile more often than children of second-quintile parents, who reach the top quintile more often than children of third-quintile parents, and so on.

There is nevertheless a growing, quasi-hereditary, smart-educated-professional-affluent class. It’s almost a sure thing, given the rise of the two-professional marriage, and given the correlation between the intelligence of parents and that of their children, which may be as high as 0.8. However, as a fraction of the total population, membership in the new class won’t grow as fast as membership in the “lower” classes because birth rates are inversely related to income.

And the new class probably will be isolated from the “lower” classes. Most members of the new class work and live where their interactions with persons of “lower” classes are restricted to boss-subordinate and employer-employee relationships. Professionals, for the most part, work in office buildings, isolated from the machinery and practitioners of “blue collar” trades.

But the segregation of housing on class lines is nothing new. People earn more, in part, so that they can live in nicer houses in nicer neighborhoods. And the general rise in the real incomes of Americans has made it possible for persons in the higher income brackets to afford more luxurious homes in more luxurious neighborhoods than were available to their parents and grandparents. (The mansions of yore, situated on “Mansion Row,” were occupied by the relatively small number of families whose income and wealth set them widely apart from the professional class of the day.) So economic segregation is, and should be, as unsurprising as a sunrise in the east.

WHAT’S THE PROGRESSIVE SOLUTION TO THE NON-PROBLEM?

None of this will assuage progressives, who like to claim that intelligence (like race) is a social construct (while also claiming that Republicans are stupid); who believe that incomes should be more equal (theirs excepted); who believe in “diversity,” except when it comes to where most of them choose to live and school their children; and who also believe that economic mobility should be greater than it is — just because. In their superior minds, there’s an optimum income distribution and an optimum degree of economic mobility — just as there is an optimum global temperature, which must be less than the ersatz one that’s estimated by combining temperatures measured under various conditions and with various degrees of error.

The irony of it is that the self-segregated, smart-educated-professional-affluent class is increasingly progressive. Consider the changing relationship between party preference and income:

[Figure: party preference vs. income, 2004–2016]
Source: K.K. Rebecca Lai et al., “How Trump Won the Election According to Exit Polls,” The New York Times, November 16, 2016.

The elections between 2004 and 2016 are indicated by the elbows in the zig-zag lines for each of the income groups. For example, among voters earning more than $200,000,  the Times estimates that almost 80 percent (+30) voted Republican in 2004, as against 45 percent in 2008, 60 percent in 2012, and just over 50 percent in 2016. Even as voters in the two lowest brackets swung toward the GOP (and Trump) between 2004 and 2016, voters in the three highest brackets were swinging toward the Democrat Party (and Clinton).

Those shifts are consistent with the longer trend among persons with bachelor’s degrees and advanced degrees toward identification with the Democrat Party. See, for example, the graphs showing relationships between party affiliation and level of education at “Party Identification Trends, 1992-2014” (Pew Research Center, April 7, 2015). The smart-educated-professional-affluent class consists almost entirely of persons with bachelor’s and advanced degrees.

So I ask progressives, given that you have met the new class and it is you, what do you want to do about it? Is there a social problem that might arise from greater segregation of socio-economic classes, and is it severe enough to warrant government action? Or is the real “problem” the possibility that some people — and their children and children’s children, etc. — might get ahead faster than other people — and their children and children’s children, etc.?

Do you want to apply the usual progressive remedies? Penalize success through progressive (pun intended) personal income-tax rates and the taxation of corporate income; force employers and universities to accept low-income candidates (whites included) ahead of better-qualified ones (e.g., your children) from higher-income brackets; push “diversity” in your neighborhood by expanding the kinds of low-income housing programs that helped to bring about the Great Recession; boost your local property and sales taxes by subsidizing “affordable housing,” mandating the payment of a “living wage” by the local government, and applying that mandate to contractors seeking to do business with the local government; and on and on down the list of progressive policies?

Of course you do, because you’re progressive. And you’ll support such things in the vain hope that they’ll make a difference. But not everyone shares your naive beliefs in blank slates, equal ability, and social homogenization (which you don’t believe either, but are too wedded to your progressive faith to admit). What will actually be accomplished — aside from tokenism — is social distrust and acrimony, which had a lot to do with the electoral victory of Donald J. Trump, and economic stagnation, which hurts the “little people” a lot more than it hurts the smart-educated-professional-affluent class.

Where the progressive view fails, as it usually does, is in its linear view of the world and its dependence on government “solutions.” As the late Herbert Stein said, “If something cannot go on forever, it will stop.” The top 1-percent doesn’t go on forever; its membership is far more volatile than that of lower income groups. Neither does the top 10-percent or the top quintile. There’s always a top 1-percent, a top 10-percent, and a top quintile, by definition. But the names change constantly, as the paper by Chetty et al. attests.

The solution to the pseudo-problem of economic inequality is benign neglect, which isn’t a phrase that falls lightly from the lips of progressives. For more than 80 years, a lot of Americans — and too many pundits, professors, and politicians — have been led astray by that one-off phenomenon: the Great Depression. FDR and his sycophants and their successors created and perpetuated the myth that an activist government saved America from ruin and totalitarianism. The truth of the matter is that FDR’s policies prolonged the Great Depression by several years, and ushered in soft despotism, which is just “friendly” fascism. And all of that happened at the behest of people of above-average intelligence and above-average incomes.

Progressivism is the seed-bed of eugenics, and still promotes eugenics through abortion on demand (mainly to rid the world of black babies). My beneficial version of eugenics would be the sterilization of everyone with an IQ above 125 or top-40-percent income who claims to be progressive.

WHAT IS THE REAL PROBLEM? (ADDED 11/18/16)

It’s not the rise of the smart-educated-professional-affluent class. It’s actually a problem that has nothing to do with that. It’s the problem pointed to by Charles Murray, and poignantly underlined by a blogger named Tori:

Over the summer, my little sister had a soccer tournament at Bloomsburg University, located in central Pennsylvania. The drive there was about three hours and many of the towns we drove through shocked me. The conditions of these towns were terrible. Houses were falling apart. Bars and restaurants were boarded up. Scrap metal was thrown across front lawns. White, plastic lawn chairs were out on the drooping front porches. There were no malls. No outlets. Most of these small towns did not have a Walmart, only a dollar store and a few run down thrift stores. In almost every town, there was an abandoned factory.

My father, who was driving the car, turned to me and pointed out a Trump sign stuck in a front yard, surrounded by weeds and dead grass. “This is Trump country, Tori,” He said. “These people are desperate, trapped for life in these small towns with no escape. These people are the ones voting for Trump.”

My father understood Trump’s key to success, even though it would leave the media and half of America baffled and terrified on November 9th. Trump’s presidency has sparked nationwide outrage, disbelief and fear.

And, while I commend the passion many of my fellow millennials feels towards minorities and the fervency they oppose the rhetoric they find dangerous, I do find many of their fears unfounded.  I don’t find their fears unfounded because I negate the potency of racism. Or the potency of oppression. Or the potency of hate.

I find these fears unfounded because these people groups have an army fighting for them. This army is full of celebrities, politicians, billionaires, students, journalists and passionate activists. Trust me, minorities will be fine with an army like this defending them.

And, I would argue, that these minorities aren’t the only ones who need our help. The results of Tuesday night did not expose a red shout of racism but a red shout for help….

The majority of rhetoric going around says that if you’re white, you have an inherent advantage in life. I would argue that, at least for the members of these small impoverished communities, their whiteness only harms them as it keeps their immense struggles out of the public eye.

Rural Americans suffer from a poverty rate that is 3 points higher than the poverty rate found in urban America. In Southern regions, like Appalachia, the poverty rate jumps to 8 points higher than those found in cities. One fifth of the children living in poverty live rural areas. The children in this “forgotten fifth” are more likely to live in extreme poverty and live in poverty longer than their urban counterparts. 57% of these children are white….

Lauren Gurley, a freelance journalist, wrote a piece that focuses on why politicians, namely liberal ones, have written off rural America completely. In this column she quotes Lisa Pruitt, a law professor at the University of California who focuses many of her studies on life in rural America. Pruitt argues that mainstream America ignores poverty stricken rural America because the majority of America associates rural poverty with whiteness. She attributes America’s lack of empathy towards white poverty to the fact that black poverty is attributed to institutionalized racism, while white people have no reason to be poor, unless poor choices were made….

For arguably the first time since President Kennedy in the 1950’s, Donald Trump reached out to rural America. Trump spoke out often about jobs leaving the US, which has been felt deeply by those living in the more rural parts of the country. Trump campaigned in rural areas, while Clinton mostly campaigned in cities. Even if you do not believe Trump will follow through on his promises, he was still one of the few politicians who focused his vision on rural communities and said “I see you, I hear you and I want to help you.”

Trump was the “change” candidate of the 2016 election. Whether Trump proposed a good change or bad change is up to you, but it can’t be denied that Trump offered change. Hillary Clinton, on the other hand, was the establishment candidate. She ran as an extension of Obama and, even though this appealed to the majority of voters located in cities, those in the country were looking for something else. Obama’s policies did little to help  alleviate the many ailments felt by those in rural communities. In response, these voters came out for the candidate who offered to “make America great again.”

I believe that this is why rural, white communities voted for Trump in droves. I do not believe it was purely racism. I believe it is because no one has listened to these communities’ cries for help. The media and our politicians focus on the poverty and deprivation found in cities and, while bringing these issues to light is immensely important, we have neglected another group of people who are suffering. It is not right to brush off all of these rural counties with words like “deplorable” and not look into why they might have voted for Trump with such desperation.

It was not a racist who voted for Trump, but a father who has no possible way of providing a steady income for his family. It was not a misogynist who voted for Trump, but a mother who is feeding her baby mountain dew out of a bottle. It was not a deplorable who voted for Trump, but a young man who has no possibility of getting out of a small town that is steadily growing smaller.

The people America has forgotten about are the ones who voted for Donald Trump. It does not matter if you agree with Trump. It does not matter if you believe that these people voted for a candidate who won’t actually help them. What matters is that the red electoral college map was a scream for help, and we’re screaming racist so loud we don’t hear them. Hatred didn’t elect Donald Trump; People did. [“Hate Didn’t Elect Donald Trump; People Did,” Tori’s Thought Bubble, November 12, 2016]

Wise words. The best way to help the people of whom Tori writes — the people of Charles Murray’s Fishtown — is to ignore the smart-educated-professional-affluent class. It’s a non-problem, as discussed above. The best way to help the forgotten people of America is to unleash the latent economic power of the United States by removing the dead hand of government from the economy.

 

Taleb’s Ruinous Rhetoric

A correspondent sent me some links to writings of Nassim Nicholas Taleb. One of them is “The Intellectual Yet Idiot,” in which Taleb makes some acute observations; for example:

What we have been seeing worldwide, from India to the UK to the US, is the rebellion against the inner circle of no-skin-in-the-game policymaking “clerks” and journalists-insiders, that class of paternalistic semi-intellectual experts with some Ivy league, Oxford-Cambridge, or similar label-driven education who are telling the rest of us 1) what to do, 2) what to eat, 3) how to speak, 4) how to think… and 5) who to vote for.

But the problem is the one-eyed following the blind: these self-described members of the “intelligentsia” can’t find a coconut in Coconut Island, meaning they aren’t intelligent enough to define intelligence hence fall into circularities — but their main skill is capacity to pass exams written by people like them….

The Intellectual Yet Idiot is a production of modernity hence has been accelerating since the mid twentieth century, to reach its local supremum today, along with the broad category of people without skin-in-the-game who have been invading many walks of life. Why? Simply, in most countries, the government’s role is between five and ten times what it was a century ago (expressed in percentage of GDP)….

The IYI pathologizes others for doing things he doesn’t understand without ever realizing it is his understanding that may be limited. He thinks people should act according to their best interests and he knows their interests, particularly if they are “red necks” or English non-crisp-vowel class who voted for Brexit. When plebeians do something that makes sense to them, but not to him, the IYI uses the term “uneducated”. What we generally call participation in the political process, he calls by two distinct designations: “democracy” when it fits the IYI, and “populism” when the plebeians dare voting in a way that contradicts his preferences….

The IYI has been wrong, historically, on Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, transfats, freudianism, portfolio theory, linear regression, Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, selfish gene, Bernie Madoff (pre-blowup) and p-values. But he is convinced that his current position is right.

That’s all yummy red meat to a person like me, especially in the wake of November 8, which Taleb’s piece predates. But the last paragraph quoted above reminded me that I had read something critical about a paper in which Taleb applies the precautionary principle. So I found the paper, which is by Taleb (lead author) and several others. This is from the abstract:

Here we formalize PP [the precautionary principle], placing it within the statistical and probabilistic structure of “ruin” problems, in which a system is at risk of total failure, and in place of risk we use a formal “fragility” based approach. In these problems, what appear to be small and reasonable risks accumulate inevitably to certain irreversible harm….

Our analysis makes clear that the PP is essential for a limited set of contexts and can be used to justify only a limited set of actions. We discuss the implications for nuclear energy and GMOs. GMOs represent a public risk of global harm, while harm from nuclear energy is comparatively limited and better characterized. PP should be used to prescribe severe limits on GMOs. [“The Precautionary Principle (With Application to the Genetic Modification of Organisms),” Extreme Risk Initiative – NYU School of Engineering Working Paper Series]

Jon Entine demurs:

Taleb has recently become the darling of GMO opponents. He and four colleagues–Yaneer Bar-Yam, Rupert Read, Raphael Douady and Joseph Norman–wrote a paper, The Precautionary Principle (with Application to the Genetic Modification of Organisms), released last May and updated last month, in which they claim to bring risk theory and the Precautionary Principle to the issue of whether GMOs might introduce “systemic risk” into the environment….

The crux of his claims: There is no comparison between conventional selective breeding of any kind, including mutagenesis which requires the radiation or chemical dousing of seeds (and has resulted in more than 2500 varieties of fruits, vegetables, and nuts, almost all available in organic varieties) versus what he calls the top-down engineering that occurs when a gene is taken from an organism and transferred to another (ignoring that some forms of genetic engineering, including gene editing, do not involve gene transfers). Taleb goes on to argue that the chance of ecocide, or the destruction of the environment and potentially of humans, increases incrementally with each additional transgenic trait introduced into the environment. In other words, in his mind genetic engineering is a classic “black swan” scenario.

Neither Taleb nor any of the co-authors has any background in genetics or agriculture or food, or even familiarity with the Precautionary Principle as it applies to biotechology, which they liberally invoke to justify their positions….

One of the paper’s central points displays his clear lack of understanding of modern crop breeding. He claims that the rapidity of the genetic changes using the rDNA technique does not allow the environment to equilibrate. Yet rDNA techniques are actually among the safest crop breeding techniques in use today because each rDNA crop represents only 1-2 genetic changes that are more thoroughly tested than any other crop breeding technique. The number of genetic changes caused by hybridization or mutagenesis techniques are orders of magnitude higher than rDNA methods. And no testing is required before widespread monoculture-style release. Even selective breeding likely represents a more rapid change than rDNA techniques because of the more rapid employment of the method today.

In essence, Taleb’s ecocide argument applies just as much to other agricultural techniques in both conventional and organic agriculture. The only difference between GMOs and other forms of breeding is that genetic engineering is closely evaluated, minimizing the potential for unintended consequences. Most geneticists–experts in this field as opposed to Taleb–believe that genetic engineering is far safer than any other form of breeding.

Moreover, as Maxx Chatsko notes, the natural environment has encountered new traits from unthinkable events (extremely rare occurrences of genetic transplantation across continents, species and even planetary objects, or extremely rare single mutations that gave an incredible competitive advantage to a species or virus) that have led to problems and genetic bottlenecks in the past — yet we’re all still here and the biosphere remains tremendously robust and diverse. So much for Mr. Doomsday. [“Is Nassim Taleb a ‘Dangerous Imbecile’ or on [sic] the Pay of Anti-GMO Activists?,” Genetic Literacy Project, November 13, 2014 — see footnote for an explanation of “dangerous imbecile”]

Gregory Conko also demurs:

The paper received a lot of attention in scientific circles, but was roundly dismissed for being long on overblown rhetoric but conspicuously short on any meaningful reference to the scientific literature describing the risks and safety of genetic engineering, and for containing no understanding of how modern genetic engineering fits within the context of centuries of far more crude genetic modification of plants, animals, and microorganisms.

Well, Taleb is back, this time penning a short essay published on The New York Times’s DealB%k blog with co-author Mark Spitznagel. The authors try to draw comparisons between the recent financial crisis and GMOs, claiming the latter represent another “Too Big to Fail” crisis in waiting. Unfortunately, Taleb’s latest contribution is nothing more than the same sort of evidence-free bombast posing as thoughtful analysis. The result is uninformed and/or unintelligible gibberish….

“In nature, errors stay confined and, critically, isolated.” Ebola, anyone? Avian flu? Or, for examples that are not “in nature” but the “small step” changes Spitznagel and Taleb seem to prefer, how about the introduction of hybrid rice plants into parts of Asia that have led to widespread outcrossing to and increased weediness in wild red rices? Or kudzu? Again, this seems like a bold statement designed to impress. But it is completely untethered to any understanding of what actually occurs in nature or the history of non-genetically engineered crop introductions….

“[T]he risk of G.M.O.s are more severe than those of finance. They can lead to complex chains of unpredictable changes in the ecosystem, while the methods of risk management with G.M.O.s — unlike finance, where some effort was made — are not even primitive.” Again, the authors evince no sense that they understand how extensively breeders have been altering the genetic composition of plants and other organisms for the past century, or what types of risk management practices have evolved to coincide.

In fact, compared with the wholly voluntary (and yet quite robust) risk management practices that are relied upon to manage introductions of mutant varieties, somaclonal variants, wide crosses, and the products of cell fusion, the legally obligatory risk management practices used for genetically engineered plant introductions are vastly over-protective.

In the end, Spitznagel and Taleb’s argument boils down to a claim that ecosystems are complex and rDNA modification seems pretty mysterious to them, so nobody could possibly understand it. Until they can offer some arguments that take into consideration what we actually know about genetic modification of organisms (by various methods) and why we should consider rDNA modification uniquely risky when other methods result in even greater genetic changes, the rest of us are entitled to ignore them. [“More Unintelligible Gibberish on GMO Risks from Nicholas Nassim Taleb,” Competitive Enterprise Institute, July 16, 2015]

And despite my enjoyment of Taleb’s red-meat commentary about IYIs, I have to admit that I’ve had my fill of his probabilistic gibberish. This is from “Fooled by Non-Randomness,” which I wrote seven years ago about Taleb’s Fooled by Randomness:

The first reason that I am unfooled by Fooled… might be called a meta-reason. Standing back from the book, I am able to perceive its essential defect: According to Taleb, human affairs — especially economic affairs, and particularly the operations of financial markets — are dominated by randomness. But if that is so, only a delusional person can truly claim to understand the conduct of human affairs. Taleb claims to understand the conduct of human affairs. Taleb is therefore either delusional or omniscient.

Given Taleb’s humanity, it is more likely that he is delusional — or simply fooled, but not by randomness. He is fooled because he proceeds from the assumption of randomness instead of exploring the ways and means by which humans are actually capable of shaping events. Taleb gives no more than scant attention to those traits which, in combination, set humans apart from other animals: self-awareness, empathy, forward thinking, imagination, abstraction, intentionality, adaptability, complex communication skills, and sheer brain power. Given those traits (in combination) the world of human affairs cannot be random. Yes, human plans can fail of realization for many reasons, including those attributable to human flaws (conflict, imperfect knowledge, the triumph of hope over experience, etc.). But the failure of human plans is due to those flaws — not to the randomness of human behavior.

What Taleb sees as randomness is something else entirely. The trajectory of human affairs often is unpredictable, but it is not random. For it is possible to find patterns in the conduct of human affairs, as Taleb admits (implicitly) when he discusses such phenomena as survivorship bias, skewness, anchoring, and regression to the mean….

[I define] random events as events which are repeatable, convergent on a limiting value, and truly patternless over a large number of repetitions. Evolving economic events (e.g., stock-market trades, economic growth) are not alike (in the way that dice are, for example), they do not converge on limiting values, and they are not patternless, as I will show.

In short, Taleb fails to demonstrate that human affairs in general or financial markets in particular exhibit randomness, properly understood….

A bit of unpredictability (or “luck”) here and there does not make for a random universe, random lives, or random markets. If a bit of unpredictability here and there dominated our actions, we wouldn’t be here to talk about randomness — and Taleb wouldn’t have been able to marshal his thoughts into a published, marketed, and well-sold book.

Human beings are not “designed” for randomness. Human endeavors can yield unpredictable results, but those results do not arise from random processes, they derive from skill or the lack thereof, knowledge or the lack thereof (including the kinds of self-delusions about which Taleb writes), and conflicting objectives….

No one believes that Ty Cobb, Babe Ruth, Ted Williams, Christy Mathewson, Warren Spahn, and the dozens of other baseball players who rank among the truly great were lucky. No one believes that the vast majority of the tens of thousands of minor leaguers who never enjoyed more than the proverbial cup of coffee were unlucky. No one believes that the vast majority of the millions of American males who never made it to the minor leagues were unlucky. Most of them never sought a career in baseball; those who did simply lacked the requisite skills.

In baseball, as in life, “luck” is mainly an excuse and rarely an explanation. We prefer to apply “luck” to outcomes when we don’t like the true explanations for them. In the realm of economic activity and financial markets, one such explanation … is the exogenous imposition of governmental power….

Given what I have said thus far, I find it almost incredible that anyone believes in the randomness of financial markets. It is unclear where Taleb stands on the random-walk hypothesis, but it is clear that he believes financial markets to be driven by randomness. Yet, contradictorily, he seems to attack the efficient-markets hypothesis (see pp. 61-62), which is the foundation of the random-walk hypothesis.

What is the random-walk hypothesis? In brief, it is this: Financial markets are so efficient that they instantaneously reflect all information bearing on the prices of financial instruments that is then available to persons buying and selling those instruments….

When we step back from day-to-day price changes, we are able to see the underlying reality: prices (instead of changes) and price trends (which are the opposite of randomness). This (correct) perspective enables us to see that stock prices (on the whole) are not random, and to identify the factors that influence the broad movements of the stock market. For one thing, if you look at stock prices correctly, you can see that they vary cyclically….

[But] the long-term trend of the stock market (as measured by the S&P 500) is strongly correlated with GDP. And broad swings around that trend can be traced to governmental intervention in the economy….

The wild swings around the trend line began in the uncertain aftermath of World War I, which saw the imposition of production and price controls. The swings continued with the onset of the Great Depression (which can be traced to governmental action), the advent of the anti-business New Deal, and the imposition of production and price controls on a grand scale during World War II. The next downswing was occasioned by the culmination of the Great Society, the “oil shocks” of the early 1970s, and the raging inflation that was touched off by — you guessed it — government policy. The latest downswing is owed mainly to the financial crisis born of yet more government policy: loose money and easy loans to low-income borrowers.

And so it goes, wildly but predictably enough if you have the faintest sense of history. The moral of the story: Keep your eye on government and a hand on your wallet.

There is randomness in economic affairs, but they are not dominated by randomness. They are dominated by intentions, including especially the intentions of the politicians and bureaucrats who run governments. Yet, Taleb has no space in his book for the influence of their deeds on economic activity and financial markets.

Taleb is right to disparage those traders (professional and amateur) who are lucky enough to catch upswings, but are unprepared for downswings. And he is right to scoff at their readiness to believe that the current upswing (uniquely) will not be followed by a downswing (“this time it’s different”).

But Taleb is wrong to suggest that traders are fooled by randomness. They are fooled to some extent by false hope, but more profoundly by their inability to perceive the economic damage wrought by government. They are not alone, of course; most of the rest of humanity shares their perceptual failings.

Taleb, in that respect, is only somewhat different than most of the rest of humanity. He is not fooled by false hope, but he is fooled by non-randomness — the non-randomness of government’s decisive influence on economic activity and financial markets. In overlooking that influence he overlooks the single most powerful explanation for the behavior of markets in the past 90 years.

I followed up a few days later with “Randomness Is Over-Rated”:

What we often call random events in human affairs really are non-random events whose causes we do not and, in some cases, cannot know. Such events are unpredictable, but they are not random….

Randomness … is found in (a) the results of non-intentional actions, where (b) we lack sufficient knowledge to understand the link between actions and results.

It is unreasonable to reduce intentional human behavior to probabilistic formulas. Humans don’t behave like dice, roulette balls, or similar “random” devices. But that is what Taleb (and others) do when they ascribe unusual success in financial markets to “luck.”…

I say it again: The most successful professionals are not successful because of luck, they are successful because of skill. There is no statistically predetermined percentage of skillful traders; the actual percentage depends on the skills of entrants and their willingness (if skillful) to make a career of it….

The outcomes of human endeavor are skewed because the distribution of human talents is skewed. It would be surprising to find as many as one-half of traders beating the long-run average performance of the various markets in which they operate….

[Taleb] sees an a priori distribution of “winners” and “losers,” where “winners” are determined mainly by luck, not skill. Moreover, we — the civilians on the sidelines — labor under a false impression about the relative number of “winners”….

[T]here are no “odds” favoring success — even in financial markets. Financial “players” do what they can do, and most of them — like baseball players — simply don’t have what it takes for great success. Outcomes are skewed, not because of (fictitious) odds but because talent is distributed unevenly.

The real lesson … is not to assume that the “winners” are merely lucky. No, the real lesson is to seek out those “winners” who have proven their skills over a long period of time, through boom and bust and boom and bust.

Those who do well, over the long run, do not do so merely because they have survived. They have survived because they do well.

There’s much more, and you should read the whole thing(s), as they say.
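
Before turning back to Taleb, the distinction drawn in those excerpts — between a genuinely random process, which converges on a limiting value, and a merely unpredictable one, which can trend — is easy to illustrate with a simulation. This is only a sketch, with made-up parameters (a fair die and a hypothetical drift of 0.01 per step), not a model of any market:

    # A genuinely random, repeatable process (fair-die rolls) converges on a
    # limiting value; a process with even a small built-in drift does not.
    # The drift of 0.01 per step is purely hypothetical.
    import random

    random.seed(1)
    n = 100_000

    rolls = [random.randint(1, 6) for _ in range(n)]
    print(f"average of {n:,} die rolls: {sum(rolls) / n:.3f} (limiting value: 3.5)")

    level = 0.0
    for _ in range(n):
        level += random.gauss(0.0, 1.0) + 0.01   # noise plus a small drift
    print(f"drifting series after {n:,} steps: {level:,.1f} (no limiting value)")

The first series is “random” in the sense defined above; the second is unpredictable from step to step but unmistakably patterned over the long run.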

I turn now to Taleb’s version of the precautionary principle, which seems tailored to the conclusion that Taleb wants to reach, namely, that GMOs should be banned. Who gets to decide which “threats” fall within the “limited set of contexts” where the PP applies? Taleb, of course. Taleb has excreted a circular pile of horse manure; thus:

  • The PP applies only where I (Taleb) say it applies.
  • I (Taleb) say that the PP applies to GMOs.
  • Therefore, the PP applies to GMOs.

I (the proprietor of this blog) say that the PP ought to apply to the works of Nassim Nicholas Taleb. They ought to be banned because they may perniciously influence gullible readers.

I’ll justify my facetious proposal to ban Taleb’s writings by working my way through the “logic” of what Taleb calls the non-naive version of the PP, on which he bases his anti-GMO stance. Here are the main points of Taleb’s argument, extracted from “The Precautionary Principle (With Application to the Genetic Modification of Organisms).” Taleb’s statements (with minor, non-substantive elisions) are in roman type, followed by my comments in bold type.

The purpose of the PP is to avoid a certain class of what, in probability and insurance, is called “ruin” problems. A ruin problem is one where outcomes of risks have a non-zero probability of resulting in unrecoverable losses. An often-cited illustrative case is that of a gambler who loses his entire fortune and so cannot return to the game. In biology, an example would be a species that has gone extinct. For nature, “ruin” is ecocide: an irreversible termination of life at some scale, which could be planetwide.

The extinction of a species is ruinous only if one believes that species shouldn’t become extinct. But they do, because that’s the way nature works. Ruin, as Taleb means it, is avoidable, self-inflicted, and (at some point) irreversibly catastrophic. Let’s stick to that version of it.

Our concern is with public policy. While an individual may be advised to not “bet the farm,” whether or not he does so is generally a matter of individual preferences. Policy makers have a responsibility to avoid catastrophic harm for society as a whole; the focus is on the aggregate, not at the level of single individuals, and on global systemic, not idiosyncratic, harm. This is the domain of collective “ruin” problems.

This assumes that government can do something about a potentially catastrophic harm — or should do something about it. The Great Depression, for example, began as a potentially catastrophic harm that government made into a real catastrophic harm (for millions of Americans, though not all of them) and prolonged through its actions. Here Taleb commits the nirvana fallacy by implicitly ascribing to government the power to anticipate harm without making a Type I or Type II error, and then to take appropriate and effective action to prevent or ameliorate that harm.

By the ruin theorems, if you incur a tiny probability of ruin as a “one-off” risk, survive it, then do it again (another “one-off” deal), you will eventually go bust with probability 1. Confusion arises because it may seem that the “one-off” risk is reasonable, but that also means that an additional one is reasonable. This can be quantified by recognizing that the probability of ruin approaches 1 as the number of exposures to individually small risks, say one in ten thousand, increases. For this reason a strategy of risk taking is not sustainable and we must consider any genuine risk of total ruin as if it were inevitable.

But you have to know in advance that a particular type of risk will be ruinous. Which means that — given the uncertainty of such knowledge — the perception of (possible) ruin is in the eye of the assessor. (I’ll have a lot more to say about uncertainty.)
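
For what it’s worth, the arithmetic behind the quoted “ruin theorem” claim is elementary: if every exposure carries an independent ruin probability p, the chance of surviving n exposures is (1 − p)^n, which shrinks toward zero as n grows. A minimal sketch, using the illustrative one-in-ten-thousand figure from the passage above (the arithmetic is not in dispute; the assumption that p is known is):

    # Ruin arithmetic from the quoted passage: with an independent ruin
    # probability p per exposure, P(ruin within n exposures) = 1 - (1 - p)**n.
    p = 1 / 10_000        # Taleb's illustrative per-exposure figure
    for n in (1_000, 10_000, 100_000, 1_000_000):
        print(f"n = {n:>9,}: probability of ruin = {1 - (1 - p) ** n:.4f}")
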

A way to formalize the ruin problem in terms of the destructive consequences of actions identifies harm as not about the amount of destruction, but rather a measure of the integrated level of destruction over the time it persists. When the impact of harm extends to all future times, i.e. forever, then the harm is infinite. When the harm is infinite, the product of any non-zero probability and the harm is also infinite, and it cannot be balanced against any potential gains, which are necessarily finite.

As discussed below, the concept of probability is inapplicable here. Further, and granting the use of probability for the sake of argument, Taleb’s contention holds only if there’s no doubt that the harm will be infinite, that is, totally ruinous. If there’s room for doubt, there’s room for disagreement as to the extent of the harm (if any) and the value of attempting to counter it (or not). Otherwise, it would be “rational” to devote as much as the entire economic output of the world to combat so-called catastrophic anthropogenic global warming (CAGW) because some “expert” says that there’s a non-zero probability of its occurrence. In practical terms, the logic of such a policy is that if you’re going to die of heat stroke, you might as well do it sooner rather than later — which would be one of the consequences of, say, banning the use of fossil fuels. Other consequences would be freezing to death if you live in a cold climate and starving to death because foodstuffs couldn’t be grown, harvested, processed, or transported. Those are also infinite harms, and they arise from Taleb’s preferred policy of acting on little information about a risk because (in someone’s view) it could lead to infinite harm. There’s a relevant cost-benefit analysis for you.

Because the “cost” of ruin is effectively infinite, cost-benefit analysis (in which the potential harm and potential gain are multiplied by their probabilities and weighed against each other) is no longer a useful paradigm.

Here, Taleb displays profound ignorance in two fields: economics and probability. His ignorance of economics might be excusable, but his ignorance of probability isn’t, inasmuch as he’s made a name for himself (and probably a lot of money) by parading his “sophisticated” understanding of it in books and on the lecture circuit.

Regarding the economics of cost-benefit analysis (CBA), it’s properly an exercise for individual persons and firms, not governments. When a government undertakes CBA, it implicitly (and arrogantly) assumes that the costs of a project (which are defrayed in the end by taxpayers) can be weighed against the monetary benefits of the project (which aren’t distributed in proportion to the costs and are often deliberately distributed so that taxpayers bear most of the costs and non-taxpayers reap most of the benefits).

Regarding probability, Taleb quite wrongly insists on ascribing probabilities to events that might (or might not) occur in the future. A probability is a statement about a very large number of like events, each of which has an unpredictable (random) outcome. A valid probability is based either on a large number of past “trials” or on a mathematical certainty (e.g., a fair coin, tossed a large number of times — 100 or more — will come up heads about half the time and tails about half the time). Probability, properly understood, says nothing about the outcome of an individual future event; that is, it says nothing about what will happen next in a truly random trial, such as a coin toss. Probability certainly says nothing about the occurrence of a unique event. Therefore, Taleb cannot validly assign a probability of “ruin” to a speculative event as little understood (by him) as the effect of GMOs on the world’s food supply.
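
The coin-toss point can be put in code. A minimal sketch (a simulated fair coin, nothing more): the relative frequency of heads settles near one-half as the tosses pile up, yet nothing in that long-run frequency says what the next toss will be, let alone what a one-off, never-repeated event will do.

    # Probability as long-run relative frequency: the share of heads in many
    # simulated fair-coin tosses settles near 0.5, but the next toss -- like
    # any single future event -- remains unpredictable.
    import random

    random.seed(42)
    heads = 0
    for i in range(1, 100_001):
        heads += random.random() < 0.5
        if i in (100, 1_000, 10_000, 100_000):
            print(f"after {i:>7,} tosses: share of heads = {heads / i:.4f}")
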

The non-naive PP bridges the gap between precaution and evidentiary action using the ability to evaluate the difference between local and global risks.

In other words, if there’s a subjective, non-zero probability of CAGW in Taleb’s mind, that probability should outweigh the evidence against CAGW. And such evidence is ample, not only in the various scientific fields that impinge on climatology, but also in the failure of almost all climate models to predict the long pause in what’s called global warming. Ah, but “almost all” — in Taleb’s mind — means that there’s a non-zero probability of CAGW. It’s the “heads I win, tails you lose” method of gambling on the flip of a coin.

Here’s another way of putting it: Taleb turns the scientific method upside down by rejecting the null hypothesis (e.g., no CAGW) on the basis of evidence that confirms it (no observable rise in temperatures approaching the predictions of CAGW theory) because a few predictions happened to be close to the truth. Taleb, in his guise as the author of Fooled by Randomness, would correctly label such predictions as lucky.

While evidentiary approaches are often considered to reflect adherence to the scientific method in its purest form, it is apparent that these approaches do not apply to ruin problems. In an evidentiary approach to risk (relying on evidence-based methods), the existence of a risk or harm occurs when we experience that risk or harm. In the case of ruin, by the time evidence comes it will by definition be too late to avoid it. Nothing in the past may predict one fatal event. Thus standard evidence-based approaches cannot work.

It’s misleading to say that “by the time the evidence comes it will be by definition too late to avoid it.” Taleb assumes, without proof, that the linkage between GMOs, say, and a worldwide food crisis will occur suddenly and without warning (or sufficient warning), as if GMOs will be everywhere at once and no one will have been paying attention to their effects as their use spread. That’s unlikely given broad disparities in the distribution of GMOs, the state of vigilance about them, and resistance to them in many quarters. What Taleb really says is this: Some people (Taleb among them) believe that GMOs pose an existential risk with a probability greater than zero. (Any such “probability” is fictional, as discussed above.) Therefore, the risk of ruin from GMOs is greater than zero and ruin is inevitable. By that logic, there must be dozens of certain-death scenarios for the planet. Why is Taleb wasting his time on GMOs, which are small potatoes compared with, say, asteroids? And why don’t we just slit our collective throat and get it over with?

Since there are mathematical limitations to predictability of outcomes in a complex system, the central issue to determine is whether the threat of harm is local (hence globally benign) or carries global consequences. Scientific analysis can robustly determine whether a risk is systemic, i.e. by evaluating the connectivity of the system to propagation of harm, without determining the specifics of such a risk. If the consequences are systemic, the associated uncertainty of risks must be treated differently than if it is not. In such cases, precautionary action is not based on direct empirical evidence but on analytical approaches based upon the theoretical understanding of the nature of harm. It relies on probability theory without computing probabilities. The essential question is whether or not global harm is possible or not.

More of the same.

Everything that has survived is necessarily non-linear to harm. If I fall from a height of 10 meters I am injured more than 10 times than if I fell from a height of 1 meter, or more than 1000 times than if I fell from a height of 1 centimeter, hence I am fragile. In general, every additional meter, up to the point of my destruction, hurts me more than the previous one.

This explains the necessity of considering scale when invoking the PP. Polluting in a small way does not warrant the PP because it is essentially less harmful than polluting in large quantities, since harm is non-linear.

This is just a way of saying that there’s a threshold of harm, and harm becomes ruinous when the threshold is surpassed. Which is true in some cases, but there’s a wide variety of cases and a wide range of thresholds. This is just a framing device meant to set the reader up for the sucker punch, which is that the widespread use of GMOs will be ruinous, at some undefined point. Well, we’ve been hearing that about CAGW for twenty years, and the undefined point keeps receding into the indefinite future.
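
Taleb’s fall-from-height example amounts to saying that the harm function is convex: scaling up the impact scales up the damage more than proportionally. A toy sketch with a purely illustrative quadratic harm function makes the arithmetic plain (the exponent is an assumption for illustration, not a claim about actual injuries):

    # Convex (non-linear) harm, per the fall-from-height example: with a
    # purely illustrative quadratic harm function, a 10x bigger impact does
    # far more than 10x the damage.
    def harm(height_m: float) -> float:
        return height_m ** 2

    print(harm(10) / harm(1))      # 100.0     -- "more than 10 times"
    print(harm(10) / harm(0.01))   # 1000000.0 -- "more than 1000 times"

Whether a given activity really sits on such a curve, and where the threshold of irrecoverable harm lies, is precisely what the comment above says is in question.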

Thus, when impacts extend to the size of the system, harm is severely exacerbated by non-linear effects. Small impacts, below a threshold of recovery, do not accumulate for systems that retain their structure. Larger impacts cause irreversible damage. We should be careful, however, of actions that may seem small and local but then lead to systemic consequences.

“When impacts extend to the size of the system” means “when ruin is upon us there is ruin.” It’s a tautology without empirical content.

An increase in uncertainty leads to an increase in the probability of ruin, hence “skepticism” is that its impact on decisions should lead to increased, not decreased conservatism in the presence of ruin. Hence skepticism about climate models should lead to more precautionary policies.

This is through the looking glass and into the wild blue yonder. More below.

The rest of the paper is devoted to two things. One of them is making the case against GMOs because they supposedly exemplify the kind of risk that’s covered by the non-naive PP. I’ll let Jon Entine and Gregory Conko (quoted above) speak for me on that issue.

The other thing that the rest of the paper does is to spell out and debunk ten supposedly fallacious arguments against PP. I won’t go into them here because Taleb’s version of PP is self-evidently fallacious. The fallacy can be found in figure 6 of the paper:

[Figure 6 from the paper: two normal distributions representing greater and lesser uncertainty]

Taleb pulls an interesting trick here — or perhaps he exposes his fundamental ignorance about probability. Let’s take it a step at a time:

  1. Figure 6 depicts two normal distributions. But what are they normal distributions of? Let’s say that they’re supposed to be normal distributions of the probability of the occurrence of CAGW (however that might be defined) by a certain date, in the absence of further steps to mitigate it (e.g., banning the use of fossil fuels forthwith). There’s no known normal distribution of the probability of CAGW because, as discussed above, CAGW is a unique, hypothesized (future) event which cannot have a probability. It’s not 100 tosses of a fair coin.
  2. The curves must therefore represent something about models that predict the arrival of CAGW by a certain date. Perhaps those predictions are normally distributed, though that has nothing to do with the “probability” of CAGW if all of the predictions are wrong.
  3. The two curves shown in Taleb’s figure 6 are meant (by Taleb) to represent greater and lesser certainty about the arrival of CAGW (or the ruinous scenario of his choice), as depicted by climate models.
  4. But if models are adjusted or built anew in the face of evidence about their shortcomings (i.e., their gross overprediction of temperatures since 1998), the newer models (those with presumably greater certainty) will have two characteristics: (a) The tails will be thinner, as Taleb suggests. (b) The mean will shift to the left or right; that is, they won’t have the same mean as the older models.
  5. In the case of CAGW, the mean will shift to the right because it’s already known that extant models overstate the risk of “ruin.” The left tail of the distribution of the new models will therefore shift to the right, further reducing the “probability” of CAGW.
  6. Taleb’s trick is to ignore that shift and, further, to implicitly assume that the two distributions coexist. By doing that he can suggest that there’s an “increase in uncertainty [that] leads to an increase in the probability of ruin.” In fact, there’s a decrease in uncertainty, and therefore a decrease in the probability of ruin.

I’ll say it again: As evidence is gathered, there is less uncertainty; that is, the high-uncertainty condition precedes the low-uncertainty one. The movement from high uncertainty to low uncertainty would result in the assignment of a lower probability to a catastrophic outcome (assuming, for the moment, that such a probability is meaningful). And that would be a good reason to worry less about the eventuality of the catastrophic outcome. Taleb wants to compare the two distributions, as if the earlier one (based on little evidence) were as valid as the later one (based on additional evidence).
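
Items 4 through 6 can be made concrete with made-up numbers. Put the “ruin” region in the left tail, then compare an earlier, high-uncertainty distribution with a later one whose mean has shifted away from ruin and whose tails are thinner: the probability mass assigned to ruin collapses. A minimal sketch (every number below is hypothetical):

    # Hypothetical illustration of the figure-6 point: a better-informed
    # distribution (mean shifted away from the ruin region, smaller sigma)
    # assigns far less probability to the catastrophic left tail.
    from statistics import NormalDist

    ruin_threshold = 0.0                      # hypothetical cutoff for "ruin"
    older = NormalDist(mu=1.0, sigma=1.5)     # little evidence, high uncertainty
    newer = NormalDist(mu=2.0, sigma=0.5)     # more evidence, less uncertainty

    for name, dist in (("older", older), ("newer", newer)):
        print(f"{name} distribution: P(ruin) = {dist.cdf(ruin_threshold):.6f}")

Taleb’s comparison, as described above, holds the mean fixed and treats the two curves as if they were equally valid at the same time; let the mean shift as the evidence dictates and the supposed “increase in the probability of ruin” disappears.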

That’s why Taleb counsels against “evidentiary approaches.” In Taleb’s worldview, knowing little about a potential risk to health, welfare, and existence is a good reason to take action with respect to that risk. Therefore, if you know little about the risk, you should act immediately and with all of the resources at your disposal. Why? Because the risk might suddenly cause an irreversible calamity. But that’s not true of CAGW or GMOs. There’s time to gather evidence as to whether there’s truly a looming calamity, and then — if necessary — to take steps to avert it, steps that are more likely to be effective because they’re evidence-based. Further, if there’s not a looming calamity, a tremendous waste of resources will be averted.

It follows from the non-naive PP — as interpreted by Taleb — that all human beings should be sterilized and therefore prevented from procreating. This is so because sometimes just a few human beings — Hitler, Mussolini, and Tojo, for example — can cause wars. And some of those wars have harmed human beings on a nearly global scale. Global sterilization is therefore necessary, to ensure against the birth of new Hitlers, Mussolinis, and Tojos — even if it prevents the birth of new Schweitzers, Salks, Watsons, Cricks, and Mother Teresas.

In other words, the non-naive PP (or Taleb’s version of it) is pseudo-scientific claptrap. It can be used to justify any extreme and nonsensical position that its user wishes to advance. It can be summed up in an Orwellian sentence: There is certainty in uncertainty.

Perhaps this is better: You shouldn’t get out of bed in the morning because you don’t know with certainty everything that will happen to you in the course of the day.

*     *     *

NOTE: The title of Jon Entine’s blog post, quoted above, refers to Taleb as a “dangerous imbecile.” Here’s Entine’s explanation of that characterization:

If you think the headline of this blog [post] is unnecessarily inflammatory, you are right. It’s an ad hominem way to deal with public discourse, and it’s unfair to Nassim Taleb, the New York University statistician and risk analyst. I’m using it to make a point–because it’s Taleb himself who regularly invokes such ugly characterizations of others….

…Taleb portrays GMOs as a ‘catastrophe in waiting’–and has taken to personally lashing out at those who challenge his conclusions–and yes, calling them “imbeciles” or paid shills.

He recently accused Anne Glover, the European Union’s Chief Scientist, and one of the most respected scientists in the world, of being a “dangerous imbecile” for arguing that GM crops and foods are safe and that Europe should apply science based risk analysis to the GMO approval process–views reflected in summary statements by every major independent science organization in the world.

Taleb’s ugly comment was gleefully and widely circulated by anti-GMO activist web sites. GMO Free USA designed a particularly repugnant visual to accompany their post.

Taleb is known for his disagreeable personality–as Keith Kloor at Discover noted, the economist Noah Smith had called Taleb a “vulgar bombastic windbag”, adding, “and I like him a lot”. He has a right to flaunt an ego bigger than the Goodyear blimp. But that doesn’t make his argument any more persuasive.

*     *     *

Related posts:
“Warmism”: The Myth of Anthropogenic Global Warming
More Evidence against Anthropogenic Global Warming
Yet More Evidence against Anthropogenic Global Warming
Pascal’s Wager, Morality, and the State
Modeling Is Not Science
Fooled by Non-Randomness
Randomness Is Over-Rated
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
Beware the Rare Event
Demystifying Science
Pinker Commits Scientism
AGW: The Death Knell
The Pretence of Knowledge
“The Science Is Settled”
“Settled Science” and the Monty Hall Problem
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II)
Is Science Self-Correcting?