Science and Understanding

Special Relativity II: A Fatal Flaw?

This post revisits some of the arguments in “Special Relativity: Answers and Questions,” and introduces some additional considerations. I quote extensively from Einstein’s Relativity: The Special and General Theory (1916, translated 1920).

Einstein begins with a discussion of the coordinates of space in Euclidean geometry, then turns to space and time in classical mechanics. In the following passage he refers to an expository scenario that recurs throughout the book, the passage of a railway carriage (train car) along an embankment:

In order to have a complete description of the motion [of a body], we must specify how the body alters its position with time; i.e. for every point on the trajectory it must be stated at what time the body is situated there. These data must be supplemented by such a definition of time that, in virtue of this definition, these time-values can be regarded essentially as magnitudes (results of measurements) capable of observation. If we take our stand on the ground of classical mechanics, we can satisfy this requirement for our illustration in the following manner. We imagine two clocks of identical construction; the man at the railway-carriage window is holding one of them, and the man on the footpath [of the embankment] the other. Each of the observers determines the position on his own reference-body occupied by the stone at each tick of the clock he is holding in his hand. In this connection we have not taken account of the inaccuracy involved by the finiteness of the velocity of propagation of light.

To get to that inaccuracy, Einstein begins with this:

Let us suppose our old friend the railway carriage to be travelling along the rails with a constant velocity v, and that a man traverses the length of the carriage in the direction of travel with a velocity w. How quickly, or, in other words, with what velocity W does the man advance relative to the embankment [on which the rails rest] during the process? The only possible answer seems to result from the following consideration: If the man were to stand still for a second, he would advance relative to the embankment through a distance v equal numerically to the velocity of the carriage. As a consequence of his walking, however, he traverses an additional distance w relative to the carriage, and hence also relative to the embankment, in this second, the distance w being numerically equal to the velocity with which he is walking. Thus in total he covers the distance W = v + w relative to the embankment in the second considered.

This is the theorem of the addition of velocities from classical physics. Why doesn’t it apply to light? Einstein continues:

If a ray of light be sent along the embankment [in an assumed vacuum], … the tip of the ray will be transmitted with the velocity c relative to the embankment. Now let us suppose that our railway carriage is again travelling along the railway lines with the velocity v, and that its direction is the same as that of the ray of light, but its velocity of course much less. Let us inquire about the velocity of propagation of the ray of light relative to the carriage. It is obvious that we can here apply the consideration of the previous section, since the ray of light plays the part of the man walking along relatively to the carriage. The velocity W of the man relative to the embankment is here replaced by the velocity of light relative to the embankment. w is the required velocity of light with respect to the carriage, and we have

w = c − v.

The velocity of propagation of a ray of light relative to the carriage thus comes out smaller than c.

Let’s take that part a bit more slowly than Einstein does. The question is the velocity with which the ray of light is traveling relative to the carriage. The man in the previous example was walking in the carriage with velocity w relative to the body of the carriage, and therefore with velocity W relative to the embankment (v + w). If the ray of light is traveling at c relative to the embankment, and the carriage is traveling at v relative to the embankment, then by analogy it would seem that the velocity of the ray of light relative to the carriage should be the velocity of light minus the velocity of the carriage, that is, c – v. (Einstein invites some confusion by reusing w, which earlier denoted the man’s velocity relative to the carriage, for this hypothetical velocity of light relative to the carriage.)

It would thus seem that the velocity of a ray of light emitted from the railway carriage should be c + v relative to a person standing still on the embankment. That is, light would travel faster than c when it’s emitted from an object moving in a forward direction relative to an observer. But all objects are in relative motion, even hypothetically stationary ones such as the railway embankment of Einstein’s thought experiment. Light would therefore move at many different velocities, all of them varying from c according to the motion of each observer relative to the source of light; that is, some observers would detect velocities greater than c, while others would detect velocities less than c.
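A short sketch in Python makes the classical rule concrete. The observer speeds are arbitrary choices for illustration; the point is only that the Galilean rule would make the measured speed of light depend on the observer’s motion:

# If the Galilean addition theorem applied to light, an observer's motion
# toward or away from the source would change the speed he measures.
C = 299_792_458.0   # speed of light in a vacuum, meters per second

def galilean_light_speed(closing_velocity):
    """Measured speed of a light ray if the classical rule c + v applied.
    closing_velocity > 0 means the observer moves toward the source."""
    return C + closing_velocity

# Arbitrary observers (speeds in meters per second), for illustration only.
for label, v in [("at rest relative to the source", 0.0),
                 ("moving toward the source at 30 km/s", 30_000.0),
                 ("moving away from the source at 30 km/s", -30_000.0)]:
    print(f"{label}: {galilean_light_speed(v):,.0f} m/s")
# Three observers, three different measured speeds of light.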

But this can’t happen (supposedly). Einstein puts it this way:

In view of this dilemma there appears to be nothing else for it than to abandon either the [old] principle of relativity [the addition of velocities] or the simple law of the propagation of light in vacuo [that c is the same for all observers, regardless of their relative motion]. Those of you who have carefully followed the preceding discussion are almost sure to expect that we should retain the [old] principle of relativity, which appeals so convincingly to the intellect because it is so natural and simple. The law of the propagation of light in vacuo would then have to be replaced by a more complicated law conformable to the [old] principle of relativity. The development of theoretical physics shows, however, that we cannot pursue this course. The epoch-making theoretical investigations of H.A.Lorentz on the electrodynamical and optical phenomena connected with moving bodies show that experience in this domain leads conclusively to a theory of electromagnetic phenomena, of which the law of the constancy of the velocity of light in vacuo is a necessary consequence…..

[I]n reality there is not the least incompatibility between the principle of relativity and the law of propagation of light, and that by systematically holding fast to both these laws a logically rigid theory could be arrived at. This theory has been called the special theory of relativity….

Einstein gets to the special theory of relativity (STR) by next considering the problem of simultaneity:

Lightning has struck the rails on our railway embankment at two places A and B far distant from each other. I make the additional assertion that these two lightning flashes occurred simultaneously. If now I ask you whether there is sense in this statement, you will answer my question with a decided “Yes.” But if I now approach you with the request to explain to me the sense of the statement more precisely, you find after some consideration that the answer to this question is not so easy as it appears at first sight.

After thinking the matter over for some time you then offer the following suggestion with which to test simultaneity. By measuring along the rails, the connecting line AB should be measured up and an observer placed at the mid-point M of the distance AB. This observer should be supplied with an arrangement (e.g. two mirrors inclined at 90°) which allows him visually to observe both places A and B at the same time. If the observer perceives the two flashes of lightning at the same time, then they are simultaneous.

I am very pleased with this suggestion, but for all that I cannot regard the matter as quite settled, because I feel constrained to raise the following objection: “Your definition would certainly be right, if I only knew that the light by means of which the observer at M perceives the lightning flashes travels along the length A → M with the same velocity as along the length B → M. But an examination of this supposition would only be possible if we already had at our disposal the means of measuring time. It would thus appear as though we were moving here in a logical circle.”

After further consideration you cast a somewhat disdainful glance at me— and rightly so— and you declare: “I maintain my previous definition nevertheless, because in reality it assumes absolutely nothing about light. There is only one demand to be made of the definition of simultaneity, namely, that in every real case it must supply us with an empirical decision as to whether or not the conception that has to be defined is fulfilled. That my definition satisfies this demand is indisputable. That light requires the same time to traverse the path A → M as for the path B → M is in reality neither a supposition nor a hypothesis about the physical nature of light, but a stipulation which I can make of my own freewill in order to arrive at a definition of simultaneity.”

It is clear that this definition can be used to give an exact meaning not only to two events, but to as many events as we care to choose, and independently of the positions of the scenes of the events with respect to the body of reference (here the railway embankment). We are thus led also to a definition of “time” in physics. For this purpose we suppose that clocks of identical construction are placed at the points A, B and C of the railway line (co-ordinate system), and that they are set in such a manner that the positions of their pointers are simultaneously (in the above sense) the same. Under these conditions we understand by the “time” of an event the reading (position of the hands) of that one of these clocks which is in the immediate vicinity (in space) of the event. In this manner a time-value is associated with every event which is essentially capable of observation.

This stipulation contains a further physical hypothesis, the validity of which will hardly be doubted without empirical evidence to the contrary. It has been assumed that all these clocks go at the same rate if they are of identical construction. Stated more exactly: When two clocks arranged at rest in different places of a reference-body are set in such a manner that a particular position of the pointers of the one clock is simultaneous (in the above sense) with the same position of the pointers of the other clock, then identical “settings” are always simultaneous (in the sense of the above definition).

In other words, time is the same for every point in a frame of reference, which can be thought of as a group of points that remain in a fixed spatial relationship. Every such point in that frame of reference can have a clock associated with it; every clock can be set to the same time; and every clock (assuming great precision) will run at the same rate. When it is noon at one point in the frame of reference, it will be noon at all points in the frame of reference. And when the clock at one point has advanced from noon to 1 p.m., the clocks at all points in the same frame of reference will have advanced from noon to 1 p.m., and so on, ad infinitum.

As Einstein puts it later,

Events which are simultaneous with reference to the embankment are not simultaneous with respect to the train, and vice versa (relativity of simultaneity). Every reference body (co-ordinate system) has its own particular time; unless we are told the reference-body to which the statement of time refers, there is no meaning in a statement of the time of an event.

Returning to the question of simultaneity, Einstein poses his famous thought experiment:

Up to now our considerations have been referred to a particular body of reference, which we have styled a “railway embankment.” We suppose a very long train travelling along the rails with the constant velocity v and in the direction indicated in Fig. 1. People travelling in this train will with advantage use the train as a rigid reference-body (co-ordinate system); they regard all events in reference to the train. Then every event which takes place along the line also takes place at a particular point of the train. Also the definition of simultaneity can be given relative to the train in exactly the same way as with respect to the embankment. As a natural consequence, however, the following question arises:

Are two events ( e.g. the two strokes of lightning A and B) which are simultaneous with reference to the railway embankment also simultaneous relatively to the train? We shall show directly that the answer must be in the negative.


FIG. 1.

When we say that the lightning strokes A and B are simultaneous with respect to the embankment, we mean: the rays of light emitted at the places A and B, where the lightning occurs, meet each other at the mid-point M of the length A → B of the embankment. But the events A and B also correspond to positions A and B on the train. Let M’ be the mid-point of the distance A → B on the travelling train. Just when the flashes of lightning occur, this point M’ naturally coincides with the point M, but it moves towards the right in the diagram with the velocity v of the train. If an observer sitting in the position M’ in the train did not possess this velocity, then he would remain permanently at M, and the light rays emitted by the flashes of lightning A and B would reach him simultaneously, i.e. they would meet just where he is situated. Now in reality (considered with reference to the railway embankment) he is hastening towards the beam of light coming from B, whilst he is riding on ahead of the beam of light coming from A. Hence the observer will see the beam of light emitted from B earlier than he will see that emitted from A. Observers who take the railway train as their reference-body must therefore come to the conclusion that the lightning flash B took place earlier than the lightning flash A. We thus arrive at the important result:

Events which are simultaneous with reference to the embankment are not simultaneous with respect to the train, and vice versa (relativity of simultaneity). Every reference body (co-ordinate system) has its own particular time; unless we are told the reference-body to which the statement of time refers, there is no meaning in a statement of the time of an event.

It’s important to note that there is a time delay, however minuscule, between the instant that the flashes of light are emitted at A and B and the instant when they reach the observer at M.

Because of the minuscule time delay, the flashes of light wouldn’t reach the observer at M’ in the carriage at the same time that they reach the observer at M on the embankment. The observer at M’ is directly opposite the observer at M when the flashes of light are emitted, not when they are received simultaneously at M. During the minuscule delay between the emission of the flashes at A and B and their simultaneous receipt by the observer at M, the observer at M’ moves toward B and away from A. The observer at M’ therefore sees the flash emitted from B a tiny fraction of a second before he sees the flash emitted from A. Neither event corresponds in time with the time at which the flashes reach M. (There are variations on Einstein’s thought experiment — here, for example — but they trade on the same subtlety: a time delay between the flashes of light and their reception by observers.)
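To put rough numbers on that delay, here is a sketch in Python, worked entirely in the embankment frame. The separation of A and B and the train’s speed are arbitrary illustrative values, not anything taken from Einstein:

# Arrival times of the two flashes, reckoned entirely in the embankment frame.
# A and B are a distance L apart; M is the midpoint; M' starts at M and moves
# toward B at the train's speed v. L and v are assumed values for illustration.
C = 299_792_458.0   # speed of light, m/s
L = 2_000.0         # distance from A to B, meters (assumed)
v = 50.0            # train speed toward B, m/s (assumed)

t_at_M    = (L / 2) / C          # both flashes reach the stationary observer at M together
t_B_at_Mp = (L / 2) / (C + v)    # flash from B closes on M' at c + v (embankment frame)
t_A_at_Mp = (L / 2) / (C - v)    # flash from A closes on M' at c - v (embankment frame)

print(f"Both flashes reach M at    t = {t_at_M:.12e} s")
print(f"Flash from B reaches M' at t = {t_B_at_Mp:.12e} s")
print(f"Flash from A reaches M' at t = {t_A_at_Mp:.12e} s")
print(f"M' sees B before A by roughly  {t_A_at_Mp - t_B_at_Mp:.3e} s")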

Returning to Einstein’s diagram, suppose that A and B are copper wires, tautly strung from posts and closely overhanging the track and embankment at 90-degree angles to both. The track is slightly depressed below the level of the embankment, so that observers on the train and embankment are the same distance below the wires. The wires are shielded so that they can be seen only by observers directly below them. Because the shielding doesn’t deflect lightning, when lightning hits the wires they will glow instantaneously. If lightning strikes A and B at the same time, the glow  will be seen simultaneously by observers positioned directly under the wires at the instant of the lightning strikes. Therefore, observers on the embankment at A and B and observers directly opposite them on the train at A’ and B’ will see the wires glow at the same time. (The observers would be equipped with synchronized clocks, the readings of which they can compare to verify their simultaneous viewing of the lightning strikes. I will leave to a later post the question whether the clocks at A and B show the same time as the clocks at A’ and B’.)

Because of the configuration of the wires in relation to the  track and the embankment, A and B must be the same distance apart as A’ and B’. That is to say, the simultaneity of observation isn’t an artifact of the distortion of horizontal measurements, or length contraction, which is another aspect of STR.

Einstein’s version of the thought experiment was designed — unintentionally, I assume — to create an illusion of non-simultaneity. Of course an observer on the train at M’ would not see the lightning flashes at the same time as an observer on the embankment at M: The observer at M’ would no longer be opposite M when the lightning flashes arrive at M. But, as shown by my variation on Einstein’s thought experiment, that doesn’t rule out the simultaneity of observations on the train and on the embankment. It just requires a setup that isn’t designed to exclude simultaneity. My setup involving copper wires is one possible way of ensuring simultaneity. It also seems to rule out the possibility of length contraction.

Einstein’s defense of his thought experiment (in the fifth block quotation above) is also an apt defense of my version of his thought experiment. I have described a situation in which there is indubitable simultaneity. The question is whether it forecloses a subsequent proof of non-simultaneity. Einstein’s thought experiment didn’t, because Einstein left a loophole — intentionally or not — which discredits his proof of non-simultaneity. (I am not the first person who claims to have discovered the loophole.) My thought experiment leaves no loophole, as far as I can tell.

If my thought experiment has merit, it points to an invariant time, that is, a time which is the same for all frames of reference. A kind of Newtonian absolute time, if you will.

To be continued.

Daylight Saving Time Doesn’t Kill…

…it’s “springing forward” in March that kills.

There’s a hue and cry about daylight saving time (that’s “saving” not “savings”). The main complaint seems to be the stress that results from moving clocks ahead in March:

Springing forward may be hazardous to your health. The Monday following the start of daylight saving time (DST) is a particularly bad one for heart attacks, traffic accidents, workplace injuries and accidental deaths. Now that most Americans have switched their clocks an hour ahead, studies show many will suffer for it.

Most Americans slept about 40 minutes less than normal on Sunday night, according to a 2009 study published in the Journal of Applied Psychology…. Since sleep is important for maintaining the body’s daily performance levels, much of society is broadly feeling the impact of less rest, which can include forgetfulness, impaired memory and a lower sex drive, according to WebMD.

One of the most striking effects of this annual shift: Last year, Colorado researchers reported finding a 25 percent increase in the number of heart attacks that occur on the Monday after DST starts, as compared with a normal Monday…. A cardiologist in Croatia recorded about twice as many heart attacks as expected during that same day, and researchers in Sweden have also witnessed a spike in heart attacks in the week following the time adjustment, particularly among those who were already at risk.

Workplace injuries are more likely to occur on that Monday, too, possibly because workers are more susceptible to a loss of focus due to too little sleep. Researchers at Michigan State University used over 20 years of data from the Mine Safety and Health Administration to determine that three to four more miners than average sustain a work-related injury on the Monday following the start of DST. Those injuries resulted in 2,649 lost days of work, which is a 68 percent increase over the hours lost from injuries on an average day. The team found no effects following the nation’s one-hour shift back to standard time in the fall….

There’s even more bad news: Drivers are more likely to be in a fatal traffic accident on DST’s first Monday, according to a 2001 study in Sleep Medicine. The authors analyzed 21 years of data on fatal traffic accidents in the U.S. and found that, following the start of DST, drivers are in 83.5 accidents as compared with 78.2 on the average Monday. This phenomenon has also been recorded in Canadian drivers and British motorists.

If all that wasn’t enough, a researcher from the University of British Columbia who analyzed three years of data on U.S. fatalities reported that accidental deaths of any kind are more likely in the days following a spring forward. Their 1996 analysis showed a 6.5 percent increase, which meant that about 200 more accidental deaths occurred immediately after the start of DST than would typically occur in a given period of the same length.

I’m convinced. But the solution to the problem isn’t to get rid of DST. No, the solution is to get rid of standard time and use DST year around.

I’m not arguing for year-around DST from an economic standpoint. The evidence about the economic advantages of DST is inconclusive.

I’m arguing for year-around DST as a way to eliminate “spring forward” distress and enjoy an extra hour of daylight in the winter.

Don’t you enjoy those late summer sunsets? I sure do, and a lot of other people seem to enjoy them, too. That’s why daylight saving time won’t be abolished.

But if you love those late summer sunsets, you should also enjoy an extra hour of daylight at the end of a drab winter day. I know that I would. And it’s not as if you’d miss anything if the sun rises an hour later in the winter. Even with standard time, most working people and students have to be up and about before sunrise in winter, even though sunrise comes an hour earlier than it would with DST.

How would year-around DST affect you? The following table gives the times of sunrise and sunset on the longest and shortest days of 2017 for nine major cities, north to south and west to east:

I report, you decide. If it were up to me, the decision would be year-around DST.

Thoughts for the Day

Excerpts of recent correspondence.

Robots, and their functional equivalents in specialized AI systems, can either replace people or make people more productive. I suspect that the latter has been true in the realm of medicine — so far, at least. But I have seen reportage of robotic units that are beginning to perform routine, low-level work in hospitals. So, as usual, the first people to be replaced will be those with rudimentary skills, not highly specialized training. Will it go on from there? Maybe, but the crystal ball is as cloudy as an old-time London fog.

In any event, I don’t believe that automation is inherently a job-killer. The real job-killer consists of government programs that subsidize non-work — early retirement under Social Security, food stamps and other forms of welfare, etc. Automation has been in progress for eons, and with a vengeance since the second industrial revolution. But, on balance, it hasn’t killed jobs. It just pushes people toward new and different jobs that fit the skills they have to offer. I expect nothing different in the future, barring government programs aimed at subsidizing the “victims” of technological displacement.

*      *      *

It’s civil war by other means (so far): David Wasserman, “Purple America Has All but Disappeared” (The New York Times, March 8, 2017).

*      *      *

I know that most of what I write (even the non-political stuff) has a combative edge, and that I’m therefore unlikely to persuade people who disagree with me. I do it my way for two reasons. First, I’m too old to change my ways, and I’m not going to try. Second, in a world that’s seemingly dominated by left-wing ideas, it’s just plain fun to attack them. If what I write happens to help someone else fight the war on leftism — or if it happens to make a young person re-think a mindless commitment to leftism — that’s a plus.

*     *     *

I am pessimistic about the likelihood of cultural renewal in America. The populace is too deeply saturated with left-wing propaganda, which is injected from kindergarten through graduate school, with constant reinforcement via the media and popular culture. There are broad swaths of people — especially in low-income brackets — whose lives revolve around mindless escape from the mundane via drugs, alcohol, promiscuous sex, etc. Broad swaths of the educated classes have abandoned erudition and contemplation and taken up gadgets and entertainment.

The only hope for conservatives is to build their own “bubbles,” like those of effete liberals, and live within them. Even that will prove difficult as long as government (especially the Supreme Court) persists in storming the ramparts in the name of “equality” and “self-creation.”

*     *     *

I correlated Austin’s average temperatures in February and August. Here are the correlation coefficients for the following periods:

1854-2016 = 0.001
1875-2016 = -0.007
1900-2016 = 0.178
1925-2016 = 0.161
1950-2016 = 0.191
1975-2016 = 0.126

Of these correlations, only the one for 1900-2016 is statistically significant at the 0.05 level (less than a 5-percent chance of a random relationship). The correlations for 1925-2016 and 1950-2016 are fairly robust, and almost significant at the 0.05 level. The relationship for 1975-2016 is statistically insignificant. I conclude that there’s a positive relationship between February and August temperatures, but a weak one. A warm winter doesn’t necessarily presage an extra-hot summer in Austin.
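For readers who want to run this kind of calculation themselves, here is a rough sketch in Python. The file name and column names are placeholders, not the actual data set used for the figures above:

# Rough sketch of the correlation calculation described above. The file name
# and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

temps = pd.read_csv("austin_temps.csv")   # assumed columns: year, feb_avg, aug_avg

for start in (1854, 1875, 1900, 1925, 1950, 1975):
    subset = temps[(temps["year"] >= start) & (temps["year"] <= 2016)]
    r, p = pearsonr(subset["feb_avg"], subset["aug_avg"])
    print(f"{start}-2016: r = {r:.3f}, p = {p:.3f}")
# A p-value below 0.05 corresponds to the 5-percent significance test mentioned above.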

Is Consciousness an Illusion?

Scientists seem to have pinpointed the physical source of consciousness. But the execrable Daniel C. Dennett, for whom science is God, hasn’t read the memo. Dennett argues in his latest book, From Bacteria to Bach and Back: The Evolution of Minds, that consciousness is an illusion.

Another philosopher, Thomas Nagel, weighs in with a dissenting review of Dennett’s book. (Nagel is better than Dennett, but that’s faint praise.) Nagel’s review, “Is Consciousness an Illusion?,” appears in The New York Review of Books (March 9, 2017). Here are some excerpts:

According to the manifest image, Dennett writes, the world is

full of other people, plants, and animals, furniture and houses and cars…and colors and rainbows and sunsets, and voices and haircuts, and home runs and dollars, and problems and opportunities and mistakes, among many other such things. These are the myriad “things” that are easy for us to recognize, point to, love or hate, and, in many cases, manipulate or even create…. It’s the world according to us.

According to the scientific image, on the other hand, the world

is populated with molecules, atoms, electrons, gravity, quarks, and who knows what else (dark energy, strings? branes?)….

In an illuminating metaphor, Dennett asserts that the manifest image that depicts the world in which we live our everyday lives is composed of a set of user-illusions,

like the ingenious user-illusion of click-and-drag icons, little tan folders into which files may be dropped, and the rest of the ever more familiar items on your computer’s desktop. What is actually going on behind the desktop is mind-numbingly complicated, but users don’t need to know about it, so intelligent interface designers have simplified the affordances, making them particularly salient for human eyes, and adding sound effects to help direct attention. Nothing compact and salient inside the computer corresponds to that little tan file-folder on the desktop screen.

He says that the manifest image of each species is “a user-illusion brilliantly designed by evolution to fit the needs of its users.” In spite of the word “illusion” he doesn’t wish simply to deny the reality of the things that compose the manifest image; the things we see and hear and interact with are “not mere fictions but different versions of what actually exists: real patterns.” The underlying reality, however, what exists in itself and not just for us or for other creatures, is accurately represented only by the scientific image—ultimately in the language of physics, chemistry, molecular biology, and neurophysiology….

You may well ask how consciousness can be an illusion, since every illusion is itself a conscious experience—an appearance that doesn’t correspond to reality. So it cannot appear to me that I am conscious though I am not: as Descartes famously observed, the reality of my own consciousness is the one thing I cannot be deluded about….

According to Dennett, however, the reality is that the representations that underlie human behavior are found in neural structures of which we know very little. And the same is true of the similar conception we have of our own minds. That conception does not capture an inner reality, but has arisen as a consequence of our need to communicate to others in rough and graspable fashion our various competencies and dispositions (and also, sometimes, to conceal them)….

The trouble is that Dennett concludes not only that there is much more behind our behavioral competencies than is revealed to the first-person point of view—which is certainly true—but that nothing whatever is revealed to the first-person point of view but a “version” of the neural machinery….

I am reminded of the Marx Brothers line: “Who are you going to believe, me or your lying eyes?” Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”

Nagel’s counterargument would have been more compelling if he had relied on a simple metaphor like this one: Most drivers can’t describe in any detail the process by which an automobile converts the potential energy of gasoline to the kinetic energy that’s produced by the engine and then transmitted eventually to the automobile’s drive wheels. Instead, most drivers simply rely on the knowledge that pushing the start button will start the car. That knowledge may be shallow, but it isn’t illusory. If it were, an automobile would be a useless hulk sitting in the driver’s garage.

Some tough questions are in order, too. If consciousness is an illusion, where does it come from? Dennett is an out-and-out physicalist and strident atheist. It therefore follows that Dennett can’t believe in consciousness (the manifest image) as a free-floating spiritual entity that’s disconnected from physical reality (the scientific image). It must, in fact, be a representation of physical reality, even if a weak and flawed one.

Looked at another way, consciousness is the gateway to the scientific image. It is only through the deliberate, reasoned, fact-based application of consciousness that scientists have been able to roll back the mysteries of the physical world and improve the manifest image so that it more nearly resembles the scientific image. The gap will never be closed, of course. Even the most learned of human beings have only a tenuous grasp of physical reality in all of its myriad aspects. Nor will anyone ever understand what physical reality “really is” — it’s beyond apprehension and description. But that doesn’t negate the symbiosis of physical reality and consciousness.

*     *     *

Related posts:
Debunking “Scientific Objectivity”
A Non-Believer Defends Religion
Evolution as God?
The Greatest Mystery
What Is Truth?
The Improbability of Us
The Atheism of the Gaps
Demystifying Science
Something from Nothing?
Something or Nothing
My Metaphysical Cosmology
Further Thoughts about Metaphysical Cosmology
Nothingness
The Glory of the Human Mind
Mind, Cosmos, and Consciousness
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Words Fail Us
Hayek’s Anticipatory Account of Consciousness

Special Relativity: Answers and Questions

SEE THE ADDENDUM OF 02/26/17 AT THE END OF THIS POST

The speed of light in a vacuum is 186,282 miles per second. It is a central tenet of the special theory of relativity (STR) that the speed of light is the same for every observer, regardless of the motion of an observer relative to the source of the light being observed. The meaning of the latter statement is not obvious to a non-physicist (like me). In an effort to understand it, I concocted the following thought experiment (TE), which I will call TE 1:

1. There is a long train car running smoothly on a level track, at a constant speed of 75 miles per hour (mph) relative to an observer who is standing close to the track. One side of the car is a one-way mirror, arranged so that the outside observer (Ozzie) can see what is happening inside the car but an observer inside the car cannot see what is happening outside. For all the observer inside the car knows, the train car is stationary with respect to the surface of the Earth. (This is not a special condition; persons standing on the ground do not sense that they are revolving with the Earth at a speed of about 1,000 mph.)

2. The train car is commodious enough for a pitcher (Pete) and catcher (Charlie, the inside observer) to play a game of catch over a distance of 110 feet, from the pitcher’s release point to the catcher’s glove. Pete throws a baseball to Charlie at a speed of 75 mph (110 feet per second, or fps), relative to Charlie, so that the ball reaches his glove 1 second after Pete has released it. This is true regardless of the direction of the car or the positions of Pete and Charlie with respect to the direction of the car.

3. How fast the ball is thrown, relative to Ozzie, does depend on the movement of the car and positions of Pete and Charlie, relative to Ozzie. For example, when the car is moving toward Ozzie, and Pete is throwing in Ozzie’s direction, Ozzie sees the ball moving toward him at 150 mph. To understand why this is so, assume that Pete releases the ball when his release point is 220 feet from Ozzie and, accordingly, Charlie’s glove is 110 feet from Ozzie. The ball traverses the 110 feet between Pete and Charlie in 1 second, during which time the train moves 110 feet toward Ozzie. Therefore, when Charlie catches the ball, his glove is adjacent to Ozzie, and the ball has traveled 220 feet, from Ozzie’s point of view. Thus Ozzie reckons that the ball has traveled 220 feet in 1 second, or at a speed of 150 mph. This result is consistent with the formula of classical physics: To a stationary observer, the apparent speed of an emitted object is the speed of that object (the baseball) plus the speed of whatever emits it (Pete on a moving train car).
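The arithmetic of item 3 can be checked in a few lines of Python, using the toy numbers of TE 1:

# TE 1, item 3: classical addition of velocities, in the units of the example.
FPS_TO_MPH = 3600 / 5280      # 1 foot per second = 0.6818... miles per hour

car_fps  = 110.0              # train car, 75 mph toward Ozzie
ball_fps = 110.0              # baseball, 75 mph relative to Pete and Charlie

# In 1 second the ball covers 110 ft inside the car while the car itself moves
# 110 ft toward Ozzie, so Ozzie sees the ball cover 220 ft in that second.
distance_seen_by_ozzie_ft = (car_fps + ball_fps) * 1.0
speed_seen_by_ozzie_mph   = (car_fps + ball_fps) * FPS_TO_MPH

print(distance_seen_by_ozzie_ft)    # 220.0 feet
print(speed_seen_by_ozzie_mph)      # 150.0 mph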

*     *     *

So far, so good, from the standpoint of classical physics. Classical physics “works” at low speeds (relative to the speed of light) because relativistic effects are imperceptible at low speeds. (See this post, for example.)
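To put a number on “imperceptible,” here is a quick sketch in Python that compares the classical sum with the standard relativistic velocity-addition formula for the 75-mph baseball, using the real value of c:

# How small are relativistic effects at 75 mph? A sketch with the real c.
import math

C_MPH = 186_282 * 3600.0      # speed of light in miles per hour (~6.7e8)
v = 75.0                      # train car, mph
u = 75.0                      # baseball relative to the car, mph

classical    = v + u                                   # Galilean sum
relativistic = (v + u) / (1 + v * u / C_MPH ** 2)      # STR velocity addition

print(classical - relativistic)       # ~2e-12 mph: far below any measurement
gamma = 1 / math.sqrt(1 - (v / C_MPH) ** 2)
print(gamma - 1)                      # ~6e-15: the fractional slowing of clocks on the car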

But consider what happens if Pete “throws” light instead of a baseball, according to STR. This is TE 2:

1. The perceived speed of light is not affected by the speed at which an emitting object (e.g., a flashlight) is traveling relative to an observer. Accordingly, if the speed of light were 75 mph and Pete were to “throw” light instead of a baseball, it would take 1 second for the light to reach Charlie’s glove. Charlie would therefore measure the speed of light as 75 mph.

2. As before, Charlie would have moved 110 feet toward Ozzie in that 1 second, so that Charlie’s glove would be abreast of Ozzie at the instant of the arrival of light. It would seem that Ozzie should calculate the speed of light as 150 mph.

3. But this cannot be so if the speed of light is the same for all observers. That is, both Charlie and Ozzie should measure the speed of light as 75 mph.

4. How can Ozzie’s measurement be brought into line with Charlie’s? Generalizing from the relationship between distance (d), time (t), and speed (v):

  • d = tv (i.e., t x v, in case you are unfamiliar with algebraic expressions);
  • therefore, v = d/t;
  • which is satisfied by any feasible combination of d and t that yields v = 110 fps (75 mph).

(Key point: The relevant measurements of t and d are those made by Ozzie, from his perspective as an observer standing by the track while the train car moves toward him. In other words, Ozzie will obtain measures of t and/or d that differ from those made by Charlie.)

5. Thus there are two limiting possibilities that satisfy the condition v = 110 fps (75 mph), which is the fixed speed of light in this example:

A. If t = 2 seconds and d = 220 feet, then v = 110 fps.

B. If t = 1 second and d = 110 ft, then v = 110 fps.

6. Regarding possibility A: t stretches to 2 seconds while d remains 220 feet. The stretching of t is a relativistic phenomenon known as time dilation. From Ozzie’s perspective, the train car slows down. More exactly, a clock mounted in the train car would seem (to Ozzie) to run at half-speed from the moment Pete releases the ball of light.

7. Regarding possibility B: d contracts to 110 feet while t remains 1 second. The contraction of d is a relativistic phenomenon known as length contraction. From Ozzie’s perspective, it appears that the distance from Pete’s release point to Charlie’s catch (which occurs when Charlie is adjacent to Ozzie) shrinks when Pete releases the ball of light, so that Ozzie sees it as 110 feet.

8. There is no reason to favor one phenomenon over the other; therefore, what Ozzie sees is a combination of the two, such that the observed speed of the ball of light is 75 mph.
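For reference, the standard relativistic velocity-addition formula reaches the same bottom line (Ozzie measures the ball of light at c) without specifying the mix of time dilation and length contraction. A sketch, keeping the toy value c = 110 fps:

# Standard STR velocity-addition rule, with the toy value c = 110 ft/s of TE 2.
# This is the textbook formula, not a claim about which combination of time
# dilation and length contraction Ozzie "really" sees.
C = 110.0    # toy speed of light, feet per second

def speed_seen_from_embankment(v_car, u_in_car):
    """Speed, relative to the embankment, of something moving at u inside a
    car that moves at v -- the relativistic replacement for W = v + u."""
    return (v_car + u_in_car) / (1 + v_car * u_in_car / C ** 2)

for v_car in (10.0, 55.0, 99.0, 109.0):     # car speeds below the toy c
    print(f"car at {v_car:5.1f} ft/s: ball of light measured at "
          f"{speed_seen_from_embankment(v_car, C):.1f} ft/s by Ozzie")
# However fast the car moves (short of c), the ball of light still comes out
# at exactly c = 110 ft/s for the track-side observer.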

*     *     *

Here is TE 3, which is a variation on TE 2:

1. The train car is now traveling leftward at 110 fps, as seen by Ozzie. The car  is a caboose, and Pete is standing on the rear platform, whence he throws the baseball rightward (relative to Ozzie) at 110 fps (relative to Pete).

2. Ozzie is directly opposite Pete when Pete releases the ball at t = 0. According to classical physics, Ozzie would perceive the ball as stationary; that is, the sum of the speed of the train car relative to Ozzie (- 110 fps) and the speed of the baseball relative to Pete (110 fps) is zero. In other words, Ozzie should see the ball hanging in mid-air for at least 1 second.

3. Do you really expect the ball to stand still (relative to Ozzie) in mid-air for 1 second? No, you don’t. You really expect, quite reasonably, that the ball will move to Ozzie’s right, just as the beam from a flashlight switched on at t = 0 would move to Ozzie’s right.

4. Now suppose that Charlie is stationary relative to Pete, as before. This time, however, Charlie is standing at the front of a train car that is following Pete’s train car at a constant distance of 110 feet. According to the setup of TE 1, Charlie will be directly opposite Pete at t = 1, and Charlie will catch the ball at that instant. How can that be if the ball actually moves to Ozzie’s right, as stipulated in the preceding paragraph?

5. If Pete had thrown a ball of light at t = 0 — a very slow ball that goes only 110 fps — it would hit Charlie’s glove at t = 1, as seen by Charlie. If Ozzie is to see Charlie catch the ball of light, even though it moves to Ozzie’s right, Charlie cannot be directly opposite Ozzie at t = 1, but must be somewhere to Ozzie’s right.

6. As in TE 2, this situation requires Pete and Charlie’s train cars to slow down (as seen by Ozzie), the distance between Pete and Charlie to stretch (as seen by Ozzie), or a combination of the two. Whatever the combination, Ozzie will measure the speed of the ball of light as 110 fps (75 mph). At one extreme, the distance between Pete and Charlie would seem to stretch from 110 feet to 220 feet when Pete releases the ball, so that Ozzie sees Charlie catch the ball 2 seconds after Pete releases it, and 110 feet to Ozzie’s right. At the other extreme (or near it), the distance between Pete and Charlie would seem to stretch from 110 feet to, say, 111 feet when Pete releases the ball, so that Ozzie sees Charlie catch the ball just over 1 second after Pete releases it, and 1 foot to Ozzie’s right. The outcome is slightly different than that of TE 2 because Pete and Charlie are moving to the left instead of the right, while the ball is moving to the right, as before.

7. In the case of a real ball moving at 75 mph, the clocks would slow imperceptibly and/or the distance would shrink imperceptibly, maintaining the illusion that the formula of classical physics is valid — but it is not. It only seems to be because the changes are too small to be detected by ordinary means.

*     *     *

TE 2 and TE 3 are rough expositions of how perceptions of space and time are affected by the relative motion of disparate objects, according to STR. I set the speed of light at the absurdly low figure of 75 mph to simplify the examples, but there is no essential difference between my expositions and what is supposed to happen to Ozzie’s perceptions of time and distance, according to STR.

If Pete and Charlie actually could move at the speed of light, some rather strange things would happen, according to STR, but I won’t go into them here. It is enough to note that STR implies that light has “weird” properties, which lead to “weird” perceptions about the relative speeds and sizes of objects that are moving relative to an observer. (I am borrowing “weird” from pages 23 and 24 of physicist Lewis Carroll Epstein’s Relativity Visualized, an excellent primer on STR, replete with insightful illustrations.)

The purpose of my explanation is not to demonstrate my grasp of STR (which is rudimentary but skeptical), or to venture an explanation of the “weird” nature of light. My purpose is to set the stage for some probing questions about STR. The questions are occasioned by the “fact” that occasioned STR: the postulate that the speed of light is the same in free space for all observers, regardless of their motion relative to the light source. If that postulate is true, then the preceding discussion is valid in its essentials; if it is false, much that physicists now claim to know is wrong.

*     *     *

My first question is about the effect of a change in Charlie’s perception of movement:

a. Recall that in TE 1 and TE 2 Charlie (the observer in the train car) is unaware that the car is moving relative to the surface of the Earth. Let us remedy that ignorance by replacing the one-way mirror on the side of the car with clear glass. Charlie then sees that the car is moving, at a speed that he calculates with the aid of a stopwatch and distance markers along the track. Does Charlie’s new perception affect his estimate of the speed of a baseball thrown by Pete?

b. The answer is “yes” and “no.” The “yes” comes from the fact that Charlie now appreciates that the forward speed of the baseball, relative to the ground or a stationary observer next to the track, is not 75 mph but 150 mph. The “no” comes from the fact that the baseball’s speed, relative to Charlie, remains 75 mph. Although this new knowledge gives Charlie information about how others may perceive the speed of a baseball thrown by Pete, it does not change Charlie’s original perception.

c. Charlie may nevertheless ask if there is any way of assigning an absolute value to the speed of the thrown baseball. He understands that such a speed may have no practical relevance (e.g., to a batter who is stationary with respect to Pete and Charlie). But if there is no such thing as absolute speed, because all motion is relative, then how can light be assigned an absolute speed of 186,282 miles per second in a vacuum? I say “absolute” because that is what the speed of light seems to be, for practical purposes.

*     *     *

That leads to my second question:

a. Do the methods of determining the speed of light betray an error in thinking about how that speed is affected by the speed of objects that emit light?

b. Suppose, for example, that observers (or their electronic equivalent) are positioned along the track at intervals of 55 feet, and that their clocks are synchronized to record the number of seconds after Pete releases a baseball. The first observer, who is abreast of Pete’s release point, records the time of release as 0. The ball leaves Pete’s hand at a speed of 110 fps, relative to Pete, who is moving at a speed of 110 fps, for a combined speed of 220 fps. Accordingly, the baseball will be abreast of the second observer at 0.25 seconds, the third observer at 0.5 seconds, the fourth observer at 0.75 seconds, and the fifth observer at 1 second. The fifth observer, of course, is Ozzie, who is adjacent to Charlie’s glove when Charlie catches the ball. (The arithmetic is sketched in the code following item d below.)

c. Change “baseball” to “light” and the result changes as described in TE 2 and TE 3, following the tenets of STR. It changes because the speed of light is supposed to be a limiting speed. But is it? For example, a Wikipedia article about faster-than-light phenomena includes a long section that gives several reasons (advanced by physicists) for doubting that the speed of light is a limiting speed.

d. It is therefore possible that the conduct and interpretation of experiments corroborating the constant nature of the speed of light have been influenced (subconsciously) by the crucial place of STR in physics. For example, an observer may see two objects approach (close with) each other at a combined speed greater than the speed of light. Accordingly, it would be possible for the Ozzie of my thought experiment to measure the velocity of a ball of light thrown by Pete as the sum of the speed of light and the speed of the train car. But that is not the standard way of explaining things in the literature with which I am familiar. Instead, the reader is told (by Epstein and other physicists) that Ozzie cannot simply add the two speeds because the speed of light is a limiting speed.
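Here is the sketch promised in item b, checking the trackside timing with the toy numbers of the thought experiment (classical addition, baseball case):

# Item b above: when does the baseball (thrown at 110 ft/s from a car moving
# at 110 ft/s) pass each trackside observer, under classical addition?
ball_speed_over_ground = 110.0 + 110.0    # feet per second
observer_spacing       = 55.0             # feet between adjacent observers

for i in range(5):                        # observers 1 through 5
    position = i * observer_spacing       # feet from Pete's release point
    t = position / ball_speed_over_ground
    print(f"observer {i + 1} at {position:5.1f} ft: ball passes at t = {t:.2f} s")
# Observer 5, at 220 ft, records t = 1.00 s -- that is Ozzie, standing beside
# Charlie's glove at the instant of the catch.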

*     *     *

My rudimentary understanding of STR leaves me in doubt about its tenets, its implications, and the validity of experiments that seem to confirm those tenets and implications. I need to know a lot more about the nature of light and the nature of space-time (as a non-mathematical entity) before accepting the “scientific consensus” that STR has been verified beyond a reasonable doubt. The more recent rush to “scientific consensus” about “global warming” should be taken as a cautionary tale for retrospective application.

ADDENDUM, 02/26/17:

I’ve just learned of the work of Thomas E. Phipps Jr. (1925-2016), a physicist who happens to have been a member of a World War II operations research unit that evolved into the think-tank where I worked for 30 years. Phipps long challenged the basic tenets of STR. A paper by Robert J. Buenker, “Commentary on the Work of Thomas E. Phipps Jr. (1925-2016)” gives a detailed, technical summary of Phipps’s objections to STR. I will spend some time reviewing Buenker’s paper and a book by Phipps that I’ve ordered. Meanwhile, consider this passage from Buenker’s paper:

[T]he supposed inextricable relationship between space and time is shown to be simply the result of an erroneous (and undeclared) assumption made by Einstein in his original work. Newton was right and Einstein was wrong. Instead, one can return to the ancient principle of the objectivity of measurement. The only reason two observers can legitimately disagree about the value of a measurement is because they base their results on a different set of units…. Galileo’s Relativity Principle needs to be amended to read: The laws of physics are the same in all inertial systems but the units in which their results are expressed can and do vary from one rest frame to another.

Einstein’s train thought-experiment (and its variant) may be wrong.

I have long thought that the Lorentz transformation, which is central to STR, actually undercuts the idea of non-simultaneity because it reconciles the observations of observers in different frames of reference:

[T]he Lorentz transformation … relates the coordinates used by one observer to coordinates used by another in uniform relative motion with respect to the first.

Assume that the first observer uses coordinates labeled t, x, y, and z, while the second observer uses coordinates labeled t’, x’, y’, and z’. Now suppose that the first observer sees the second moving in the x-direction at a velocity v. And suppose that the observers’ coordinate axes are parallel and that they have the same origin. Then the Lorentz transformation expresses how the coordinates are related:

t′ = (t − vx/c²) / √(1 − v²/c²),
x′ = (x − vt) / √(1 − v²/c²),
y′ = y,
z′ = z,

where c is the speed of light.
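The transformation is easy to apply numerically. The sketch below simply codes the formulas quoted above and feeds them two events that occur at t = 0 in the unprimed frame; the separation and the relative velocity are illustrative values only:

# The Lorentz transformation of the quoted formulas, applied to two events
# that occur at t = 0 in the unprimed frame. All numbers are illustrative.
import math

C = 299_792_458.0    # speed of light, m/s

def lorentz(t, x, v):
    """Coordinates (t', x') of the event (t, x) in the frame of an observer
    moving at velocity v along the x-axis."""
    gamma = 1.0 / math.sqrt(1.0 - v ** 2 / C ** 2)
    return gamma * (t - v * x / C ** 2), gamma * (x - v * t)

v_frame  = 30.0                   # relative velocity of the second observer, m/s
event_A  = (0.0, 0.0)             # (t, x) of one event
event_B  = (0.0, 1_000.0)         # (t, x) of another, 1,000 m away

tA_prime, xA_prime = lorentz(*event_A, v_frame)
tB_prime, xB_prime = lorentz(*event_B, v_frame)
print(tA_prime, tB_prime)         # the primed time coordinates differ by about 3.3e-13 s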

More to come.

Fine-Tuning in a Wacky Wrapper

The Unz Review hosts columnists who hold a wide range of views, including whacko-bizarro-conspiracy-theory-nut-job ones. Case in point: Kevin Barrett, who recently posted a review of David Ray Griffin’s God Exists But Gawd Does Not: From Evil to the New Atheism to Fine Tuning. Some things said by Barrett in the course of his review suggest that Griffin, too, holds whacko-bizarro-conspiracy-theory-nut-job views; for example:

In 2004 he published The New Pearl Harbor — which still stands as the single most important work on 9/11 — and followed it up with more than ten books expanding on his analysis of the false flag obscenity that shaped the 21st century.

Further investigation — a trip to Wikipedia — tells me that Griffin believes there is

a prima facie case for the contention that there must have been complicity from individuals within the United States and joined the 9/11 Truth Movement in calling for an extensive investigation from the United States media, Congress and the 9/11 Commission. At this time, he set about writing his first book on the subject, which he called The New Pearl Harbor: Disturbing Questions About the Bush Administration and 9/11 (2004).

Part One of the book looks at the events of 9/11, discussing each flight in turn and also the behaviour of President George W. Bush and his Secret Service protection. Part Two examines 9/11 in a wider context, in the form of four “disturbing questions.” David Ray Griffin discussed this book and the claims within it in an interview with Nick Welsh, reported under the headline Thinking Unthinkable Thoughts: Theologian Charges White House Complicity in 9/11 Attack….

Griffin’s second book on the subject was a direct critique of the 9/11 Commission Report, called The 9/11 Commission Report: Omissions And Distortions (2005). Griffin’s article The 9/11 Commission Report: A 571-page Lie summarizes this book, presenting 115 instances of either omissions or distortions of evidence he claims are in the report, stating that “the entire Report is constructed in support of one big lie: that the official story about 9/11 is true.”

In his next book, Christian Faith and the Truth Behind 9/11: A Call to Reflection and Action (2006), he summarizes some of what he believes is evidence for government complicity and reflects on its implications for Christians. The Presbyterian Publishing Corporation, publishers of the book, noted that Griffin is a distinguished theologian and praised the book’s religious content, but said, “The board believes the conspiracy theory is spurious and based on questionable research.”

And on and on and on. The moral of which is this: If you already “know” the “truth,” it’s easy to weave together factual tidbits that seem to corroborate it. It’s an old game that any number of persons can play; for example: Mrs. Lincoln hired John Wilkes Booth to kill Abe; Woodrow Wilson was behind the sinking of the Lusitania, which “forced” him to ask for a declaration of war against Germany; FDR knew about Japan’s plans to bomb Pearl Harbor but did nothing so that he could then have a roundly applauded excuse to ask for a declaration of war on Japan; LBJ ordered the assassination of JFK; etc. Some of those bizarre plots have been “proved” by recourse to factual tidbits. I’ve no doubt that all of them could be “proved” in that way.

If that is so, you may well ask why I am writing about Barrett’s review of Griffin’s book. Because in the midst of Barrett’s off-kilter observations (e.g., “the Nazi holocaust, while terrible, wasn’t as incomparably horrible as it has been made out to be”) there’s a tantalizing passage:

Griffin’s Chapter 14, “Teleological Order,” provides the strongest stand-alone rational-empirical argument for God’s existence, one that should convince any open-minded person who is willing to invest some time in thinking about it and investigating the cited sources. This argument rests on the observation that at least 26 of the fundamental constants discovered by physicists appear to have been “fine tuned” to produce a universe in which complex, intelligent life forms could exist. A very slight variation in any one of these 26 numbers (including the strong force, electromagnetism, gravity, the mass difference between protons and neutrons, and many others) would produce a vastly less complex, rich, interesting universe, and destroy any possibility of complex life forms or intelligent observers. In short, the universe is indeed a miracle, in the sense of something indescribably wonderful and almost infinitely improbable. The claim that it could arise by chance (as opposed to intelligent design) is ludicrous.

Even the most dogmatic atheists who are familiar with the scientific facts admit this. Their only recourse is to embrace the multiple-universes interpretation of quantum physics, claim that there are almost infinitely many actual universes (virtually all of them uninteresting and unfit for life), and assert that we just happen to have gotten unbelievably lucky by finding ourselves in the one-universe-out-of-infinity-minus-one with all of the constants perfectly fine-tuned for our existence. But, they argue, we should not be grateful for this almost unbelievable luck — which is far more improbable than winning hundreds of multi-million-dollar lottery jackpots in a row. For our existence in an amazingly, improbably-wonderful-for-us universe is just a tautology, since we couldn’t possibly be in any of the vast, vast, vast majority of universes that we couldn’t possibly be in.

Griffin gently and persuasively points out that the multiple-universes defense of atheism is riddled with absurdities and inconsistencies. Occam’s razor definitively indicates that by far the best explanation of the facts is that the universe was created not just by an intelligent designer, but by one that must be considered almost supremely intelligent as well as almost supremely creative: a creative intelligence as far beyond Einstein-times-Leonardo-to-the-Nth-power as those great minds were beyond that of a common slug.

Fine-tuning is not a good argument for God’s existence. Here is a good argument for God’s existence:

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

Barrett (Griffin?) goes on:

Occam’s razor definitively indicates that by far the best explanation of the facts is that the universe was created not just by an intelligent designer, but by one that must be considered almost supremely intelligent as well as almost supremely creative: a creative intelligence as far beyond Einstein-times-Leonardo-to-the-Nth-power as those great minds were beyond that of a common slug.

Whoa! Occam’s razor indicates nothing of the kind:

Occam’s razor is used as a heuristic technique (discovery tool) to guide scientists in the development of theoretical models, rather than as an arbiter between published models. In the scientific method, Occam’s razor is not considered an irrefutable principle of logic or a scientific result; the preference for simplicity in the scientific method is based on the falsifiability criterion. For each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.

Barrett’s (Griffin’s?) hypothesis about the nature of the supremely intelligent being is unduly complicated. Not that the existence of God is a testable (falsifiable) hypothesis in the first place; it is simply a logical necessity, and should be left at that.

Scott Adams Understands Probability

A probability expresses the observed frequency of the occurrence of a well-defined event for a large number of repetitions of the event, where each repetition is independent of the others (i.e., random). Thus the probability that a fair coin will come up heads in, say, 100 tosses is approximately 0.5; that is, it will come up heads approximately 50 percent of the time. (In the penultimate paragraph of this post, I explain why I emphasize approximately.)

If a coin is tossed 100 times, what is the probability that it will come up heads on the 101st toss? There is no probability for that event because it hasn’t occurred yet. The coin will come up heads or tails, and that’s all that can be said about it.

Scott Adams, writing about the probability of being killed by an immigrant, puts it this way:

The idea that we can predict the future based on the past is one of our most persistent illusions. It isn’t rational (for the vast majority of situations) and it doesn’t match our observations. But we think it does.

The big problem is that we have lots of history from which to cherry-pick our predictions about the future. The only reason history repeats is because there is so much of it. Everything that happens today is bound to remind us of something that happened before, simply because lots of stuff happened before, and our minds are drawn to analogies.

…If you can rigorously control the variables of your experiment, you can expect the same outcomes almost every time [emphasis added].

You can expect a given outcome (e.g., heads) to occur approximately 50 percent of the time if you toss a coin a lot of times. But you won’t know the actual frequency (probability) until you measure it; that is, after the fact.

Here’s why. The statement that heads has a probability of 50 percent is a mathematical approximation, given that there are only two possible outcomes of a coin toss: heads or tails. While writing this post I used the RANDBETWEEN function of Excel 2016 to simulate ten 100-toss games of heads or tails, with the following results (number of heads per game): 55, 49, 49, 43, 43, 54, 47, 47, 53, 52. Not a single game yielded exactly 50 heads, and heads came up 492 times (not 500) in 1,000 tosses.
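For readers who want to try that kind of experiment outside a spreadsheet, here is a minimal sketch in Python (my choice of tool, not anything used for the original tally); the standard library’s pseudorandom generator stands in for Excel’s RANDBETWEEN, and the particular counts it prints will differ from run to run and from the numbers reported above.

import random

# A rough stand-in for the spreadsheet experiment described above: ten games
# of 100 simulated coin tosses, counting heads in each game. Fix the seed to
# make a run reproducible; omit it to let the results vary, which is the point.
random.seed(2016)

games = 10
tosses_per_game = 100

heads_per_game = [
    sum(random.randint(0, 1) for _ in range(tosses_per_game))
    for _ in range(games)
]

print("Heads per game:", heads_per_game)
print("Total heads in", games * tosses_per_game, "tosses:", sum(heads_per_game))

Run it a few times without the seed and few, if any, games land on exactly 50 heads — which is the sense in which 0.5 is a long-run approximation rather than a guarantee.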

What is the point of a probability statement? What is it good for? It lets you know what to expect over the long run, for a large number of repetitions of a strictly defined event. Change the definition of the event, even slightly, and you can “probably” kiss its probability goodbye.

*     *     *

Related posts:
Fooled by Non-Randomness
Randomness Is Over-Rated
Beware the Rare Event
Some Thoughts about Probability
My War on the Misuse of Probability
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming

Not Just for Baseball Fans

I have substantially revised “Bigger, Stronger, and Faster — But Not Quicker?” I set out to test Dr. Michael Woodley’s hypothesis that reaction times have slowed since the Victorian era:

It seems to me that if Woodley’s hypothesis has merit, it ought to be confirmed by the course of major-league batting averages over the decades. Other things being equal, quicker reaction times ought to produce higher batting averages. Of course, there’s a lot to hold equal, given the many changes in equipment, playing conditions, player conditioning, “style” of the game (e.g., greater emphasis on home runs), and other key variables over the course of more than a century.

I conclude that my analysis

says nothing definitive about reaction times, even though it sheds a lot of light on the relative hitting prowess of American League batters over the past 116 years. (I’ll have more to say about that in a future post.)

It’s been great fun but it was just one of those things.

Sandwiched between those statements you’ll find much statistical meat (about baseball) to chew on.

Not-So Random Thoughts (XIX)

ITEM ADDED 12/18/16

Manhattan Contrarian takes on the partisan analysis of economic growth offered by Alan Blinder and Mark Watson, and endorsed (predictably) by Paul Krugman. Eight years ago, I took on an earlier analysis along the same lines by Dani Rodrik, which Krugman (predictably) endorsed. In fact, bigger government, which is the growth mantra of economists like Blinder, Watson, Rodrik, and (predictably) Krugman, is anti-growth. The combination of spending, which robs the private sector of resources, and regulations, which rob the private sector of options and initiative, is killing economic growth. You can read about it here.

*     *     *

Rania Gihleb and Kevin Lang say that assortative mating hasn’t increased. But even if it had, so what?

Is there a potential social problem that will have to be dealt with by government because it poses a severe threat to the nation’s political stability or economic well-being? Or is it just a step in the voluntary social evolution of the United States — perhaps even a beneficial one?

In fact,

The best way to help the people … of Charles Murray’s Fishtown [of Coming Apart] — is to ignore the smart-educated-professional-affluent class. It’s a non-problem…. The best way to help the forgotten people of America is to unleash the latent economic power of the United States by removing the dead hand of government from the economy.

*     *     *

Anthropogenic global warming (AGW) is a zombie-like creature of pseudo-science. I’ve rung its death knell, as have many actual scientists. But it keeps coming back. Perhaps President Trump will drive a stake through its heart — or whatever is done to extinguish zombies. In the meantime, here’s more evidence that AGW is a pseudo-scientific hoax:

In conclusion, this synthesis of empirical data reveals that increases in the CO2 concentration has not caused temperature change over the past 38 years across the Tropics-Land area of the Globe. However, the rate of change in CO2 concentration may have been influenced to a statistically significant degree by the temperature level.

And still more:

[B]ased on [Patrick] Frank’s work, when considering the errors in clouds and CO2 levels only, the error bars around that prediction are ±15˚C. This does not mean—thankfully—that it could be 19˚ warmer in 2100. Rather, it means the models are looking for a signal of a few degrees when they can’t differentiate within 15˚ in either direction; their internal errors and uncertainties are too large. This means that the models are unable to validate even the existence of a CO2 fingerprint because of their poor resolution, just as you wouldn’t claim to see DNA with a household magnifying glass.

And more yet:

[P]oliticians using global warming as a policy tool to solve a perceived problem is indeed a hoax. The energy needs of humanity are so large that Bjorn Lomborg has estimated that in the coming decades it is unlikely that more than about 20% of those needs can be met with renewable energy sources.

Whether you like it or not, we are stuck with fossil fuels as our primary energy source for decades to come. Deal with it. And to the extent that we eventually need more renewables, let the private sector figure it out. Energy companies are in the business of providing energy, and they really do not care where that energy comes from….

Scientists need to stop mischaracterizing global warming as settled science.

I like to say that global warming research isn’t rocket science — it is actually much more difficult. At best it is dodgy science, because there are so many uncertainties that you can get just about any answer you want out of climate models just by using those uncertainties as a tuning knob.

*     *     *

Well, that didn’t take long. Law professor Geoffrey Stone said something reasonable a few months ago. Now he’s back to his old, whiny, “liberal” self. Because the Senate failed to take up the nomination of Merrick Garland to fill Antonin Scalia’s seat on the Supreme Court (which is the Senate’s constitutional prerogative), Stone characterizes the action (or lack of it) as a “constitutional coup d’etat” and claims that the eventual Trump nominee will be an “illegitimate interloper.” Ed Whelan explains why Stone is wrong here, and adds a few cents’ worth here.

*     *     *

BHO stereotypes Muslims by asserting that

Trump’s proposal to bar immigration by Muslims would make Americans less safe. How? Because more Muslims would become radicalized and acts of terrorism would therefore become more prevalent. Why would there be more radicalized Muslims? Because the Islamic State (IS) would claim that America has declared war on Islam, and this would not only anger otherwise peaceful Muslims but draw them to IS. Therefore, there shouldn’t be any talk of barring immigration by Muslims, nor any action in that direction….

Because Obama is a semi-black leftist — and “therefore” not a racist — he can stereotype Muslims with impunity. To put it another way, Obama can speak the truth about Muslims without being accused of racism (though he’d never admit to the truth about blacks and violence).

It turns out, unsurprisingly, that there’s a lot of truth in stereotypes:

A stereotype is a preliminary insight. A stereotype can be true, the first step in noticing differences. For conceptual economy, stereotypes encapsulate the characteristics most people have noticed. Not all heuristics are false.

Here is a relevant paper from Denmark.

Emil O. W. Kirkegaard and Julius Daugbjerg Bjerrekær. Country of origin and use of social benefits: A large, preregistered study of stereotype accuracy in Denmark. Open Differential Psychology….

The high accuracy of aggregate stereotypes is confirmed. If anything, the stereotypes held by Danish people about immigrants underestimates those immigrants’ reliance on Danish benefits.

Regarding stereotypes about the criminality of immigrants:

Here is a relevant paper from the United Kingdom.

Noah Carl. NET OPPOSITION TO IMMIGRANTS OF DIFFERENT NATIONALITIES CORRELATES STRONGLY WITH THEIR ARREST RATES IN THE UK. Open Quantitative Sociology and Political Science. 10th November, 2016….

Public beliefs about immigrants and immigration are widely regarded as erroneous. Yet popular stereotypes about the respective characteristics of different groups are generally found to be quite accurate. The present study has shown that, in the UK, net opposition to immigrants of different nationalities correlates strongly with the log of immigrant arrests rates and the log of their arrest rates for violent crime.

The immigrants in question, in both papers, are Muslims — for what it’s worth.

*     *     *

ADDED 12/18/16:

I explained the phoniness of the Keynesian multiplier here, derived a true (strongly negative) multiplier here, and added some thoughts about the multiplier here. Economist Scott Sumner draws on the Japanese experience to throw more cold water on Keynesianism.

Hayek’s Anticipatory Account of Consciousness

I have almost finished reading F.A. Hayek’s The Sensory Order, which was originally published in 1952. Chapter VI is “Consciousness and Conceptual Thought.” In the section headed “The Functions of Consciousness,” Hayek writes:

6.29.  …[I]t will be the pre-existing excitatory state of the higher centres [of the central nervous system] which will decide whether the evaluation of the new impulses [arising from stimuli external to the higher centres] will be of the kind characteristic of attention or consciousness. It will depend on the predisposition (or set) how fully the newly arriving impulses will be evaluated or whether they will be consciously perceived, and what the responses to them will be.

6.30.  It is probable that the processes in the highest centres which become conscious require the continuous support from nervous impulses originating at some source within the nervous system itself, such as the ‘wakefulness center’ for whose existence a considerable amount of physiological evidence has been found. If this is so, it would seem probable also that it is these reinforcing impulses which, guided by the expectations evoked by pre-existing conditions, prepare the ground and decide on which of the new impulses the searchlight beam of full consciousness and attention will be focused. The stream of impulses which is thus strengthened becomes capable of dominating the processes in the highest centre, and of overruling and shutting out from full consciousness all the sensory signals which do not belong to the object on which attention is fixed, and which are not themselves strong enough (or perhaps not sufficiently in conflict with the underlying outline picture of the environment) to attract attention.

6.31.  There would thus appear to exist within the central nervous system a highest and most comprehensive center at which at any one time only a limited group of coherent processes can be fully evaluated; where all these processes are related to the same spatial and temporal framework; where the ‘abstract’ or generic relations form a closely knit order in which individual objects are placed; and where, in addition, a close connexion with the instruments of communication has not only contributed a further and very powerful means of classification, but has also made it possible for the individual to participate in a social or conventional representation of the world which he shares with his fellows.

Now, 64 years later, comes a report which I first saw in an online article by Fiona MacDonald, “Harvard Scientists Think They’ve Pinpointed the Physical Source of Consciousness” (Science Alert, November 8, 2016):

Scientists have struggled for millennia to understand human consciousness – the awareness of one’s existence. Despite advances in neuroscience, we still don’t really know where it comes from, and how it arises.

But researchers think they might have finally figured out its physical origins, after pinpointing a network of three specific regions in the brain that appear to be crucial to consciousness.

It’s a pretty huge deal for our understanding of what it means to be human, and it could also help researchers find new treatments for patients in vegetative states.

“For the first time, we have found a connection between the brainstem region involved in arousal and regions involved in awareness, two prerequisites for consciousness,” said lead researcher Michael Fox from the Beth Israel Deaconess Medical Centre at Harvard Medical School.

“A lot of pieces of evidence all came together to point to this network playing a role in human consciousness.”

Consciousness is generally thought of as being comprised of two critical components – arousal and awareness.

Researchers had already shown that arousal is likely regulated by the brainstem – the portion of the brain that links up with the spinal cord – seeing as it regulates when we sleep and wake, and our heart rate and breathing.

Awareness has been more elusive. Researchers have long thought that it resides somewhere in the cortex – the outer layer of the brain – but no one has been able to pinpoint where.

Now the Harvard team has identified not only the specific brainstem region linked to arousal, but also two cortex regions, that all appear to work together to form consciousness.

A full account of the research is given by David B. Fischer M.D. et al. in “A Human Brain Network Derived from Coma-Causing Brainstem Lesions” (Neurology, published online November 4, 2016, ungated version available here).

Hayek isn’t credited in the research paper. But he should be, for pointing the way to a physiological explanation of consciousness that finds it centered in the brain and not in that mysterious emanation called “mind.”

The IQ of Nations

In a twelve-year-old post, “The Main Causes of Prosperity,” I drew on statistics (sourced and described in the post) to find a statistically significant relationship between a nation’s real, per-capita GDP and three variables:

Y = –23,518 + 2,316L – 259T + 253I

Where,
Y = real, per-capita GDP in 1998 dollars (U.S.)
L = Index for rule of law
T = Index for mean tariff rate
I = Verbal IQ

The r-squared of the regression equation is 0.89 and the p-values for the intercept and independent variables are 8.52E-07, 4.70E-10, 1.72E-04, and 3.96E-05.
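For anyone curious about the mechanics behind an equation like that, here is a minimal sketch using Python’s numpy and statsmodels (my choice of tools, not the ones used for the original estimate). The data it generates are synthetic placeholders built around the coefficients reported above — not the actual country dataset — so only the procedure, not the output, should be taken seriously.

import numpy as np
import statsmodels.api as sm

# Synthetic placeholder data (NOT the original dataset): random draws for the
# three predictors, plus a noisy response built from the reported coefficients,
# purely to make the example runnable. The scales are assumptions.
rng = np.random.default_rng(0)
n = 159
L = rng.uniform(1, 10, n)      # rule-of-law index (assumed scale)
T = rng.uniform(0, 40, n)      # mean tariff rate (assumed scale)
IQ = rng.normal(90, 10, n)     # verbal IQ, written IQ here instead of I
Y = -23518 + 2316 * L - 259 * T + 253 * IQ + rng.normal(0, 3000, n)

# Ordinary least squares: Y = b0 + b1*L + b2*T + b3*I
X = sm.add_constant(np.column_stack([L, T, IQ]))
fit = sm.OLS(Y, X).fit()
print(fit.rsquared)   # analogous to the 0.89 reported above (synthetic data here)
print(fit.pvalues)    # p-values for the intercept and the three coefficients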

The effect of IQ, by itself, is strong enough to merit a place of honor:

[Figure: per-capita GDP vs. average verbal IQ]

Another relationship struck me when I revisited the IQ numbers. There seems to be a strong correlation between IQ and distance from the equator. That correlation, however, may be an artifact of the strong (negative) correlation between blackness and IQ: The countries whose citizens are predominantly black are generally closer to the equator than the countries whose citizens are predominantly of other races.

Because of the strong (negative) correlation between blackness and IQ, and the geographic grouping of predominantly black countries, it’s not possible to find a statistically significant regression equation that accounts for national IQ as a function of the distance of nations from the equator and their dominant racial composition.
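A short way to see the problem, sketched in Python with made-up numbers rather than the actual data: when the black-country dummy and distance from the equator move together, a regression that includes both cannot cleanly attribute the variation in IQ to either one.

import numpy as np

# Synthetic placeholders only: a dummy for predominantly black countries and a
# distance-from-equator variable constructed so that the two move together, as
# the text describes. The correlation printed at the end is the collinearity
# that inflates the standard errors when both predictors enter one equation.
rng = np.random.default_rng(1)
n = 159
black = rng.binomial(1, 0.25, n)            # 1 = predominantly black country
distance = np.where(black == 1,
                    rng.uniform(0, 15, n),  # mostly near the equator
                    rng.uniform(5, 60, n))  # mostly farther from it

print("correlation(black, distance):", np.corrcoef(black, distance)[0, 1])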

The most significant regression equation omits distance from the equator and admits race:

I = 84.0 – 13.2B + 12.4W + 20.7EA

Where,
I = national average IQ
B = predominantly black
W = predominantly white (i.e., residents are European or of European origin)
EA = East Asian (China, Hong Kong, Japan, Mongolia, South Korea, Taiwan, and Singapore, which is largely populated by persons of Chinese descent)

The r-squared of the equation is 0.78 and the p-values of the intercept and coefficients are all less than 1E-17. The significance F of the equation (the p-value of its F-statistic) is 8.24E-51. The standard error of the estimate is 5.6, which means that the 95-percent confidence interval is plus or minus 11 — a smaller number than any of the coefficients.
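Here is a comparable sketch for the dummy-variable equation, again in Python with synthetic placeholder data generated from the reported coefficients rather than the actual country sample; the point is only to show how the intercept and the B, W, and EA offsets come out of an ordinary least-squares fit.

import numpy as np
import statsmodels.api as sm

# Synthetic placeholder data (NOT the original sample): assign each of 159
# "countries" to a group, build the three dummies, and generate IQs from the
# reported coefficients plus noise with the reported standard error. The group
# shares below are assumptions made only for illustration.
rng = np.random.default_rng(2)
groups = rng.choice(["other", "black", "white", "east_asian"], size=159,
                    p=[0.42, 0.25, 0.25, 0.08])
B = (groups == "black").astype(float)
W = (groups == "white").astype(float)
EA = (groups == "east_asian").astype(float)
iq = 84.0 - 13.2 * B + 12.4 * W + 20.7 * EA + rng.normal(0, 5.6, len(groups))

# OLS fit of I = b0 + b1*B + b2*W + b3*EA; "other" is the omitted category,
# so the intercept estimates the mean IQ of the "other" group.
X = sm.add_constant(np.column_stack([B, W, EA]))
fit = sm.OLS(iq, X).fit()
print(fit.params)            # intercept and the three group offsets
print(fit.rsquared, fit.pvalues)
print(np.sqrt(fit.scale))    # residual standard error of the estimate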

The intercept applies to all “other” countries that aren’t predominantly black, white, or East Asian in their racial composition. There are 66 such countries in the sample, which comprises 159 countries. The 66 “other” countries span the Middle East; North Africa; South Asia; Southeast Asia; island-states in the Indian, Pacific, and Atlantic Oceans; and most of the nations of Central and South America and the Caribbean. Despite the range of racial and ethnic mixtures in those 66 countries, their average IQs cluster fairly tightly around 84. By the same token, there’s a definite clustering of the black countries around 71 (84.0 – 13.2), of the white countries around 96 (84.0 + 12.4), and of the East Asian countries around 105 (84.0 + 20.7).

Thus this graph, where each “row” (from bottom to top) corresponds to black, “other,” white, and East Asian:

[Figure: estimated vs. actual national IQ]

The dotted line represents a perfect correlation. The regression yields a less-than-perfect relationship between race and IQ, but a strong one. That strong relationship is also seen in the following graph:

[Figure: national IQ vs. distance from the equator]

There’s a definite pattern — if a somewhat loose one — that goes from low-IQ black countries near the equator to higher-IQ white countries farther from the equator. The position of East Asian countries, which is toward the middle latitudes rather than the highest ones, points to something special in the relationship between East Asian genetic heritage and IQ.

*     *     *

Related posts:
Race and Reason: The Victims of Affirmative Action
Race and Reason: The Achievement Gap — Causes and Implications
“Conversing” about Race
Evolution and Race
“Wading” into Race, Culture, and IQ
Evolution, Culture, and “Diversity”
The Harmful Myth of Inherent Equality
Let’s Have That “Conversation” about Race

Words Fail Us

Regular readers of this blog know that I seldom use “us” and “we.” Those words are too often appropriated by writers who say such things as “we the people,” and who characterize as “society” the geopolitical entity known as the United States. There is no such thing as “we the people,” and the United States is about as far from being a “society” as Hillary Clinton is from being president (I hope).

There are nevertheless some things that are so close to being universal that it’s fair to refer to them as characteristics of “us” and “we.” The inadequacy of language is one of those things.

Why is that the case? Try to describe in words a person who is beautiful or handsome to you, and why. It’s hard to do, if not impossible. There’s something about the combination of that person’s features, coloring, expression, etc., that defies anything like a complete description. You may have an image of that person in your mind, and you may know that — to you — the person is beautiful or handsome. But you just can’t capture in words all of those attributes. Why? Because the person’s beauty or handsomeness is a whole thing. It’s everything taken together, including subtle things that nestle in your subconscious mind but don’t readily swim to the surface. One such thing could be the relative size of the person’s upper and lower lips in the context of that particular person’s face; whereas, the same lips on another face might convey plainness or ugliness.

Words are inadequate because they describe one thing at a time — the shape of a nose, the slant of a brow, the prominence of a cheekbone. And the sum of those words isn’t the same thing as your image of the beautiful or handsome person. In fact, the sum of those words may be meaningless to a third party, who can’t begin to translate your words into an image of the person you think of as beautiful or handsome.

Yes, there are (supposedly) general rules about beauty and handsomeness. One of them is the symmetry of a person’s features. But that leaves a lot of ground uncovered. And it focuses on one aspect of a person’s face, rather than all of its aspects, which are what you take into account when you judge a person beautiful or handsome.

And, of course, there are many disagreements about who is beautiful or handsome. It’s a matter of taste. Where does the taste come from? Who knows? I have a theory about why I prefer dark-haired women to women whose hair is blonde, red, or medium-to-light brown: My mother was dark-haired, and photographs of her show that she was beautiful (in my opinion) as a young woman. (Despite that, I never thought of her as beautiful because she was just Mom to me.) You can come up with your own theories — and I expect that no two of them will be the same.

What about facts? Isn’t it possible to put facts into words? Not really, and for much the same reason that it’s impossible to describe beauty, handsomeness, love, hate, or anything “subjective” or “emotional.” Facts, at bottom, are subjective, and sometimes even emotional.

Let’s take a “fact” at random: the color red. We can all agree as to whether something looks red, can’t we? Even putting aside people who are color-blind, the answer is: not necessarily. For one thing, red is defined as having a “predominant light wavelength of roughly 620–740 nanometers.” “Predominant” and “roughly” are weasel-words. Clearly, there’s no definite point on the visible spectrum where light changes from orange to red. If you think there is, just look at this chart and tell me where it happens. So red comes in shades, which various people describe variously: orange-red and reddish-orange, for example.

Not only that, but the visible spectrum

does not … contain all the colors that the human eyes and brain can distinguish. Unsaturated colors such as pink, or purple variations such as magenta, are absent, for example, because they can be made only by a mix of multiple wavelengths.

Thus we have magenta, fuchsia, blood-red, scarlet, crimson, vermillion, maroon, ruby, and even the many shades of pink — some are blends, some are represented by narrow segments of the light spectrum. Do all of those kinds of red have a clear definition, or are they defined by the beholder? Well, some may be easy to distinguish from others, but the distinctions between them remain arbitrary. Where does scarlet or magenta become vermillion?

In any event, how do you describe a color (whatever you call it) in words? Referring to its wavelength or composition in terms of other colors or its relation to other colors is no help. Wavelength really is meaningless unless you can show an image of the visible spectrum to someone who perceives colors exactly as you do, and point to red — or what you call red. In doing so, you will have pointed to a range of colors, not to red, because there is no red red and no definite boundary between orange and red (or yellow and orange, or green and yellow, etc.).

Further, you won’t have described red in words. And you can’t — without descending into tautologies — because red (as you visualize it) is what’s in your mind. It’s not an objective fact.

My point is that description isn’t the same as definition. You can define red (however vaguely) as a color which has a predominant light wavelength of roughly 620–740 nanometers. But you can’t describe it. Why? Because red is just a concept.

A concept isn’t a real thing that you can see, hear, taste, touch, smell, eat, drink from, drive, etc. How do you describe a concept? You define it in terms of other concepts.

Moving on from color, I’ll take gross domestic product (GDP) as another example. GDP is an estimate of the dollar value of the output of finished goods and services produced in the United States during a particular period of time. Wow, what a string of concepts. And every one of them must be defined, in turn. Some of them can be illustrated by referring to real things; a haircut is a kind of service, for example. But it’s impossible to describe GDP and its underlying concepts because they’re all abstractions, or representations of indescribable conglomerations of real things.

All right, you say, it’s impossible to describe concepts, but surely it’s possible to describe things. People do it all the time. See that ugly, dark-haired, tall guy standing over there? I’ve already dealt with ugly, indirectly, in my discussion of beauty or handsomeness. Ugliness, like beauty, is just a concept, the idea of which differs from person to person. What about tall? It’s a relative term, isn’t it? You can measure a person’s height, but whether or not you consider him tall depends on where and when you live and the range of heights you’re used to encountering. A person who seems tall to you may not seem tall to your taller brother. Dark-haired will evoke different pictures in different minds — ranging from jet-black to dark brown and even auburn.

But if you point to the guy you call ugly, dark-haired, tall guy, I may agree with you that he’s ugly, dark-haired, and tall. Or I may disagree with you, but gain some understanding of what you mean by ugly, dark-haired, and tall.

And therein lies the tale of how people are able to communicate with each other, despite their inability to describe concepts or to define them without going in endless circles and chains of definitions. First, human beings possess central nervous systems and sensory organs that are much alike, though within a wide range of variations (e.g., many people must wear glasses with an almost-infinite variety of corrections, hearing aids are programmed to an almost-infinite variety of settings, sensitivity to touch varies widely, reaction times vary widely). Nevertheless, most people seem to perceive the same color when light with a wavelength of, say, 700 nanometers strikes the retina. The same goes for sounds, tastes, smells, etc., as various external stimuli are detected by various receptors. Those perceptions then acquire agreed definitions through acculturation. For example, an object that reflects light with a wavelength of 700 nanometers becomes known as red; a sound with a certain frequency becomes known as middle C; a certain taste is characterized as bitter, sweet, or sour.

Objects acquire names in the same way: for example, a square piece of cloth that’s wrapped around a person’s head or neck becomes a bandana, and a longish, curved, yellow-skinned fruit with a soft interior becomes a banana. And so I can visualize a woman wearing a red bandana and eating a banana.

There is less agreement about “soft” concepts (e.g., beauty) because they’re based not just on “hard” facts (e.g., the wavelength of light), but on judgments that vary from person to person. A face that’s cute to one person may be beautiful to another person, but there’s no rigorous division between cute and beautiful. Both convey a sense of physical attractiveness that many persons will agree upon, but which won’t yield a consistent image. A very large percentage of Caucasian males (of a certain age) would agree that Ingrid Bergman and Hedy Lamarr were beautiful, but there’s nothing like a consensus about Katharine Hepburn (perhaps striking but not beautiful) or Jean Arthur (perhaps cute but not beautiful).

Other concepts, like GDP, acquire seemingly rigorous definitions, but they’re based on strings of seemingly rigorous definitions, the underpinnings of which may be as squishy as the flesh of a banana (e.g., the omission of housework and the effects of pollution from GDP). So if you’re familiar with the definitions of the definitions, you have a good grasp of the concepts. If you aren’t, you don’t. But if you have a good grasp of the numbers underlying the definitions of definitions, you know that the top-level concept is actually vague and hard to pin down. The numbers not only omit important things but are only estimates, and often are estimates of disparate things that are grouped because they’re judged to be “alike enough.”

Acculturation in the form of education is a way of getting people to grasp concepts that have widely agreed definitions. Mathematics, for example, is nothing but concepts, all the way down. And to venture beyond arithmetic is to venture into a world of ideas that’s held together by definitions that rest upon definitions and end in nothing real. Unless you’re one of those people who insists that mathematics is the “real” stuff of which the universe is made, which is nothing more than a leap of faith. (Math, by the way, is nothing but words in shorthand.)

And so, human beings are able to communicate and (usually) understand each other because of their physical and cultural similarities, which include education in various and sundry subjects. Those similarities also enable people of different cultures and languages to translate their concepts (and the words that define them) from one language to another.

Those similarities also enable people to “feel” what another person is feeling when he says that he’s happy, sad, drunk, or whatever. There’s the physical similarity — the physiological changes that usually occur when a person becomes what he thinks of as happy, etc. And there’s acculturation — the acquired knowledge that people feel happy (or whatever) for certain reasons (e.g., a marriage, the birth of a child) and display their happiness in certain ways (e.g., a broad smile, a “jump for joy”).

A good novelist, in my view, is one who knows how to use words that evoke vivid mental images of the thoughts, feelings, and actions of characters, and the settings in which the characters act out the plot of a novel. A novelist who can do that and also tell a good story — one with an engaging or suspenseful plot — is thereby a great novelist. I submit that a good or great novelist (an admittedly vague concept) is worth almost any number of psychologists and psychiatrists, whose vision of the human mind is too rigid to grasp the subtleties that give it life.

But good and great novelists are thin on the ground. That is to say, there are relatively few persons among us who are able to grasp and communicate effectively a broad range of the kinds of thoughts and feelings that lurk in the minds of human beings. And even those few have their blind spots. Most of them, it seems to me, are persons of the left, and are therefore unable to empathize with the thoughts and feelings of the working-class people who seethe with resentment about fawning over and favoritism toward blacks, illegal immigrants, gender-confused persons, and other so-called victims. In fact, those few otherwise perceptive and articulate writers make it a point to write off the working-class people as racists, bigots, and ignoramuses.

There are exceptions, of course. A contemporary exception is Tom Wolfe. But his approach to class issues is top-down rather than bottom-up.

Which just underscores my point that we human beings find it hard to formulate and organize our own thoughts and feelings about the world around us and the other people in it. And we’re practically tongue-tied when it comes to expressing those thoughts and feelings to others. We just don’t know ourselves well enough to explain ourselves to others. And our feelings — such as our political preferences, which probably are based more on temperament than on facts — get in the way.

Love, to take a leading example, is a feeling that just is. The why and wherefore of it is beyond our ability to understand and explain. Some of the feelings attached to it can be expressed in prose, poetry, and song, but those are superficial expressions that don’t capture the depth of love and why it exists.

The world of science is of no real help. Even if feelings of love could be expressed in scientific terms — the action of hormone A on brain region X — that would be worse than useless. It would reduce love to chemistry, when we know that there’s more to it than that. Why, for example, is hormone A activated by the presence or thought of person M but not person N, even when they’re identical twins?

The world of science is of no real help in “getting to the bottom of things.” Science is an infinite regress. S is explained in terms of T, which is explained in terms of U, which is explained in terms of V, and on and on. For example, there was the “indivisible” atom, which turned out to consist of electrons, protons, and neutrons. But electrons have turned out to be more complicated than originally believed, and protons and neutrons have been found to be made of smaller particles with distinctive characteristics. So it’s reasonable to ask if all of the particles now considered elementary are really indivisible. Perhaps there are other, more-elementary particles yet to be hypothesized and discovered. And even if all of the truly elementary particles are discovered, scientists will still be unable to explain what those particles really “are.”

Words fail us.

*      *      *

Related reading:
Modeling Is Not Science
Physics Envy
What Is Truth?
The Improbability of Us
A Digression about Probability and Existence
More about Probability and Existence
Existence and Creation
We, the Children of the Enlightenment
Probability, Existence, and Creation
The Atheism of the Gaps
Demystifying Science
Scientism, Evolution, and the Meaning of Life
Mysteries: Sacred and Profane
Pinker Commits Scientism
Spooky Numbers, Evolution, and Intelligent Design
Mind, Cosmos, and Consciousness
The Limits of Science (II)
The Pretence of Knowledge
“The Science Is Settled”
“Settled Science” and the Monty Hall Problem
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
Is Science Self-Correcting?
“Feelings, Nothing More than Feelings”
Taleb’s Ruinous Rhetoric

Intelligence, Assortative Mating, and Social Engineering

UPDATED 11/18/16 (AT THE END)

What is intelligence? Why does it matter in “real life”? Are intelligence-driven “real life” outcomes — disparities in education and income — driving Americans apart? In particular, is the intermarriage of smart, educated professionals giving rise to a new hereditary class whose members have nothing in common with less-intelligent, poorly educated Americans, who will fall farther and farther behind economically? And if so, what should be done about it, if anything?

INTELLIGENCE AND WHY IT MATTERS IN “REAL LIFE”

Thanks to a post at Dr. James Thompson’s blog, Psychological comments, I found Dr. Linda Gottfredson’s paper, “Why g Matters: The Complexity of Everyday Life” (Intelligence 24:1, 79-132, 1997). The g factor — or just plain g — is general intelligence. I quote Gottfredson’s article at length because it makes several key points about intelligence and why it matters in “real life.” For ease of reading, I’ve skipped over the many citations and supporting tables that lend authority to the article.

[W]hy does g have such pervasive practical utility? For example, why is a higher level of g a substantial advantage in carpentry, managing people, and navigating vehicles of all kinds? And, very importantly, why do those advantages vary in the ways they do? Why is g more helpful in repairing trucks than in driving them for a living? Or more for doing well in school than staying out of trouble?…

Also, can we presume that similar activities in other venues might be similarly affected by intelligence? For example, if differences in intelligence change the odds of effectively managing and motivating people on the job, do they also change the odds of successfully dealing with one’s own children? If so, why, and how much?

The heart of the argument I develop here is this: For practical purposes, g is the ability to deal with cognitive complexity — in particular, with complex information processing. All tasks in life involve some complexity, that is, some information processing. Life tasks, like job duties, vary greatly in their complexity (g loadedness). This means that the advantages of higher g are large in some situations and small in others, but probably never zero….

Although researchers disagree on how they define intelligence, there is virtual unanimity that it reflects the ability to reason, solve problems, think abstractly, and acquire knowledge. Intelligence is not the amount of information people know, but their ability to recognize, acquire, organize, update, select, and apply it effectively. In educational contexts, these complex mental behaviors are referred to as higher order thinking skills.

Stated at a more molecular level, g is the ability to mentally manipulate information — “to fill a gap, turn something over in one’s mind, make comparisons, transform the input to arrive at the output”….

[T]he active ingredient in test items seems to reside in their complexity. Any kind of item content (words, numbers, figures, pictures, symbols, blocks, mazes, and so on) can be used to create less to more g-loaded tests and test items. Differences in g loading seem to arise from variations in items’ cognitive complexity and thus the amount of mental manipulation they require….

Life is replete with uncertainty, change, confusion, and misinformation, sometimes minor and at times massive. From birth to death, life continually requires us to master abstractions, solve problems, draw inferences, and make judgments on the basis of inadequate information. Such demands may be especially intense in school, but they hardly cease when one walks out the school door. A close look at job duties in the workplace shows why….

When job analysis data for any large set of jobs are factor analyzed, they always reveal the major distinction among jobs to be the mental complexity of the work they require workers to perform. Arvey’s job analysis is particularly informative in showing that job complexity is quintessentially a demand for g….

Not surprisingly, jobs high in overall complexity require more education, .86 and .88, training, .76 and .51, and experience, .62 — and are viewed as the most prestigious, .82. These correlations have sometimes been cited in support of the training hypothesis discussed earlier, namely, that sufficient training can render differences in g moot.

However, prior training and experience in a job never fully prepare workers for all contingencies. This is especially so for complex jobs, partly because they require workers to continually update job knowledge, .85. As already suggested, complex tasks often involve not only the appropriate application of old knowledge, but also the quick apprehension and use of new information in changing environments….

Many of the duties that correlate highly with overall job complexity suffuse our lives: advising, planning, negotiating, persuading, supervising others, to name just a few….

The National Adult Literacy Survey (NALS) of 26,000 persons aged 16 and older is one in a series of national literacy assessments developed by the Educational Testing Service (ETS) for the U.S. Department of Education. It is a direct descendent, both conceptually and methodologically, of the National Assessment of Educational Progress (NAEP) studies of reading among school-aged children and literacy among adults aged 21 to 25.

NALS, like its NAEP predecessors, is extremely valuable in understanding the complexity of everyday life and the advantages that higher g provides. In particular, NALS provides estimates of the proportion of adults who are able to perform everyday tasks of different complexity levels….

A look at the items in Figure 2 reveals their general relevance to social life. These are not obscure skills or bits of knowledge whose value is limited to academic pursuits. They are skills needed to carry out routine transactions with banks, social welfare agencies, restaurants, the post office, and credit card agencies; to understand contrasting views on public issues (fuel efficiency, parental involvement in schools); and to comprehend the events of the day (sports stories, trends in oil exports) and one’s personal options (welfare benefits, discount for early payment of bills, relative merits between two credit cards)….

[A]lthough the NALS items represent skills that are valuable in themselves, they are merely samples from broad domains of such skill. As already suggested, scores on the NALS reflect people’s more general ability (the latent trait) to master on a routine basis skills of different information-processing complexity….

[I]ndeed, the five levels of NALS literacy are associated with very different odds of economic well-being….

Each higher level of proficiency substantially improves the odds of economic well-being, generally halving the percentage living in poverty and doubling the percentage employed in the professions or management….

The effects of intelligence, like other psychological traits, are probabilistic, not deterministic. Higher intelligence improves the odds of success in school and work. It is an advantage, not a guarantee. Many other things matter.

However, the odds disfavor low-IQ people just about everywhere they turn. The differences in odds are relatively small in some aspects of life (law-abidingness), moderate in some (income), and large in others (educational, occupational attainment). But they are consistent. At a minimum (say, under conditions of simple tasks and equal prior knowledge), higher levels of intelligence act like the small percentage (2.7%) favoring the house in roulette at Monte Carlo — it yields enormous gains over the long run. Similarly, all of us make stupid mistakes from time to time, but higher intelligence helps protect us from accumulating a long, debilitating record of them.

To mitigate unfavorable odds attributable to low IQ, an individual must have some equally pervasive compensatory advantage: family wealth, winning personality, enormous resolve, strength of character, an advocate or benefactor, and the like. Such compensatory advantages may frequently soften but probably never eliminate the cumulative impact of low IQ. Conversely, high IQ acts like a cushion against some of life’s adverse circumstances, perhaps partly accounting for why some children are more resilient than others in the face of deprivation and abuse….

For the top 5% of the population (over IQ 125), success is really “yours to lose.” These people meet the minimum intelligence requirements of all occupations, are highly sought after for their extreme trainability, and have a relatively easy time with the normal cognitive demands of life. Their jobs are often high pressure, emotionally draining, and socially demanding …, but these jobs are prestigious and generally pay well. Although very high IQ individuals share many of the vicissitudes of life, such as divorce, illness, and occasional unemployment, they rarely become trapped in poverty or social pathology. They may be saints or sinners, healthy or unhealthy, content or emotionally troubled. They may or may not work hard and apply their talents to get ahead, and some will fail miserably. But their lot in life and their prospects for living comfortably are comparatively rosy.

There are, of course, multiple causes of different social and economic outcomes in life. However, g seems to be at the center of the causal nexus for many. Indeed, g is more important than social class background in predicting whether White adults obtain college degrees, live in poverty, are unemployed, go on welfare temporarily, divorce, bear children out of wedlock, and commit crimes.

There are many other valued human traits besides g, but none seems to affect individuals’ life chances so systematically and so powerfully in modern life as does g. To the extent that one is concerned about inequality in life chances, one must be concerned about differences in g….

Society has become more complex (and g loaded) as we have entered the information age and postindustrial economy. Major reports on the U.S. schools, workforce, and economy routinely argue, in particular, that the complexity of work is rising.

Where the old industrial economy rewarded mass production of standardized products for large markets, the new postindustrial economy rewards the timely customization and delivery of high-quality, convenient products for increasingly specialized markets. Where the old economy broke work into narrow, routinized, and closely supervised tasks, the new economy increasingly requires workers to work in cross-functional teams, gather information, make decisions, and undertake diverse, changing, and challenging sets of tasks in a fast-changing and dynamic global market….

Such reports emphasize that the new workplace puts a premium on higher order thinking, learning, and information-processing skills — in other words, on intelligence. Gone are the many simple farm and factory jobs where a strong back and willing disposition were sufficient to sustain a respected livelihood, regardless of IQ. Fading too is the need for highly developed perceptual-motor skills, which were once critical for operating and monitoring machines, as technology advances.

Daily life also seems to have become considerably more complex. For instance, we now have a largely moneyless economy (checkbooks, credit cards, and charge accounts) that requires more abstract thought, foresight, and complex management. More self-service, whether in banks or hardware stores, throws individuals back onto their own capabilities. We struggle today with a truly vast array of continually evolving complexities: the changing welter of social services across diverse, large bureaucracies; increasing options for health insurance, cable, and phone service; the steady flow of debate over health hazards in our food and environment; the maze of transportation systems and schedules; the mushrooming array of over-the-counter medicines in the typical drugstore; new technologies (computers) and forms of communication (cyberspace) for home as well as office.

Brighter individuals, families, and communities will be better able to capitalize on the new opportunities this increased complexity brings. The least bright will use them less effectively, if at all, and so fail to reap in comparable measure any benefits they offer. There is evidence that increasing proportions of individuals with below-average IQs are having trouble adapting to our increasingly complex modern life and that social inequality along IQ lines is increasing.

CHARLES MURRAY AND FISHTOWN VS. BELMONT

At the end of the last sentence, Gottfredson refers to Richard J. Herrnstein and Charles Murray’s The Bell Curve: Intelligence and Class Structure in American Life (1994). In a later book, Coming Apart: The State of White America, 1960-2010 (2012), Murray tackles the issue of social (and economic) inequality. Kay S. Hymowitz summarizes Murray’s thesis:

According to Murray, the last 50 years have seen the emergence of a “new upper class.” By this he means something quite different from the 1 percent that makes the Occupy Wall Streeters shake their pitchforks. He refers, rather, to the cognitive elite that he and his coauthor Richard Herrnstein warned about in The Bell Curve. This elite is blessed with diplomas from top colleges and with jobs that allow them to afford homes in Nassau County, New York and Fairfax County, Virginia. They’ve earned these things not through trust funds, Murray explains, but because of the high IQs that the postindustrial economy so richly rewards.

Murray creates a fictional town, Belmont, to illustrate the demographics and culture of the new upper class. Belmont looks nothing like the well-heeled but corrupt, godless enclave of the populist imagination. On the contrary: the top 20 percent of citizens in income and education exemplify the core founding virtues Murray defines as industriousness, honesty, marriage, and religious observance….

The American virtues are not doing so well in Fishtown, Murray’s fictional working-class counterpart to Belmont. In fact, Fishtown is home to a “new lower class” whose lifestyle resembles The Wire more than Roseanne. Murray uncovers a five-fold increase in the percentage of white male workers on disability insurance since 1960, a tripling of prime-age men out of the labor force—almost all with a high school degree or less—and a doubling in the percentage of Fishtown men working less than full-time….

Most disastrous for Fishtown residents has been the collapse of the family, which Murray believes is now “approaching a point of no return.” For a while after the 1960s, the working class hung on to its traditional ways. That changed dramatically by the 1990s. Today, under 50 percent of Fishtown 30- to 49-year-olds are married; in Belmont, the number is 84 percent. About a third of Fishtowners of that age are divorced, compared with 10 percent of Belmonters. Murray estimates that 45 percent of Fishtown babies are born to unmarried mothers, versus 6 to 8 percent of those in Belmont.

And so it follows: Fishtown kids are far less likely to be living with their two biological parents. One survey of mothers who turned 40 in the late nineties and early 2000s suggests the number to be only about 30 percent in Fishtown. In Belmont? Ninety percent—yes, ninety—were living with both mother and father….

For all their degrees, the upper class in Belmont is pretty ignorant about what’s happening in places like Fishtown. In the past, though the well-to-do had bigger houses and servants, they lived in towns and neighborhoods close to the working class and shared many of their habits and values. Most had never gone to college, and even if they had, they probably married someone who hadn’t. Today’s upper class, on the other hand, has segregated itself into tony ghettos where they can go to Pilates classes with their own kind. They marry each other and pool their incomes so that they can move to “Superzips”—the highest percentiles in income and education, where their children will grow up knowing only kids like themselves and go to college with kids who grew up the same way.

In short, America has become a segregated, caste society, with a born elite and an equally hereditary underclass. A libertarian, Murray believes these facts add up to an argument for limited government. The welfare state has sapped America’s civic energy in places like Fishtown, leaving a population of disengaged, untrusting slackers….

But might Murray lay the groundwork for fatalism of a different sort? “The reason that upper-middle-class children dominate the population of elite schools,” he writes, “is that the parents of the upper-middle class now produce a disproportionate number of the smartest children.” Murray doesn’t pursue this logic to its next step, and no wonder. If rich, smart people marry other smart people and produce smart children, then it follows that the poor marry—or rather, reproduce with—the less intelligent and produce less intelligent children. [“White Blight,” City Journal, January 25, 2012]

In the last sentence of that quotation, Hymowitz alludes to assortative mating.

ADDING 2 AND 2 TO GET ?

So intelligence is real; it’s not confined to “book learning”; it has a strong influence on one’s education, work, and income (i.e., class); and because of those things it leads to assortative mating, which (on balance) reinforces class differences. Or so the story goes.

But assortative mating is nothing new. What might be new, or more prevalent than in the past, is a greater tendency for intermarriage within the smart-educated-professional class instead of across class lines, and for the smart-educated-professional class to live in “enclaves” with their like, and to produce (generally) bright children who’ll (mostly) follow the lead of their parents.

How great are those tendencies? And in any event, so what? Is there a potential social problem that will have to be dealt with by government because it poses a severe threat to the nation’s political stability or economic well-being? Or is it just a step in the voluntary social evolution of the United States — perhaps even a beneficial one?

Is there a growing tendency toward intermarriage among the smart-educated-professional class? It depends on how you look at it. Here, for example, are excerpts of commentaries about a paper by Jeremy Greenwood et al., “Marry Your Like: Assortative Mating and Income Inequality” (American Economic Review, 104:5, 348-53, May 2014 — also published as NBER Working Paper 19289):

[T]he abstract is this:

Has there been an increase in positive assortative mating? Does assortative mating contribute to household income inequality? Data from the United States Census Bureau suggests there has been a rise in assortative mating. Additionally, assortative mating affects household income inequality. In particular, if matching in 2005 between husbands and wives had been random, instead of the pattern observed in the data, then the Gini coefficient would have fallen from the observed 0.43 to 0.34, so that income inequality would be smaller. Thus, assortative mating is important for income inequality. The high level of married female labor-force participation in 2005 is important for this result.

That is quite a significant effect. [Tyler Cowen, “Assortative Mating and Income Inequality,” Marginal Revolution, January 27, 2014]

__________

The wage gap between highly and barely educated workers has grown, but that could in theory have been offset by the fact that more women now go to college and get good jobs. Had spouses chosen each other at random, many well-paid women would have married ill-paid men and vice versa. Workers would have become more unequal, but households would not. With such “random” matching, the authors estimate that the Gini co-efficient, which is zero at total equality and one at total inequality, would have remained roughly unchanged, at 0.33 in 1960 and 0.34 in 2005.

But in reality the highly educated increasingly married each other. In 1960 25% of men with university degrees married women with degrees; in 2005, 48% did. As a result, the Gini rose from 0.34 in 1960 to 0.43 in 2005.

Assortative mating is hardly mysterious. People with similar education tend to work in similar places and often find each other attractive. On top of this, the economic incentive to marry your peers has increased. A woman with a graduate degree whose husband dropped out of high school in 1960 could still enjoy household income 40% above the national average; by 2005, such a couple would earn 8% below it. In 1960 a household composed of two people with graduate degrees earned 76% above the average; by 2005, they earned 119% more. Women have far more choices than before, and that is one reason why inequality will be hard to reverse. [The Economist, “Sex, Brains, and Inequality,” February 8, 2014]

__________

I’d offer a few caveats:

  • Comparing observed GINI with a hypothetical world in which marriage patterns are completely random is a bit misleading. Marriage patterns weren’t random in 1960 either, and the past popularity of “Cinderella marriages” is more myth than reality. In fact, if you look at the red diagonals [in the accompanying figures], you’ll notice that assortative mating has actually increased only modestly since 1960.
  • So why bother with a comparison to a random counterfactual? That’s a little complicated, but the authors mainly use it to figure out why 1960 is so different from 2005. As it turns out, they conclude that rising income inequality isn’t really due to a rise in assortative mating per se. It’s mostly due to the simple fact that more women work outside the home these days. After all, who a man marries doesn’t affect his household income much if his wife doesn’t have an outside job. But when women with college degrees all started working, it caused a big increase in upper class household incomes regardless of whether assortative mating had increased.
  • This can get to sound like a broken record, but whenever you think about rising income inequality, you always need to keep in mind that over the past three decades it’s mostly been a phenomenon of the top one percent. It’s unlikely that either assortative mating or the rise of working women has had a huge impact at those income levels, and therefore it probably hasn’t had a huge impact on increasing income inequality either. (However, that’s an empirical question. I might be wrong about it.)

[Kevin Drum, “No, the Decline of Cinderella Marriages Probably Hasn’t Played a Big Role in Rising Income Inequality,” Mother Jones, January 27, 2014]
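All three commentaries lean on the Gini coefficient, so it may help to make the statistic concrete. Here is a minimal sketch in Python, with made-up household incomes; the function and numbers are mine, purely illustrative, and are not drawn from Greenwood et al.:

    def gini(incomes):
        # Gini coefficient: 0 = every household earns the same, 1 = one household earns everything.
        xs = sorted(incomes)
        n, total = len(xs), sum(xs)
        # Standard rank-based formula applied to the sorted incomes.
        weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
        return weighted / (n * total)

    # Hypothetical household incomes (thousands of dollars), purely illustrative.
    mixed_matching = [40, 50, 60, 70, 80]     # earners paired across class lines
    assortative    = [20, 30, 60, 90, 100]    # high earners paired with high earners
    print(round(gini(mixed_matching), 2))     # about 0.13
    print(round(gini(assortative), 2))        # about 0.29

The same total income, sorted into more unequal households, yields a higher coefficient; that is the mechanism the Greenwood paper is quantifying.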

In sum:

  • The rate of intermarriage at every level of education rose slightly between 1960 and 2005.
  • But the real change between 1960 and 2005 was that more and more women worked outside the home — a state of affairs that “progressives” applaud. It is that change which has led to a greater disparity between the household incomes of poorly educated couples and those of highly educated couples. (Hereinafter, I omit the “sneer quotes” around “progressives,” “progressive,” and “Progressivism,” but only to eliminate clutter.)
  • While that was going on, the measure of inequality in the incomes of individuals didn’t change. (Go to “In Which We’re Vindicated. Again,” Political Calculations, January 28, 2014, and scroll down to the figure titled “GINI Ratios for U.S. Households, Families, and Individuals, 1947-2010.”)
  • Further, as Kevin Drum notes, the rise in income inequality probably has almost nothing to do with a rise in the rate of assortative mating and much to do with the much higher incomes commanded by executives, athletes, entrepreneurs, financiers, and “techies” — a development that shouldn’t bother anyone, even though it does bother a lot of people. (See my post “Mass (Economic) Hysteria: Income Inequality and Related Themes,” and follow the many links therein to other posts of mine and to the long list of related readings.)

Moreover, intergenerational mobility in the United States hasn’t changed in the past several decades:

Our analysis of new administrative records on income shows that children entering the labor market today have the same chances of moving up in the income distribution relative to their parents as children born in the 1970s. Putting together our results with evidence from Hertz (2007) and Lee and Solon (2009) that intergenerational elasticities of income did not change significantly between the 1950 and 1970 birth cohorts, we conclude that rank-based measures of social mobility have remained remarkably stable over the second half of the twentieth century in the United States….

The lack of a trend in intergenerational mobility contrasts with the increase in income inequality in recent decades. This contrast may be surprising given the well-known negative correlation between inequality and mobility across countries (Corak 2013). Based on this “Great Gatsby curve,” Krueger (2012) predicted that recent increases in inequality would increase the intergenerational persistence of income by 20% in the U.S. One explanation for why this prediction was not borne out is that much of the increase in inequality has been driven by the extreme upper tail (Piketty and Saez 2003, U.S. Census Bureau 2013). In [Chetty et al. 2014], we show that there is little or no correlation between mobility and extreme upper tail inequality – as measured e.g. by top 1% income shares – both across countries and across areas within the U.S….

The stability of intergenerational mobility is perhaps more surprising in light of evidence that socio-economic gaps in early indicators of success such as test scores (Reardon 2011), parental inputs (Ramey and Ramey 2010), and social connectedness (Putnam, Frederick, and Snellman 2012) have grown over time. Indeed, based on such evidence, Putnam, Frederick, and Snellman predicted that the “adolescents of the 1990s and 2000s are yet to show up in standard studies of intergenerational mobility, but the fact that working class youth are relatively more disconnected from social institutions, and increasingly so, suggests that mobility is poised to plunge dramatically.” An important question for future research is why such a plunge in mobility has not occurred. [Raj Chetty et al., “Is the United States Still a Land of Opportunity? Recent Trends in Intergenerational Mobility,” NBER Working Paper 19844, January 2014]

Figure 3 of the paper by Chetty et al. nails it down:

[Figure 3 of Chetty et al. (2014)]

The results for ages 29-30 are close to the results for age 26.

What does it all mean? For one thing, it means that the children of top-quintile parents reach the top quintile about 30 percent of the time. For another thing, it means that, unsurprisingly, the children of top-quintile parents reach the top quintile more often than children of second-quintile parents, who reach the top quintile more often than children of third-quintile parents, and so on.

There is nevertheless a growing, quasi-hereditary, smart-educated-professional-affluent class. It’s almost a sure thing, given the rise of the two-professional marriage, and given the correlation between the intelligence of parents and that of their children, which may be as high as 0.8. However, as a fraction of the total population, membership in the new class won’t grow as fast as membership in the “lower” classes because birth rates are inversely related to income.

And the new class probably will be isolated from the “lower” classes. Most members of the new class work and live where their interactions with persons of “lower” classes are restricted to boss-subordinate and employer-employee relationships. Professionals, for the most part, work in office buildings, isolated from the machinery and practitioners of “blue collar” trades.

But the segregation of housing on class lines is nothing new. People earn more, in part, so that they can live in nicer houses in nicer neighborhoods. And the general rise in the real incomes of Americans has made it possible for persons in the higher income brackets to afford more luxurious homes in more luxurious neighborhoods than were available to their parents and grandparents. (The mansions of yore, situated on “Mansion Row,” were occupied by the relatively small number of families whose income and wealth set them widely apart from the professional class of the day.) So economic segregation is, and should be, as unsurprising as a sunrise in the east.

WHAT’S THE PROGRESSIVE SOLUTION TO THE NON-PROBLEM?

None of this will assuage progressives, who like to claim that intelligence (like race) is a social construct (while also claiming that Republicans are stupid); who believe that incomes should be more equal (theirs excepted); who believe in “diversity,” except when it comes to where most of them choose to live and school their children; and who also believe that economic mobility should be greater than it is — just because. In their superior minds, there’s an optimum income distribution and an optimum degree of economic mobility — just as there is an optimum global temperature, which must be less than the ersatz one that’s estimated by combining temperatures measured under various conditions and with various degrees of error.

The irony of it is that the self-segregated, smart-educated-professional-affluent class is increasingly progressive. Consider the changing relationship between party preference and income:

[Chart: presidential vote by household income group, 2004–2016]
Source: K.K. Rebecca Lai et al., “How Trump Won the Election According to Exit Polls,” The New York Times, November 16, 2016.

The elections between 2004 and 2016 are indicated by the elbows in the zig-zag lines for each of the income groups. For example, among voters earning more than $200,000, the Times estimates that almost 80 percent (+30) voted Republican in 2004, as against 45 percent in 2008, 60 percent in 2012, and just over 50 percent in 2016. Even as voters in the two lowest brackets swung toward the GOP (and Trump) between 2004 and 2016, voters in the three highest brackets were swinging toward the Democrat Party (and Clinton).

Those shifts are consistent with the longer trend among persons with bachelor’s degrees and advanced degrees toward identification with the Democrat Party. See, for example, the graphs showing relationships between party affiliation and level of education at “Party Identification Trends, 1992-2014” (Pew Research Center, April 7, 2015). The smart-educated-professional-affluent class consists almost entirely of persons with bachelor’s and advanced degrees.

So I ask progressives, given that you have met the new class and it is you, what do you want to do about it? Is there a social problem that might arise from greater segregation of socio-economic classes, and is it severe enough to warrant government action? Or is the real “problem” the possibility that some people — and their children and children’s children, etc. — might get ahead faster than other people — and their children and children’s children, etc.?

Do you want to apply the usual progressive remedies? Penalize success through progressive (pun intended) personal income-tax rates and the taxation of corporate income; force employers and universities to accept low-income candidates (whites included) ahead of better-qualified ones (e.g., your children) from higher-income brackets; push “diversity” in your neighborhood by expanding the kinds of low-income housing programs that helped to bring about the Great Recession; boost your local property and sales taxes by subsidizing “affordable housing,” mandating the payment of a “living wage” by the local government, and applying that mandate to contractors seeking to do business with the local government; and on and on down the list of progressive policies?

Of course you do, because you’re progressive. And you’ll support such things in the vain hope that they’ll make a difference. But not everyone shares your naive beliefs in blank slates, equal ability, and social homogenization (which you don’t believe either, but are too wedded to your progressive faith to admit). What will actually be accomplished — aside from tokenism — is social distrust and acrimony, which had a lot to do with the electoral victory of Donald J. Trump, and economic stagnation, which hurts the “little people” a lot more than it hurts the smart-educated-professional-affluent class.

Where the progressive view fails, as it usually does, is in its linear view of the world and dependence on government “solutions.” As the late Herbert Stein said, “If something cannot go on forever, it will stop.” The top 1-percent doesn’t go on forever; its membership is far more volatile than that of lower income groups. Neither do the top 10-percent or top quintile go on forever. There’s always a top 1-percent, a top 10-percent and top quintile, by definition. But the names change constantly, as the paper by Chetty et al. attests.

The solution to the pseudo-problem of economic inequality is benign neglect, which isn’t a phrase that falls lightly from the lips of progressives. For more than 80 years, a lot of Americans — and too many pundits, professors, and politicians — have been led astray by that one-off phenomenon: the Great Depression. FDR and his sycophants and their successors created and perpetuated the myth that an activist government saved America from ruin and totalitarianism. The truth of the matter is that FDR’s policies prolonged the Great Depression by several years, and ushered in soft despotism, which is just “friendly” fascism. And all of that happened at the behest of people of above-average intelligence and above-average incomes.

Progressivism is the seed-bed of eugenics, and still promotes eugenics through abortion on demand (mainly to rid the world of black babies). My beneficial version of eugenics would be the sterilization of everyone with an IQ above 125 or top-40-percent income who claims to be progressive.

WHAT IS THE REAL PROBLEM? (ADDED 11/18/16)

It’s not the rise of the smart-educated-professional-affluent class. It’s actually a problem that has nothing to do with that. It’s the problem pointed to by Charles Murray, and poignantly underlined by a blogger named Tori:

Over the summer, my little sister had a soccer tournament at Bloomsburg University, located in central Pennsylvania. The drive there was about three hours and many of the towns we drove through shocked me. The conditions of these towns were terrible. Houses were falling apart. Bars and restaurants were boarded up. Scrap metal was thrown across front lawns. White, plastic lawn chairs were out on the drooping front porches. There were no malls. No outlets. Most of these small towns did not have a Walmart, only a dollar store and a few run down thrift stores. In almost every town, there was an abandoned factory.

My father, who was driving the car, turned to me and pointed out a Trump sign stuck in a front yard, surrounded by weeds and dead grass. “This is Trump country, Tori,” He said. “These people are desperate, trapped for life in these small towns with no escape. These people are the ones voting for Trump.”

My father understood Trump’s key to success, even though it would leave the media and half of America baffled and terrified on November 9th. Trump’s presidency has sparked nationwide outrage, disbelief and fear.

And, while I commend the passion many of my fellow millennials feels towards minorities and the fervency they oppose the rhetoric they find dangerous, I do find many of their fears unfounded.  I don’t find their fears unfounded because I negate the potency of racism. Or the potency of oppression. Or the potency of hate.

I find these fears unfounded because these people groups have an army fighting for them. This army is full of celebrities, politicians, billionaires, students, journalists and passionate activists. Trust me, minorities will be fine with an army like this defending them.

And, I would argue, that these minorities aren’t the only ones who need our help. The results of Tuesday night did not expose a red shout of racism but a red shout for help….

The majority of rhetoric going around says that if you’re white, you have an inherent advantage in life. I would argue that, at least for the members of these small impoverished communities, their whiteness only harms them as it keeps their immense struggles out of the public eye.

Rural Americans suffer from a poverty rate that is 3 points higher than the poverty rate found in urban America. In Southern regions, like Appalachia, the poverty rate jumps to 8 points higher than those found in cities. One fifth of the children living in poverty live rural areas. The children in this “forgotten fifth” are more likely to live in extreme poverty and live in poverty longer than their urban counterparts. 57% of these children are white….

Lauren Gurley, a freelance journalist, wrote a piece that focuses on why politicians, namely liberal ones, have written off rural America completely. In this column she quotes Lisa Pruitt, a law professor at the University of California who focuses many of her studies on life in rural America. Pruitt argues that mainstream America ignores poverty stricken rural America because the majority of America associates rural poverty with whiteness. She attributes America’s lack of empathy towards white poverty to the fact that black poverty is attributed to institutionalized racism, while white people have no reason to be poor, unless poor choices were made….

For arguably the first time since President Kennedy in the 1950’s, Donald Trump reached out to rural America. Trump spoke out often about jobs leaving the US, which has been felt deeply by those living in the more rural parts of the country. Trump campaigned in rural areas, while Clinton mostly campaigned in cities. Even if you do not believe Trump will follow through on his promises, he was still one of the few politicians who focused his vision on rural communities and said “I see you, I hear you and I want to help you.”

Trump was the “change” candidate of the 2016 election. Whether Trump proposed a good change or bad change is up to you, but it can’t be denied that Trump offered change. Hillary Clinton, on the other hand, was the establishment candidate. She ran as an extension of Obama and, even though this appealed to the majority of voters located in cities, those in the country were looking for something else. Obama’s policies did little to help alleviate the many ailments felt by those in rural communities. In response, these voters came out for the candidate who offered to “make America great again.”

I believe that this is why rural, white communities voted for Trump in droves. I do not believe it was purely racism. I believe it is because no one has listened to these communities’ cries for help. The media and our politicians focus on the poverty and deprivation found in cities and, while bringing these issues to light is immensely important, we have neglected another group of people who are suffering. It is not right to brush off all of these rural counties with words like “deplorable” and not look into why they might have voted for Trump with such desperation.

It was not a racist who voted for Trump, but a father who has no possible way of providing a steady income for his family. It was not a misogynist who voted for Trump, but a mother who is feeding her baby mountain dew out of a bottle. It was not a deplorable who voted for Trump, but a young man who has no possibility of getting out of a small town that is steadily growing smaller.

The people America has forgotten about are the ones who voted for Donald Trump. It does not matter if you agree with Trump. It does not matter if you believe that these people voted for a candidate who won’t actually help them. What matters is that the red electoral college map was a scream for help, and we’re screaming racist so loud we don’t hear them. Hatred didn’t elect Donald Trump; People did. [“Hate Didn’t Elect Donald Trump; People Did,” Tori’s Thought Bubble, November 12, 2016]

Wise words. The best way to help the people of whom Tori writes — the people of Charles Murray’s Fishtown — is to ignore the smart-educated-professional-affluent class. It’s a non-problem, as discussed above. The best way to help the forgotten people of America is to unleash the latent economic power of the United States by removing the dead hand of government from the economy.

 

Taleb’s Ruinous Rhetoric

A correspondent sent me some links to writings of Nassim Nicholas Taleb. One of them is “The Intellectual Yet Idiot,” in which Taleb makes some acute observations; for example:

What we have been seeing worldwide, from India to the UK to the US, is the rebellion against the inner circle of no-skin-in-the-game policymaking “clerks” and journalists-insiders, that class of paternalistic semi-intellectual experts with some Ivy league, Oxford-Cambridge, or similar label-driven education who are telling the rest of us 1) what to do, 2) what to eat, 3) how to speak, 4) how to think… and 5) who to vote for.

But the problem is the one-eyed following the blind: these self-described members of the “intelligentsia” can’t find a coconut in Coconut Island, meaning they aren’t intelligent enough to define intelligence hence fall into circularities — but their main skill is capacity to pass exams written by people like them….

The Intellectual Yet Idiot is a production of modernity hence has been accelerating since the mid twentieth century, to reach its local supremum today, along with the broad category of people without skin-in-the-game who have been invading many walks of life. Why? Simply, in most countries, the government’s role is between five and ten times what it was a century ago (expressed in percentage of GDP)….

The IYI pathologizes others for doing things he doesn’t understand without ever realizing it is his understanding that may be limited. He thinks people should act according to their best interests and he knows their interests, particularly if they are “red necks” or English non-crisp-vowel class who voted for Brexit. When plebeians do something that makes sense to them, but not to him, the IYI uses the term “uneducated”. What we generally call participation in the political process, he calls by two distinct designations: “democracy” when it fits the IYI, and “populism” when the plebeians dare voting in a way that contradicts his preferences….

The IYI has been wrong, historically, on Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, transfats, freudianism, portfolio theory, linear regression, Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, selfish gene, Bernie Madoff (pre-blowup) and p-values. But he is convinced that his current position is right.

That’s all yummy red meat to a person like me, especially in the wake of November 8, which Taleb’s piece predates. But the last paragraph quoted above reminded me that I had read something critical about a paper in which Taleb applies the precautionary principle. So I found the paper, which is by Taleb (lead author) and several others. This is from the abstract:

Here we formalize PP [the precautionary principle], placing it within the statistical and probabilistic structure of “ruin” problems, in which a system is at risk of total failure, and in place of risk we use a formal “fragility” based approach. In these problems, what appear to be small and reasonable risks accumulate inevitably to certain irreversible harm….

Our analysis makes clear that the PP is essential for a limited set of contexts and can be used to justify only a limited set of actions. We discuss the implications for nuclear energy and GMOs. GMOs represent a public risk of global harm, while harm from nuclear energy is comparatively limited and better characterized. PP should be used to prescribe severe limits on GMOs. [“The Precautionary Principle (With Application to the Genetic Modification of Organisms),” Extreme Risk Initiative – NYU School of Engineering Working Paper Series]

Jon Entine demurs:

Taleb has recently become the darling of GMO opponents. He and four colleagues–Yaneer Bar-Yam, Rupert Read, Raphael Douady and Joseph Norman–wrote a paper, The Precautionary Principle (with Application to the Genetic Modification of Organisms), released last May and updated last month, in which they claim to bring risk theory and the Precautionary Principle to the issue of whether GMOs might introduce “systemic risk” into the environment….

The crux of his claims: There is no comparison between conventional selective breeding of any kind, including mutagenesis which requires the radiation or chemical dousing of seeds (and has resulted in more than 2500 varieties of fruits, vegetables, and nuts, almost all available in organic varieties) versus what he calls the top-down engineering that occurs when a gene is taken from an organism and transferred to another (ignoring that some forms of genetic engineering, including gene editing, do not involve gene transfers). Taleb goes on to argue that the chance of ecocide, or the destruction of the environment and potentially of humans, increases incrementally with each additional transgenic trait introduced into the environment. In other words, in his mind genetic engineering is a classic “black swan” scenario.

Neither Taleb nor any of the co-authors has any background in genetics or agriculture or food, or even familiarity with the Precautionary Principle as it applies to biotechnology, which they liberally invoke to justify their positions….

One of the paper’s central points displays his clear lack of understanding of modern crop breeding. He claims that the rapidity of the genetic changes using the rDNA technique does not allow the environment to equilibrate. Yet rDNA techniques are actually among the safest crop breeding techniques in use today because each rDNA crop represents only 1-2 genetic changes that are more thoroughly tested than any other crop breeding technique. The number of genetic changes caused by hybridization or mutagenesis techniques are orders of magnitude higher than rDNA methods. And no testing is required before widespread monoculture-style release. Even selective breeding likely represents a more rapid change than rDNA techniques because of the more rapid employment of the method today.

In essence, Taleb’s ecocide argument applies just as much to other agricultural techniques in both conventional and organic agriculture. The only difference between GMOs and other forms of breeding is that genetic engineering is closely evaluated, minimizing the potential for unintended consequences. Most geneticists–experts in this field as opposed to Taleb–believe that genetic engineering is far safer than any other form of breeding.

Moreover, as Maxx Chatsko notes, the natural environment has encountered new traits from unthinkable events (extremely rare occurrences of genetic transplantation across continents, species and even planetary objects, or extremely rare single mutations that gave an incredible competitive advantage to a species or virus) that have led to problems and genetic bottlenecks in the past — yet we’re all still here and the biosphere remains tremendously robust and diverse. So much for Mr. Doomsday. [“Is Nassim Taleb a ‘Dangerous Imbecile’ or on [sic] the Pay of Anti-GMO Activists?,” Genetic Literacy Project, November 13, 2014 — see footnote for an explanation of “dangerous imbecile”]

Gregory Conko also demurs:

The paper received a lot of attention in scientific circles, but was roundly dismissed for being long on overblown rhetoric but conspicuously short on any meaningful reference to the scientific literature describing the risks and safety of genetic engineering, and for containing no understanding of how modern genetic engineering fits within the context of centuries of far more crude genetic modification of plants, animals, and microorganisms.

Well, Taleb is back, this time penning a short essay published on The New York Times’s DealB%k blog with co-author Mark Spitznagel. The authors try to draw comparisons between the recent financial crisis and GMOs, claiming the latter represent another “Too Big to Fail” crisis in waiting. Unfortunately, Taleb’s latest contribution is nothing more than the same sort of evidence-free bombast posing as thoughtful analysis. The result is uninformed and/or unintelligible gibberish….

“In nature, errors stay confined and, critically, isolated.” Ebola, anyone? Avian flu? Or, for examples that are not “in nature” but the “small step” changes Spitznagel and Taleb seem to prefer, how about the introduction of hybrid rice plants into parts of Asia that have led to widespread outcrossing to and increased weediness in wild red rices? Or kudzu? Again, this seems like a bold statement designed to impress. But it is completely untethered to any understanding of what actually occurs in nature or the history of non-genetically engineered crop introductions….

“[T]he risk of G.M.O.s are more severe than those of finance. They can lead to complex chains of unpredictable changes in the ecosystem, while the methods of risk management with G.M.O.s — unlike finance, where some effort was made — are not even primitive.” Again, the authors evince no sense that they understand how extensively breeders have been altering the genetic composition of plants and other organisms for the past century, or what types of risk management practices have evolved to coincide.

In fact, compared with the wholly voluntary (and yet quite robust) risk management practices that are relied upon to manage introductions of mutant varieties, somaclonal variants, wide crosses, and the products of cell fusion, the legally obligatory risk management practices used for genetically engineered plant introductions are vastly over-protective.

In the end, Spitznagel and Taleb’s argument boils down to a claim that ecosystems are complex and rDNA modification seems pretty mysterious to them, so nobody could possibly understand it. Until they can offer some arguments that take into consideration what we actually know about genetic modification of organisms (by various methods) and why we should consider rDNA modification uniquely risky when other methods result in even greater genetic changes, the rest of us are entitled to ignore them. [“More Unintelligible Gibberish on GMO Risks from Nicholas Nassim Taleb,” Competitive Enterprise Institute, July 16, 2015]

And despite my enjoyment of Taleb’s red-meat commentary about IYIs, I have to admit that I’ve had my fill of Taleb’s probabilistic gibberish. This is from “Fooled by Non-Randomness,” which I wrote seven years ago about Taleb’s Fooled by Randomness:

The first reason that I am unfooled by Fooled… might be called a meta-reason. Standing back from the book, I am able to perceive its essential defect: According to Taleb, human affairs — especially economic affairs, and particularly the operations of financial markets — are dominated by randomness. But if that is so, only a delusional person can truly claim to understand the conduct of human affairs. Taleb claims to understand the conduct of human affairs. Taleb is therefore either delusional or omniscient.

Given Taleb’s humanity, it is more likely that he is delusional — or simply fooled, but not by randomness. He is fooled because he proceeds from the assumption of randomness instead of exploring the ways and means by which humans are actually capable of shaping events. Taleb gives no more than scant attention to those traits which, in combination, set humans apart from other animals: self-awareness, empathy, forward thinking, imagination, abstraction, intentionality, adaptability, complex communication skills, and sheer brain power. Given those traits (in combination) the world of human affairs cannot be random. Yes, human plans can fail of realization for many reasons, including those attributable to human flaws (conflict, imperfect knowledge, the triumph of hope over experience, etc.). But the failure of human plans is due to those flaws — not to the randomness of human behavior.

What Taleb sees as randomness is something else entirely. The trajectory of human affairs often is unpredictable, but it is not random. For it is possible to find patterns in the conduct of human affairs, as Taleb admits (implicitly) when he discusses such phenomena as survivorship bias, skewness, anchoring, and regression to the mean….

[Random events are] events which are repeatable, convergent on a limiting value, and truly patternless over a large number of repetitions. Evolving economic events (e.g., stock-market trades, economic growth) are not alike (in the way that dice are, for example), they do not converge on limiting values, and they are not patternless, as I will show.

In short, Taleb fails to demonstrate that human affairs in general or financial markets in particular exhibit randomness, properly understood….

A bit of unpredictability (or “luck”) here and there does not make for a random universe, random lives, or random markets. If a bit of unpredictability here and there dominated our actions, we wouldn’t be here to talk about randomness — and Taleb wouldn’t have been able to marshal his thoughts into a published, marketed, and well-sold book.

Human beings are not “designed” for randomness. Human endeavors can yield unpredictable results, but those results do not arise from random processes, they derive from skill or the lack thereof, knowledge or the lack thereof (including the kinds of self-delusions about which Taleb writes), and conflicting objectives….

No one believes that Ty Cobb, Babe Ruth, Ted Williams, Christy Mathewson, Warren Spahn, and the dozens of other baseball players who rank among the truly great were lucky. No one believes that the vast majority of the tens of thousands of minor leaguers who never enjoyed more than the proverbial cup of coffee were unlucky. No one believes that the vast majority of the millions of American males who never made it to the minor leagues were unlucky. Most of them never sought a career in baseball; those who did simply lacked the requisite skills.

In baseball, as in life, “luck” is mainly an excuse and rarely an explanation. We prefer to apply “luck” to outcomes when we don’t like the true explanations for them. In the realm of economic activity and financial markets, one such explanation … is the exogenous imposition of governmental power….

Given what I have said thus far, I find it almost incredible that anyone believes in the randomness of financial markets. It is unclear where Taleb stands on the random-walk hypothesis, but it is clear that he believes financial markets to be driven by randomness. Yet, contradictorily, he seems to attack the efficient-markets hypothesis (see pp. 61-62), which is the foundation of the random-walk hypothesis.

What is the random-walk hypothesis? In brief, it is this: Financial markets are so efficient that they instantaneously reflect all information bearing on the prices of financial instruments that is then available to persons buying and selling those instruments….

When we step back from day-to-day price changes, we are able to see the underlying reality: prices (instead of changes) and price trends (which are the opposite of randomness). This (correct) perspective enables us to see that stock prices (on the whole) are not random, and to identify the factors that influence the broad movements of the stock market. For one thing, if you look at stock prices correctly, you can see that they vary cyclically….

[But] the long-term trend of the stock market (as measured by the S&P 500) is strongly correlated with GDP. And broad swings around that trend can be traced to governmental intervention in the economy….

The wild swings around the trend line began in the uncertain aftermath of World War I, which saw the imposition of production and price controls. The swings continued with the onset of the Great Depression (which can be traced to governmental action), the advent of the anti-business New Deal, and the imposition of production and price controls on a grand scale during World War II. The next downswing was occasioned by the culmination of the Great Society, the “oil shocks” of the early 1970s, and the raging inflation that was touched off by — you guessed it — government policy. The latest downswing is owed mainly to the financial crisis born of yet more government policy: loose money and easy loans to low-income borrowers.

And so it goes, wildly but predictably enough if you have the faintest sense of history. The moral of the story: Keep your eye on government and a hand on your wallet.

There is randomness in economic affairs, but they are not dominated by randomness. They are dominated by intentions, including especially the intentions of the politicians and bureaucrats who run governments. Yet Taleb has no space in his book for the influence of their deeds on economic activity and financial markets.

Taleb is right to disparage those traders (professional and amateur) who are lucky enough to catch upswings, but are unprepared for downswings. And he is right to scoff at their readiness to believe that the current upswing (uniquely) will not be followed by a downswing (“this time it’s different”).

But Taleb is wrong to suggest that traders are fooled by randomness. They are fooled to some extent by false hope, but more profoundly by their inability to perceive the economic damage wrought by government. They are not alone, of course; most of the rest of humanity shares their perceptual failings.

Taleb, in that respect, is only somewhat different than most of the rest of humanity. He is not fooled by false hope, but he is fooled by non-randomness — the non-randomness of government’s decisive influence on economic activity and financial markets. In overlooking that influence he overlooks the single most powerful explanation for the behavior of markets in the past 90 years.

I followed up a few days later with “Randomness Is Over-Rated”:

What we often call random events in human affairs really are non-random events whose causes we do not and, in some cases, cannot know. Such events are unpredictable, but they are not random….

Randomness … is found in (a) the results of non-intentional actions, where (b) we lack sufficient knowledge to understand the link between actions and results.

It is unreasonable to reduce intentional human behavior to probabilistic formulas. Humans don’t behave like dice, roulette balls, or similar “random” devices. But that is what Taleb (and others) do when they ascribe unusual success in financial markets to “luck.”…

I say it again: The most successful professionals are not successful because of luck, they are successful because of skill. There is no statistically predetermined percentage of skillful traders; the actual percentage depends on the skills of entrants and their willingness (if skillful) to make a career of it….

The outcomes of human endeavor are skewed because the distribution of human talents is skewed. It would be surprising to find as many as one-half of traders beating the long-run average performance of the various markets in which they operate….

[Taleb] sees an a priori distribution of “winners” and “losers,” where “winners” are determined mainly by luck, not skill. Moreover, we — the civilians on the sidelines — labor under the false impression about the relative number of “winners”….

[T]here are no “odds” favoring success — even in financial markets. Financial “players” do what they can do, and most of them — like baseball players — simply don’t have what it takes for great success. Outcomes are skewed, not because of (fictitious) odds but because talent is distributed unevenly.

The real lesson … is not to assume that the “winners” are merely lucky. No, the real lesson is to seek out those “winners” who have proven their skills over a long period of time, through boom and bust and boom and bust.

Those who do well, over the long run, do not do so merely because they have survived. They have survived because they do well.

There’s much more, and you should read the whole thing(s), as they say.
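The distinction drawn in those excerpts, between genuinely patternless movement and unpredictable-but-trending movement, is easy to see in a toy simulation. A minimal sketch in Python; the code, seed, and numbers are mine and purely illustrative:

    import random

    random.seed(7)
    steps = 2_500

    # A driftless random walk: increments average zero, so no persistent trend emerges.
    walk = [0.0]
    for _ in range(steps):
        walk.append(walk[-1] + random.gauss(0, 1))

    # The same noise riding on steady underlying growth, a stand-in for a series
    # tied to a long-term trend such as GDP.
    trended = [0.0]
    for _ in range(steps):
        trended.append(trended[-1] + 0.05 + random.gauss(0, 1))

    print(round(walk[-1], 1), round(trended[-1], 1))   # the trended series ends far above the walk

Both series are unpredictable step by step, but only one of them is patternless over the long run.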

I turn now to Taleb’s version of the precautionary principle, which seems tailored to support the position that Taleb wants to support, namely, that GMOs should be banned. Who gets to decide what “threats” should be included in the “limited set of contexts” where the PP applies? Taleb, of course. Taleb has excreted a circular pile of horse manure; thus:

  • The PP applies only where I (Taleb) say it applies.
  • I (Taleb) say that the PP applies to GMOs.
  • Therefore, the PP applies to GMOs.

I (the proprietor of this blog) say that the PP ought to apply to the works of Nassim Nicholas Taleb. They ought to be banned because they may perniciously influence gullible readers.

I’ll justify my facetious proposal to ban Taleb’s writings by working my way through the “logic” of what Taleb calls the non-naive version of the PP, on which he bases his anti-GMO stance. Here are the main points of Taleb’s argument, extracted from “The Precautionary Principle (With Application to the Genetic Modification of Organisms).” Taleb’s statements (with minor, non-substantive elisions) are in roman type, followed by my comments in bold type.

The purpose of the PP is to avoid a certain class of what, in probability and insurance, is called “ruin” problems. A ruin problem is one where outcomes of risks have a non-zero probability of resulting in unrecoverable losses. An often-cited illustrative case is that of a gambler who loses his entire fortune and so cannot return to the game. In biology, an example would be a species that has gone extinct. For nature, “ruin” is ecocide: an irreversible termination of life at some scale, which could be planetwide.

The extinction of a species is ruinous only if one believes that species shouldn’t become extinct. But they do, because that’s the way nature works. Ruin, as Taleb means it, is avoidable, self-inflicted, and (at some point) irreversibly catastrophic. Let’s stick to that version of it.

Our concern is with public policy. While an individual may be advised to not “bet the farm,” whether or not he does so is generally a matter of individual preferences. Policy makers have a responsibility to avoid catastrophic harm for society as a whole; the focus is on the aggregate, not at the level of single individuals, and on global-systemic, not idiosyncratic, harm. This is the domain of collective “ruin” problems.

This assumes that government can do something about a potentially catastrophic harm — or should do something about it. The Great Depression, for example, began as a potentially catastrophic harm that government made into a real catastrophic harm (for millions of Americans, though not all of them) and prolonged through its actions. Here Taleb commits the nirvana fallacy, by implicitly ascribing to government the power to anticipate harm without making a Type I or Type II error, and then to take appropriate and effective action to prevent or ameliorate that harm.

By the ruin theorems, if you incur a tiny probability of ruin as a “one-off” risk, survive it, then do it again (another “one-off” deal), you will eventually go bust with probability 1. Confusion arises because it may seem that the “one-off” risk is reasonable, but that also means that an additional one is reasonable. This can be quantified by recognizing that the probability of ruin approaches 1 as the number of exposures to individually small risks, say one in ten thousand, increases. For this reason a strategy of risk taking is not sustainable and we must consider any genuine risk of total ruin as if it were inevitable.

But you have to know in advance that a particular type of risk will be ruinous. Which means that — given the uncertainty of such knowledge — the perception of (possible) ruin is in the eye of the assessor. (I’ll have a lot more to say about uncertainty.)
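For what it’s worth, the arithmetic behind Taleb’s claim is ordinary compound probability, and it goes through only if the per-exposure probability is known and the exposures are independent. A minimal sketch, using his illustrative one-in-ten-thousand figure (the code is mine, not Taleb’s):

    # If each exposure carries an independent probability p of ruin, the chance of
    # surviving n exposures is (1 - p)**n, so cumulative ruin probability rises toward 1.
    p = 1 / 10_000
    for n in (1_000, 10_000, 100_000):
        print(n, round(1 - (1 - p) ** n, 3))   # roughly 0.095, 0.632, 1.0

The whole construction therefore stands or falls on whether such a p can be assigned at all, which is the point taken up below.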

A way to formalize the ruin problem in terms of the destructive consequences of actions identifies harm as not about the amount of destruction, but rather a measure of the integrated level of destruction over the time it persists. When the impact of harm extends to all future times, i.e. forever, then the harm is infinite. When the harm is infinite, the product of any non-zero probability and the harm is also infinite, and it cannot be balanced against any potential gains, which are necessarily finite.

As discussed below, the concept of probability is inapplicable here. Further, and granting the use of probability for the sake of argument, Taleb’s contention holds only if there’s no doubt that the harm will be infinite, that is, totally ruinous. If there’s room for doubt, there’s room for disagreement as to the extent of the harm (if any) and the value of attempting to counter it (or not). Otherwise, it would be “rational” to devote as much as the entire economic output of the world to combat so-called catastrophic anthropogenic global warming (CAGW) because some “expert” says that there’s a non-zero probability of its occurrence. In practical terms, the logic of such a policy is that if you’re going to die of heat stroke, you might as well do it sooner rather than later — which would be one of the consequences of, say, banning the use of fossil fuels. Other consequences would be freezing to death if you live in a cold climate and starving to death because foodstuffs couldn’t be grown, harvested, processed, or transported. Those are also infinite harms, and they arise from Taleb’s preferred policy of acting on little information about a risk because (in someone’s view) it could lead to infinite harm. There’s a relevant cost-benefit analysis for you.

Because the “cost” of ruin is effectively infinite, cost-benefit analysis (in which the potential harm and potential gain are multiplied by their probabilities and weighed against each other) is no longer a useful paradigm.

Here, Taleb displays profound ignorance in two fields: economics and probability. His ignorance of economics might be excusable, but his ignorance of probability isn’t, inasmuch as he’s made a name for himself (and probably a lot of money) by parading his “sophisticated” understanding of it in books and on the lecture circuit.

Regarding the economics of cost-benefit analysis (CBA), it’s properly an exercise for individual persons and firms, not governments. When a government undertakes CBA, it implicitly (and arrogantly) assumes that the costs of a project (which are defrayed in the end by taxpayers) can be weighed against the monetary benefits of the project (which aren’t distributed in proportion to the costs and are often deliberately distributed so that taxpayers bear most of the costs and non-taxpayers reap most of the benefits).

Regarding probability, Taleb quite wrongly insists on ascribing probabilities to events that might (or might not) occur in the future. A probability is a statement about a very large number of like events, each of which has an unpredictable (random) outcome. A valid probability is based either on a large number of past “trials” or on a mathematical certainty (e.g., a fair coin, tossed a large number of times — 100 or more — will come up heads about half the time and tails about half the time). Probability, properly understood, says nothing about the outcome of an individual future event; that is, it says nothing about what will happen next in a truly random trial, such as a coin toss. Probability certainly says nothing about the occurrence of a unique event. Therefore, Taleb cannot validly assign a probability of “ruin” to a speculative event as little understood (by him) as the effect of GMOs on the world’s food supply.
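A minimal sketch of that frequentist point, with a simulated coin (the code and seed are mine, purely illustrative):

    import random

    random.seed(1)
    tosses = [random.random() < 0.5 for _ in range(100_000)]
    print(sum(tosses) / len(tosses))   # the relative frequency settles near 0.5 over many trials

    # The settled frequency says nothing about the next individual toss,
    # and nothing at all about a one-off event with no class of repeated trials behind it.
    print(random.random() < 0.5)       # the next toss is simply True or False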

The non-naive PP bridges the gap between precaution and evidentiary action using the ability to evaluate the difference between local and global risks.

In other words, if there’s a subjective, non-zero probability of CAGW in Taleb’s mind, that probability should outweigh evidence about the wrongness of a belief in CAGW. And such evidence is ample, not only in the various scientific fields that impinge on climatology, but also in the failure of almost all climate models to predict the long pause in what’s called global warming. Ah, but “almost all” — in Taleb’s mind — means that there’s a non-zero probability of CAGW. It’s the “heads I win, tails you lose” method of gambling on the flip of a coin.

Here’s another way of putting it: Taleb turns the scientific method upside down by rejecting the null hypothesis (e.g., no CAGW) on the basis of evidence that confirms it (no observable rise in temperatures approaching the predictions of CAGW theory) because a few predictions happened to be close to the truth. Taleb, in his guise as the author of Fooled by Randomness, would correctly label such predictions as lucky.

While evidentiary approaches are often considered to reflect adherence to the scientific method in its purest form, it is apparent that these approaches do not apply to ruin problems. In an evidentiary approach to risk (relying on evidence-based methods), the existence of a risk or harm occurs when we experience that risk or harm. In the case of ruin, by the time evidence comes it will by definition be too late to avoid it. Nothing in the past may predict one fatal event. Thus standard evidence-based approaches cannot work.

It’s misleading to say that “by the time the evidence comes it will be by definition too late to avoid it.” Taleb assumes, without proof, that the linkage between GMOs, say, and a worldwide food crisis will occur suddenly and without warning (or sufficient warning), as if GMOs will be everywhere at once and no one will have been paying attention to their effects as their use spread. That’s unlikely given broad disparities in the distribution of GMOs, the state of vigilance about them, and resistance to them in many quarters. What Taleb really says is this: Some people (Taleb among them) believe that GMOs pose an existential risk with a probability greater than zero. (Any such “probability” is fictional, as discussed above.) Therefore, the risk of ruin from GMOs is greater than zero and ruin is inevitable. By that logic, there must be dozens of certain-death scenarios for the planet. Why is Taleb wasting his time on GMOs, which are small potatoes compared with, say, asteroids? And why don’t we just slit our collective throat and get it over with?

Since there are mathematical limitations to predictability of outcomes in a complex system, the central issue to determine is whether the threat of harm is local (hence globally benign) or carries global consequences. Scientific analysis can robustly determine whether a risk is systemic, i.e. by evaluating the connectivity of the system to propagation of harm, without determining the specifics of such a risk. If the consequences are systemic, the associated uncertainty of risks must be treated differently than if it is not. In such cases, precautionary action is not based on direct empirical evidence but on analytical approaches based upon the theoretical understanding of the nature of harm. It relies on probability theory without computing probabilities. The essential question is whether or not global harm is possible or not.

More of the same.

Everything that has survived is necessarily non-linear to harm. If I fall from a height of 10 meters I am injured more than 10 times than if I fell from a height of 1 meter, or more than 1000 times than if I fell from a height of 1 centimeter, hence I am fragile. In general, every additional meter, up to the point of my destruction, hurts me more than the previous one.

This explains the necessity of considering scale when invoking the PP. Polluting in a small way does not warrant the PP because it is essentially less harmful than polluting in large quantities, since harm is non-linear.

This is just a way of saying that there’s a threshold of harm, and harm becomes ruinous when the threshold is surpassed. Which is true in some cases, but there’s a wide variety of cases and a wide range of thresholds. This is just a framing device meant to set the reader up for the sucker punch, which is that the widespread use of GMOs will be ruinous, at some undefined point. Well, we’ve been hearing that about CAGW for twenty years, and the undefined point keeps receding into the indefinite future.

Thus, when impacts extend to the size of the system, harm is severely exacerbated by non-linear effects. Small impacts, below a threshold of recovery, do not accumulate for systems that retain their structure. Larger impacts cause irreversible damage. We should be careful, however, of actions that may seem small and local but then lead to systemic consequences.

“When impacts extend to the size of the system” means “when ruin is upon us there is ruin.” It’s a tautology without empirical content.

An increase in uncertainty leads to an increase in the probability of ruin, hence “skepticism” is that its impact on decisions should lead to increased, not decreased conservatism in the presence of ruin. Hence skepticism about climate models should lead to more precautionary policies.

This is through the looking glass and into the wild blue yonder. More below.

The rest of the paper is devoted to two things. One of them is making the case against GMOs because they supposedly exemplify the kind of risk that’s covered by the non-naive PP. I’ll let Jon Entine and Gregory Conko (quoted above) speak for me on that issue.

The other thing that the rest of the paper does is to spell out and debunk ten supposedly fallacious arguments against PP. I won’t go into them here because Taleb’s version of PP is self-evidently fallacious. The fallacy can be found in figure 6 of the paper:

[Figure 6 of Taleb et al., “The Precautionary Principle (With Application to the Genetic Modification of Organisms)”]

Taleb pulls an interesting trick here — or perhaps he exposes his fundamental ignorance about probability. Let’s take it a step at a time:

  1. Figure 6 depicts two normal distributions. But what are they normal distributions of? Let’s say that they’re supposed to be normal distributions of the probability of the occurrence of CAGW (however that might be defined) by a certain date, in the absence of further steps to mitigate it (e.g., banning the use of fossil fuels forthwith). There’s no known normal distribution of the probability of CAGW because, as discussed above, CAGW is a unique, hypothesized (future) event which cannot have a probability. It’s not 100 tosses of a fair coin.
  2. The curves must therefore represent something about models that predict the arrival of CAGW by a certain date. Perhaps those predictions are normally distributed, though that has nothing to do with the “probability” of CAGW if all of the predictions are wrong.
  3. The two curves shown in Taleb’s figure 6 are meant (by Taleb) to represent greater and lesser certainty about the arrival of CAGW (or the ruinous scenario of his choice), as depicted by climate models.
  4. But if models are adjusted or built anew in the face of evidence about their shortcomings (i.e., their gross overprediction of temperatures since 1998), the newer models (those with presumably greater certainty) will have two characteristics: (a) The tails will be thinner, as Taleb suggests. (b) The mean will shift to the left or right; that is, they won’t have the same mean.
  5. In the case of CAGW, the mean will shift to the right because it’s already known that extant models overstate the risk of “ruin.” The left tail of the distribution of the new models will therefore shift to the right, further reducing the “probability” of CAGW.
  6. Taleb’s trick is to ignore that shift and, further, to implicitly assume that the two distributions coexist. By doing that he can suggest that there’s an “increase in uncertainty [that] leads to an increase in the probability of ruin.” In fact, there’s a decrease in uncertainty, and therefore a decrease in the probability of ruin.

I’ll say it again: As evidence is gathered, there is less uncertainty; that is, the high-uncertainty condition precedes the low-uncertainty one. The movement from high uncertainty to low uncertainty would result in the assignment of a lower probability to a catastrophic outcome (assuming, for the moment, that such a probability is meaningful). And that would be a good reason to worry less about the eventuality of the catastrophic outcome. Taleb wants to compare the two distributions, as if the earlier one (based on little evidence) were as valid as the later one (based on additional evidence).
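A minimal sketch of that point, with made-up numbers standing in for an early, high-uncertainty model and a later, better-constrained one (nothing here comes from any actual climate model):

    from statistics import NormalDist

    ruin_threshold = 4.0                      # hypothetical "ruin" level of the predicted outcome
    early = NormalDist(mu=3.0, sigma=1.5)     # little evidence: wide spread around a high mean
    later = NormalDist(mu=2.0, sigma=0.5)     # more evidence: thinner tails, mean shifted down

    for label, dist in (("early, high-uncertainty", early), ("later, low-uncertainty", later)):
        tail = 1 - dist.cdf(ruin_threshold)   # probability mass lying beyond the threshold
        print(label, tail)                    # roughly 0.25 versus 0.00003

When the added evidence both thins the tails and moves the mean away from the threshold, the mass beyond the threshold shrinks; the alarming comparison only emerges if the mean is held fixed and both curves are treated as equally current.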

That’s why Taleb counsels against “evidentiary approaches.” In Taleb’s worldview, knowing little about a potential risk to health, welfare, and existence is a good reason to take action with respect to that risk. Therefore, if you know little about the risk, you should act immediately and with all of the resources at your disposal. Why? Because the risk might suddenly cause an irreversible calamity. But that’s not true of CAGW or GMOs. There’s time to gather evidence as to whether there’s truly a looming calamity, and then — if necessary — to take steps to avert it, steps that are more likely to be effective because they’re evidence-based. Further, if there’s not a looming calamity, a tremendous waste of resources will be averted.

It follows from the non-naive PP — as interpreted by Taleb — that all human beings should be sterilized and therefore prevented from procreating. This is so because sometimes just a few human beings — Hitler, Mussolini, and Tojo, for example — can cause wars. And some of those wars have harmed human beings on a nearly global scale. Global sterilization is therefore necessary, to ensure against the birth of new Hitlers, Mussolinis, and Tojos — even if it prevents the birth of new Schweitzers, Salks, Watsons, Cricks, and Mother Teresas.

In other words, the non-naive PP (or Taleb’s version of it) is pseudo-scientific claptrap. It can be used to justify any extreme and nonsensical position that its user wishes to advance. It can be summed up in an Orwellian sentence: There is certainty in uncertainty.

Perhaps this is better: You shouldn’t get out of bed in the morning because you don’t know with certainty everything that will happen to you in the course of the day.

*     *     *

NOTE: The title of Jon Entine’s blog post, quoted above, refers to Taleb as a “dangerous imbecile.” Here’s Entine’s explanation of that characterization:

If you think the headline of this blog [post] is unnecessarily inflammatory, you are right. It’s an ad hominem way to deal with public discourse, and it’s unfair to Nassim Taleb, the New York University statistician and risk analyst. I’m using it to make a point–because it’s Taleb himself who regularly invokes such ugly characterizations of others….

…Taleb portrays GMOs as a ‘catastrophe in waiting’–and has taken to personally lashing out at those who challenge his conclusions–and yes, calling them “imbeciles” or paid shills.

He recently accused Anne Glover, the European Union’s Chief Scientist, and one of the most respected scientists in the world, of being a “dangerous imbecile” for arguing that GM crops and foods are safe and that Europe should apply science based risk analysis to the GMO approval process–views reflected in summary statements by every major independent science organization in the world.

Taleb’s ugly comment was gleefully and widely circulated by anti-GMO activist web sites. GMO Free USA designed a particularly repugnant visual to accompany their post.

Taleb is known for his disagreeable personality–as Keith Kloor at Discover noted, the economist Noah Smith had called Taleb  a “vulgar bombastic windbag”, adding, “and I like him a lot”. He has a right to flaunt an ego bigger than the Goodyear blimp. But that doesn’t make his argument any more persuasive.

*     *     *

Related posts:
“Warmism”: The Myth of Anthropogenic Global Warming
More Evidence against Anthropogenic Global Warming
Yet More Evidence against Anthropogenic Global Warming
Pascal’s Wager, Morality, and the State
Modeling Is Not Science
Fooled by Non-Randomness
Randomness Is Over-Rated
Anthropogenic Global Warming Is Dead, Just Not Buried Yet
Beware the Rare Event
Demystifying Science
Pinker Commits Scientism
AGW: The Death Knell
The Pretence of Knowledge
“The Science Is Settled”
“Settled Science” and the Monty Hall Problem
The Limits of Science, Illustrated by Scientists
Some Thoughts about Probability
Rationalism, Empiricism, and Scientific Knowledge
AGW in Austin?
The “Marketplace” of Ideas
My War on the Misuse of Probability
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
The Precautionary Principle and Pascal’s Wager
AGW in Austin? (II)
Is Science Self-Correcting?

Bigger, Stronger, and Faster — but Not Quicker?

SUBSTANTIALLY REVISED 12/22/16; FURTHER REVISED 01/22/16

There’s some controversial IQ research which suggests that reaction times have slowed and people are getting dumber, not smarter. Here’s Dr. James Thompson’s summary of the hypothesis:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”.  This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong? [“The Victorians Were Cleverer Than Us!” Psychological Comments, April 29, 2013]

Thompson discusses this and other relevant research in many posts, which you can find by searching his blog for Victorians and Woodley. I’m not going to venture my unqualified opinion of Woodley’s hypothesis, but I am going to offer some (perhaps) relevant analysis based on — you guessed it — baseball statistics.

It seems to me that if Woodley’s hypothesis has merit, it ought to be confirmed by the course of major-league batting averages over the decades. Other things being equal, quicker reaction times ought to produce higher batting averages. Of course, there’s a lot to hold equal, given the many changes in equipment, playing conditions, player conditioning, “style” of the game (e.g., greater emphasis on home runs), and other key variables over the course of more than a century.

Undaunted, I used the Play Index search tool at Baseball-Reference.com to obtain single-season batting statistics for “regular” American League (AL) players from 1901 through 2016. My definition of a regular player is one who had at least 3 plate appearances (PAs) per scheduled game in a season. That’s a minimum of 420 PAs in a season from 1901 through 1903, when the AL played a 140-game schedule; 462 PAs in the 154-game seasons from 1904 through 1960; and 486 PAs in the 162-game seasons from 1961 through 2016. I found 6,603 qualifying player-seasons, and a long string of batting statistics for each of them: the batter’s age, his batting average, his number of at-bats, his number of PAs, etc.
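The screening rule can be written in a few lines of Python. This is only a sketch: the file name and the column names (“year,” “PA”) are assumptions about how the exported data might be laid out, not the actual Play Index format.

import pandas as pd

def scheduled_games(year):
    # AL schedule lengths, as described above
    if year <= 1903:
        return 140
    if year <= 1960:
        return 154
    return 162

batting = pd.read_csv("al_batting_1901_2016.csv")          # hypothetical export of single-season batting lines
threshold = batting["year"].apply(lambda y: 3 * scheduled_games(y))
regulars = batting[batting["PA"] >= threshold]              # the qualifying player-seasons (6,603 in the author's data)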

The raw record of batting averages looks like this, fitted with a 6th-order polynomial to trace the shifts over time:

FIGURE 1
[Unadjusted batting averages, 1901-2016]

Everything else being the same, the best fit would be a straight line that rises gradually, falls gradually, or has no slope. The undulation reflects the fact that everything hasn’t stayed the same. For example: major-league baseball wasn’t integrated until 1947, and integration was only token for a couple of decades after that; night games weren’t played until 1935, and didn’t become common until after World War II; a lot of regular players went to war, and those who replaced them were (obviously) of inferior quality — and hitting suffered more than pitching; the “deadball” era ended after the 1919 season, and averages soared in the 1920s and 1930s; fielders’ gloves became larger and larger.

The list goes on, but you get the drift. Playing conditions and the talent pool have changed so much over the decades that it’s hard to pin down just what caused batting averages to undulate rather than move in a relatively straight line. It’s unlikely that batters became a lot better, only to get worse, then better again, and then worse again, and so on.

Something else has been going on — a lot of somethings, in fact. And the 6th-order polynomial captures them in an undifferentiated way. What remains to be explained are the differences between official BA and the estimates yielded by the 6th-order polynomial. Those differences are the stage 1 residuals displayed in this graph:

FIGURE 2
[Stage 1 residuals]

There’s a lot of variability in the residuals, despite the straight, horizontal regression line through them. That line, by the way, represents a 6th-order polynomial fit, not a linear one. So the application of the equation shown in figure 1 does an excellent job of de-trending the data.

The variability of the stage 1 residuals has two causes: (1) general changes in the game and (2) the performance of individual players, given those changes. If the effects of the general changes can be estimated, the remaining, unexplained variability should represent the “true” performance of individual batters.
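For readers who want the mechanics, the stage 1 step amounts to a polynomial fit and a subtraction. Here is a minimal Python sketch, assuming that year and ba are numpy arrays holding the season and the official batting average for each qualifying player-season:

import numpy as np

x = year - year.mean()                 # centering the years helps the high-order fit numerically
coeffs = np.polyfit(x, ba, deg=6)      # the 6th-order polynomial of figure 1
trend = np.polyval(coeffs, x)          # the fitted, undulating trend
stage1_residuals = ba - trend          # the de-trended values plotted in figure 2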

In stage 2, I considered 16 variables in an effort to isolate the general changes. I began by finding the correlations between each of the 16 candidate variables and the stage 1 residuals. I then estimated a regression equation with stage 1 residuals as the dependent variable and the most highly correlated variable as the explanatory variable. I next found the correlations between the residuals of that regression equation and the remaining 15 variables. I introduced the most highly correlated variable into a new regression equation, as a second explanatory variable. I continued this process until I had a regression equation with 16 explanatory variables. I chose to use the 13th equation, which was the last one to introduce a variable with a highly significant p-value (less than 0.01). Along the way, because of collinearity among the variables, the p-values of a few others became high, but I kept them in the equation because they contributed to its overall explanatory power.
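The stepwise procedure just described can be sketched as follows. It is only a sketch: candidates is assumed to be a pandas data frame holding the 16 candidate variables, aligned row for row with the stage 1 residuals, and the names are placeholders rather than the author’s actual code.

import numpy as np
import statsmodels.api as sm

def forward_stepwise(y, candidates, steps=13):
    # At each step, add the remaining candidate most correlated with the current residuals,
    # then refit the regression of the stage 1 residuals on all variables chosen so far.
    chosen = []
    resid = y.copy()
    model = None
    for _ in range(steps):
        remaining = [c for c in candidates.columns if c not in chosen]
        best = max(remaining, key=lambda c: abs(np.corrcoef(candidates[c], resid)[0, 1]))
        chosen.append(best)
        model = sm.OLS(y, sm.add_constant(candidates[chosen])).fit()
        resid = model.resid
    return model, chosen

# model13, vars13 = forward_stepwise(stage1_residuals, candidates, steps=13)
# model13.pvalues and model13.rsquared then correspond to the figures reported below.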

Here’s the 13-variable equation (REV13), with each coefficient given to 3 significant figures:

R1 = 1.22 – 0.0185WW – 0.0270DB – 1.26FA + 0.00500DR + 0.00106PT + 0.00197Pa + 0.00191LT – 0.000122Ba – 0.00000765TR + 0.000816DH – 0.206IP + 0.0153BL – 0.000215CG

Where,

R1 = stage 1 residuals

WW = World War II years (1 for 1942-1945, 0 for all other years)

DB = “deadball” era (1 for 1901-1919, 0 thereafter)

FA = league fielding average for the season

DR = prevalence of performance-enhancing drugs (1 for 1994-2007, 0 for all other seasons)

PT = number of pitchers per team

Pa = average age of league’s pitchers for the season

LT = fraction of teams with stadiums equipped with lights for night baseball

Ba = batter’s age for the season (not a general condition but one that can explain the variability of a batter’s performance)

TR = maximum distance traveled between cities for the season

DH = designated-hitter rule in effect (0 for 1901-1972, 1 for 1973-2016)

IP = average number of innings pitched per pitcher per game (counting all pitchers in the league during a season)

BL = fraction of teams with at least 1 black player

CG = average number of complete games pitched by each team during the season

The r-squared for the equation is 0.035, which seems rather low, but isn’t surprising given the apparent randomness of the dependent variable. Moreover, with 6,603 observations, the equation is extremely significant: the p-value of its F-statistic is 1.99E-43.

A positive coefficient means that the variable increases the value of the stage 1 residuals. That is, it causes batting averages to rise, other things being equal. A negative coefficient means the opposite, of course. Do the signs of the coefficients seem intuitively right, and if not, why are they the opposite of what might be expected? I’ll take them one at a time:

World War II (WW)

A lot of the game’s best batters were in uniform in 1942-1945. That left regular positions open to older, weaker batters, some of whom wouldn’t otherwise have been regulars or even in the major leagues. The negative coefficient on this variable captures the war’s effect on hitting, which suffered despite the fact that a lot of the game’s best pitchers also went to war.

Deadball era (DB)

The so-called deadball era lasted from the beginning of major-league baseball in 1871 through 1919 (roughly). It was called the deadball era because the ball stayed in play for a long time (often for an entire game), so that it lost much of its resilience and became hard to see because it accumulated dirt and scuffs. Those difficulties (for batters) were compounded by the spitball, the use of which was officially curtailed beginning with the 1920 season. (See this and this.) As figure 1 indicates, batting averages rose markedly after 1919, so the negative coefficient on DB is unsurprising.

Performance-enhancing drugs (DR)

Their rampant use seems to have begun in the early 1990s and trailed off in the late 2000s. I assigned a dummy variable of 1 to all seasons from 1994 through 2007 in an effort to capture the effect of PEDs. The coefficient suggests that the effect was (on balance) positive.

Number of pitchers per AL team (PT)

This variable, surprisingly, has a positive coefficient. One would expect the use of more pitchers to cause BA to drop. PT may be a complementary variable, one that’s meaningless without the inclusion of related variable(s). (See IP.)

Average age of AL pitchers (Pa)

The stage 1 residuals rise with Pa until Pa = 27.4, then they begin to drop. This variable represents the difference between 27.4 and the average age of AL pitchers during a particular season. The coefficient is multiplied by 27.4 minus average age; that is, by a positive number for ages lower than 27.4, by zero for age 27.4, and by a negative number for ages above 27.4. The positive coefficient suggests that, other things being equal, pitchers younger than 27.4 give up hits at a lower rate than pitchers older than 27.4. I’m agnostic on the issue.

Night baseball, that is, baseball played under lights (LT)

It has long been thought that batting is more difficult under artificial lighting than in sunlight. This variable measures the fraction of AL teams equipped with lights, but it doesn’t measure the rise in night games as a fraction of all games. I know from observation that that fraction continued to rise even after all AL stadiums were equipped with lights. The positive coefficient on LT suggests that it’s a complementary variable. It’s very highly correlated with BL, for example.

Batter’s age (Ba)

The stage 1 residuals don’t vary with Ba until Ba = 37, whereupon the residuals begin to drop. The coefficient is multiplied by 37 minus the batter’s age; that is, by a positive number for ages lower than 37, by zero for age 37, and by a negative number for ages above 37. The very small negative coefficient probably picks up the effects of batters who were good enough to have long careers and hit for high averages at relatively advanced ages (e.g., Ty Cobb and Ted Williams). Their longevity causes them to be “overrepresented” in the sample.

Maximum distance traveled by AL teams (TR)

Does travel affect play? Probably, but the mode and speed of travel (airplane vs. train) probably also affect it. The tiny negative coefficient on this variable — which is highly correlated with several others — is meaningless, except insofar as it combines with all the other variables to account for the stage 1 residuals. TR is highly correlated with the number of teams (expansion), which suggests that expansion has had little effect on hitting.

Designated-hitter rule (DH)

The small positive coefficient on this variable suggests that the DH is a slightly better hitter, on average, than other regular position players.

Innings pitched per AL pitcher per game (IP)

This variable reflects the long-term trend toward the use of more pitchers in a game, which means that batters more often face rested pitchers who come at them with a different delivery and repertoire of pitches than their predecessors. IP has dropped steadily over the decades, presumably exerting a negative effect on BA. This is reflected in the rather large, negative coefficient on the variable, which means that it’s prudent to treat this variable as a complement to PT (discussed above) and CG (discussed below), both of which have counterintuitive signs.

Integration (BL)

I chose this variable to approximate the effect of the influx of black players (including non-white Hispanics) since 1947. BL measures only the fraction of AL teams that had at least one black player for each full season. It begins at 0.25 in 1948 (the Indians and Browns signed Larry Doby and Hank Thompson during the 1947 season) and rises to 1 in 1960, following the signing of Pumpsie Green by the Red Sox during the 1959 season. The positive coefficient on this variable is consistent with the hypothesis that segregation had prevented the use of players superior to many of the whites who occupied roster slots because of their color.

Complete games per AL team (CG)

A higher rate of complete games should mean that starting pitchers stay in games longer, on average, and therefore give up more hits, on average. The negative coefficient seems to contradict that hypothesis. But there are other, related variables (PT and IP), so this one should be thought of as a complementary variable.

Despite all of that fancy footwork, the equation accounts for only a small portion of the stage 1 residuals:

FIGURE 3
[Stage 2 estimates]

What’s left over — the stage 2 residuals — is (or should be) a good measure of comparative hitting ability, everything else being more or less the same. One thing that’s not the same, and not yet accounted for, is the long-term trend in home-park advantage, which has slightly (and gradually) weakened. Here’s a graph of the inverse of the trend, normalized to the overall mean value of home-park advantage:

FIGURE 4
[Long-term trend in ballpark factors, adjustment normalized]

To get a true picture of a player’s single-season batting average, it’s just a matter of adding the stage 2 residual for that player’s season to the baseline batting average for the entire sample of 6,603 single-season performances. The resulting value is then multiplied by the trend factor given in figure 4. The baseline is .280, which is the overall average for 1901-2016, from which individual performances diverge. Thus, for example, the stage 2 residual for Jimmy Williams’s 1901 season, adjusted for the long-term trend shown in figure 4, is .022. Adding that residual to .280 results in an adjusted (true) BA of .302, which is 15 points (.015) lower than Williams’s official BA of .317 in 1901.
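As a back-of-the-envelope check on the Williams example (a minimal Python snippet, using only the numbers quoted in the text):

baseline_ba = 0.280           # overall AL average, 1901-2016
adjusted_residual = 0.022     # Williams's stage 2 residual after the figure 4 trend adjustment
official_ba = 0.317           # Williams's official 1901 batting average

adjusted_ba = baseline_ba + adjusted_residual
print(round(adjusted_ba, 3))                 # 0.302
print(round(official_ba - adjusted_ba, 3))   # 0.015, i.e., 15 points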

Here are the changes from official to adjusted averages, by year:

FIGURE 5
[BA adjustments by year]

Unsurprisingly, the pattern is roughly a mirror image of the 6th-order polynomial regression line in figure 1.

Here’s how the adjusted batting averages (vertical axis) correlate with the official batting averages (horizontal axis):

FIGURE 6
[Adjusted vs. official BA]

The red line represents the correlation between official and adjusted BA. The dotted gray line represents a perfect correlation. The actual correlation is very high (r = 0.93), and has a slightly lower slope than a perfect correlation. High averages tend to be adjusted downward and low averages tend to be adjusted upward. The gap separates the highly inflated averages of the 1920s and 1930s (lower right) from the less-inflated averages of most other decades (upper left).

Here’s a time-series view of the official and adjusted averages:

FIGURE 7
[Official and adjusted BA, time series]

The wavy, bold line is the 6th-order polynomial fit from figure 1. The lighter, almost-flat line is a 6th-order polynomial fit to the adjusted values. The flatness is a good sign that most of the general changes in game conditions have been accounted for, and that what’s left (the gray plot points) is a good approximation of “real” batting averages.

What about reaction times? Have they improved or deteriorated since 1901? The results are inconclusive. Year (YR) doesn’t enter the stage 2 analysis until step 15, and it’s statistically insignificant (p-value = 0.65). Moreover, with the introduction of another variable in step 16, the sign of the coefficient on YR flips from slightly positive to slightly negative.

In sum, this analysis says nothing definitive about reaction times, even though it sheds a lot of light on the relative hitting prowess of American League batters over the past 116 years. (I’ll have more to say about that in a future post.)

It’s been great fun but it was just one of those things.

Is a Theory of Everything Necessary?

I begin with Wikipedia:

A theory of everything (ToE), final theory, ultimate theory, or master theory is a hypothetical single, all-encompassing, coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe. Finding a ToE is one of the major unsolved problems in physics. Over the past few centuries, two theoretical frameworks have been developed that, as a whole, most closely resemble a ToE. These two theories upon which all modern physics rests are general relativity (GR) and quantum field theory (QFT).

Michael Brooks, in “Has This Physicist Found the Key to Reality?” (The New Statesman, October 21, 2016), puts it this way:

In relativity, time is a mischievous sprite: there is no such thing as a universe-wide “now”. . .

He continues

. . . and movement through space makes once-reliable measures such as length and time intervals stretch and squeeze like putty in Einstein’s hands. Space and time are no longer the plain stage on which our lives play out: they are curved, with a geometry that depends on the mass and energy in any particular region. Worse, this curvature determines our movements. Falling because of gravity is in fact falling because of curves in space and time. Gravity is not so much a force as a geometric state of the universe.

Moreover:

The other troublesome theory is quantum mechanics [the core of QFT], which describes the subatomic world. It, too, is a century old, and it has proved just as disorienting as relativity. As [Carlo] Rovelli puts it, quantum mechanics “reveals to us that, the more we look at the detail of the world, the less constant it is. The world is not made up of tiny pebbles, it is a world of vibrations, a continuous fluctuation, a microscopic swarming of fleeting micro-events.”

But . . .

. . . here is the most disturbing point. Both of these theories are right, in the sense that their predictions have been borne out in countless experiments. And both must be wrong, too. We know that because they contradict one another, and because each fails to take the other into account when trying to explain how the universe works.

All of this is well-known and has been for a long time. I repeat it only to set the stage for my amateur view of the problem.

As is my wont, I turn to baseball for a metaphor. A pitcher who throws a fastball relies in part on gravity to make the pitch hard to hit. Whatever else the ball does because of the release velocity, angle of release, and spin imparted to the ball at the point of release, it also drops a bit from its apparent trajectory because of gravity.

What’s going on inside the ball as it makes its way to home plate? Nothing obvious. The rubber-and-cork core (the “pill”) and the various yarns that are wound around it remain stationary relative to each other, thanks to the tightness of the cover, the tightness of the winding, and the adhesives that are used on the pill and the top layer of wound yarn. (See this video for a complete explanation of how a baseball is manufactured.)

But that’s only part of the story. The cover and the things inside it are composed of molecules, atoms, and various subatomic particles. The subatomic particles, if not the atoms and molecules, are in constant motion throughout the flight of the ball. Yet that motion is so weak that it has no effect on the motion of the ball as it moves toward the plate. (If there’s a physicist in the house, he will correct me if I’m wrong.)

In sum: The trajectory of the baseball (due in part to gravity) is independent of the quantum mechanical effects simultaneously at work inside the baseball. Perhaps the universe is like that. Perhaps there’s no need for a theory of everything. In fact, such a theory may be a will-o’-the-wisp — the unicorn of physics.

 

Economics and Science

This is the second entry in what I expect to be a series of loosely connected posts on economics. The first entry is here.

Science is unnecessarily daunting to the uninitiated, which is to say, the vast majority of the populace. Because scientific illiteracy is rampant, advocates of policy positions — scientists and non-scientists alike — are able to invoke “science” wantonly, thus lending unwarranted authority to their positions.

Here I will dissect science, then turn to economics and begin a discussion of its scientific and non-scientific aspects. It has both, though at least one non-scientific aspect (the Keynesian multiplier) draws an inordinate amount of attention, and has many true believers within the profession.

Science is knowledge, but not all knowledge is science. A scientific body of knowledge is systematic; that is, the granular facts or phenomena which comprise the body of knowledge must be connected in patterned ways. The purported facts or phenomena of a science must represent reality, things that can be observed and measured in some way. Scientists may hypothesize the existence of an unobserved thing (e.g., the ether, dark matter), in an effort to explain observed phenomena. But the unobserved thing stands outside scientific knowledge until its existence is confirmed by observation, or because it remains standing as the only plausible explanation of observable phenomena. Hypothesized things may remain outside the realm of scientific knowledge for a very long time, if not forever. The Higgs boson, for example, was hypothesized in 1964 and has been tentatively (but not conclusively) confirmed since its “discovery” in 2012.

Science has other key characteristics. Facts and patterns must be capable of validation and replication by persons other than those who claim to have found them initially. Patterns should have predictive power; thus, for example, if the sun fails to rise in the east, the model of Earth’s movements which says that it will rise in the east is presumably invalid and must be rejected or modified so that it correctly predicts future sunrises or the lack thereof. Creating a model or tweaking an existing model just to account for a past event (e.g., the failure of the Sun to rise, the apparent increase in global temperatures from the 1970s to the 1990s) proves nothing other than an ability to “predict” the past with accuracy.

Models are usually clothed in the language of mathematics and statistics. But those aren’t scientific disciplines in themselves; they are tools of science. Expressing a theory in mathematical terms may lend the theory a scientific aura, but a theory couched in mathematical terms is not a scientific one unless (a) it can be tested against facts yet to be ascertained and events yet to occur, and (b) it is found to accord with those facts and events consistently, by rigorous statistical tests.

A science may be descriptive rather than mathematical. In a descriptive science (e.g., plant taxonomy), particular phenomena sometimes are described numerically (e.g., the number of leaves on the stem of a species), but the relations among various phenomena are not reducible to mathematics. Nevertheless, a predominantly descriptive discipline will be scientific if the phenomena within its compass are connected in patterned ways, can be validated, and are applicable to newly discovered entities.

Non-scientific disciplines can be useful, whereas some purportedly scientific disciplines verge on charlatanism. Thus, for example:

  • History, by my reckoning, is not a science because its account of events and their relationships is inescapably subjective and incomplete. But a knowledge of history is valuable, nevertheless, for the insights it offers into the influence of human nature on the outcomes of economic and political processes.
  • Physics is a science in most of its sub-disciplines, but there are some (e.g., cosmology) where it descends into the realm of speculation. It is informed, fascinating speculation to be sure, but speculation all the same. The idea of multiverses, for example, can’t be tested, inasmuch as human beings and their tools are bound to the known universe.
  • Economics is a science only to the extent that it yields empirically valid insights about  specific economic phenomena (e.g., the effects of laws and regulations on the prices and outputs of specific goods and services). Then there are concepts like the Keynesian multiplier, about which I’ll say more in this series. It’s a hypothesis that rests on a simplistic, hydraulic view of the economic system. (Other examples of pseudo-scientific economic theories are the labor theory of value and historical determinism.)

In sum, there is no such thing as “science,” writ large; that is, no one may appeal, legitimately, to “science” in the abstract. A particular discipline may be a science, but it is a science only to the extent that it comprises a factual and replicable body of patterned knowledge. Patterned knowledge includes theories with predictive power.

A scientific theory is a hypothesis that has thus far been confirmed by observation. Every scientific theory rests eventually on axioms: self-evident principles that are accepted as true without proof. The principle of uniformity (which can be traced to Galileo) is an example of such an axiom:

Uniformitarianism is the assumption that the same natural laws and processes that operate in the universe now have always operated in the universe in the past and apply everywhere in the universe. It refers to invariance in the metaphysical principles underpinning science, such as the constancy of causal structure throughout space-time, but has also been used to describe spatiotemporal invariance of physical laws. Though an unprovable postulate that cannot be verified using the scientific method, uniformitarianism has been a key first principle of virtually all fields of science

Thus, for example, if observer B is moving away from observer A at a certain speed, observer A will perceive that he is moving away from observer B at that speed. It follows that an observer cannot determine either his absolute velocity or direction of travel in space. The principle of uniformity is a fundamental axiom of modern physics, most notably of Einstein’s special and general theories of relativity.

There’s a fine line between an axiom and a theory. Was the idea of a geocentric universe an axiom or a theory? If it was taken as axiomatic — as it surely was by many scientists for about 2,000 years — then it’s fair to say that an axiom can give way under the pressure of observational evidence. (Such an event is what Thomas Kuhn calls a paradigm shift.) But no matter how far scientists push the boundaries of knowledge, they must at some point rely on untestable axioms, such as the principle of uniformity. There are simply deep and (probably) unsolvable mysteries that science is unlikely to fathom.

This brings me to economics, which — in my view — rests on these self-evident axioms:

1. Each person strives to maximize his or her sense of satisfaction, which may also be called well-being, happiness, or utility (an ugly word favored by economists). Striving isn’t the same as achieving, of course, because of lack of information, emotional decision-making, buyer’s remorse, etc.

2. Happiness can and often does include an empathic concern for the well-being of others; that is, one’s happiness may be served by what is usually labelled altruism or self-sacrifice.

3. Happiness can be and often is served by the attainment of non-material ends. Not all persons (perhaps not even most of them) are interested in the maximization of wealth, that is, claims on the output of goods and services. In sum, not everyone is a wealth maximizer. (But see axiom number 12.)

4. The feeling of satisfaction that an individual derives from a particular product or service is situational — unique to the individual and to the time and place in which the individual undertakes to acquire or enjoy the product or service. Generally, however, there is a (situationally unique) point at which the acquisition or enjoyment of additional units of a particular product or service during a given period of time tends to offer less satisfaction than would the acquisition or enjoyment of units of other products or services that could be obtained at the same cost.

5. The value that a person places on a product or service is subjective. Products and services don’t have intrinsic values that apply to all persons at a given time or period of time.

6. The ability of a person to acquire products and services, and to accumulate wealth, depends (in the absence of third-party interventions) on the valuation of the products and services that are produced in part or whole by the person’s labor (mental or physical), or by the assets that he owns (e.g., a factory building, a software patent). That valuation is partly subjective (e.g., consumers’ valuation of the products and services, an employer’s qualitative evaluation of the person’s contributions to output) and partly objective (e.g., an employer’s knowledge of the price commanded by a product or service, an employer’s measurement of an employee’s contribution to the quantity of output).

7. The persons and firms from which products and services flow are motivated by the acquisition of income, with which they can acquire other products and services, and accumulate wealth for personal purposes (e.g., to pass to heirs) or business purposes (e.g., to expand the business and earn more income). So-called profit maximization (seeking to maximize the difference between the cost of production and revenue from sales) is a key determinant of business decisions but far from the only one. Others include, but aren’t limited to, being a “good neighbor,” providing employment opportunities for local residents, and underwriting philanthropic efforts.

8. The cost of production necessarily influences the price at which a good or service will be offered for sale, but doesn’t solely determine the price at which it will be sold. Selling price depends on the subjective valuation of the product or service, prospective buyers’ incomes, and the prices of other products and services, including those that are direct or close substitutes and those to which users may switch, depending on relative prices.

9. The feeling of satisfaction that a person derives from the acquisition and enjoyment of the “basket” of products and services that he is able to buy, given his income, etc., doesn’t necessarily diminish, as long as the person has access to a great variety of products and services. (This axiom and axiom 12 put paid to the myth of diminishing marginal utility of income.)

10. Work may be a source of satisfaction in itself or it may simply be a means of acquiring and enjoying products and services, or acquiring claims to them by accumulating wealth. Even when work is satisfying in itself, it is subject to the “law” of diminishing marginal satisfaction.

11. Work, for many (but not all) persons, is no longer worth the effort if they become able to subsist comfortably enough by virtue of the wealth that they have accumulated, the availability of redistributive schemes (e.g., Social Security and Medicare), or both. In such cases the accumulation of wealth often ceases and reverses course, as it is “cashed in” to defray the cost of subsistence (which may be far more than minimal).

12. However, there are not a few persons whose “work” is such a great source of satisfaction that they continue doing it until they are no longer capable of doing so. And there are some persons whose “work” is the accumulation of wealth, without limit. Such persons may want to accumulate wealth in order to “do good” or to leave their heirs well off or simply for the satisfaction of running up the score. The justification matters not. There is no theoretical limit to the satisfaction that a particular person may derive from the accumulation of wealth. Moreover, many of the persons (discussed in axiom 11) who aren’t able to accumulate wealth endlessly would do so if they had the ability and the means to take the required risks.

13. Individual degrees of satisfaction (happiness, etc.) are ephemeral, nonquantifiable, and incommensurable. There is no such thing as a social welfare function that a third party (e.g., government) can maximize by taking from A to give to B. If there were such a thing, its value would increase if, for example, A were to punch B in the nose and derive a degree of pleasure that somehow more than offsets the degree of pain incurred by B. (The absurdity of a social-welfare function that allows As to punch Bs in their noses ought to be enough to shame inveterate social engineers into quietude — but it won’t. They derive great satisfaction from meddling.) Moreover, one of the primary excuses for meddling is that income (and thus wealth) has a diminishing marginal utility, so it makes sense to redistribute from those with higher incomes (or more wealth) to those who have less of either. Marginal utility is, however, unknowable (see axioms 4 and 5), and may not always be diminishing (see axioms 9 and 12).

14. Whenever a third party (government, do-gooders, etc.) intervenes in the affairs of others, that third party is merely imposing its preferences on those others. The third party sometimes claims to know what’s best for “society as a whole,” etc., but no third party can know such a thing. (See axiom 13.)

15. It follows from axiom 13 that the welfare of “society as a whole” can’t be aggregated or measured. An estimate of the monetary value of the economic output of a nation’s economy (Gross Domestic Product) is by no means an estimate of the welfare of “society as a whole.” (Again, see axiom 13.)

That may seem like a lot of axioms, which might give you pause about my claim that some aspects of economics are scientific. But economics is inescapably grounded in axioms such as the ones that I propound. This aligns me (mainly) with the Austrian economists, whose leading light was Ludwig von Mises. Gene Callahan writes about him at the website of the Ludwig von Mises Institute:

As I understand [Mises], by categorizing the fundamental principles of economics as a priori truths and not contingent facts open to empirical discovery or refutation, Mises was not claiming that economic law is revealed to us by divine action, like the ten commandments were to Moses. Nor was he proposing that economic principles are hard-wired into our brains by evolution, nor even that we could articulate or comprehend them prior to gaining familiarity with economic behavior through participating in and observing it in our own lives. In fact, it is quite possible for someone to have had a good deal of real experience with economic activity and yet never to have wondered about what basic principles, if any, it exhibits.

Nevertheless, Mises was justified in describing those principles as a priori, because they are logically prior to any empirical study of economic phenomena. Without them it is impossible even to recognize that there is a distinct class of events amenable to economic explanation. It is only by pre-supposing that concepts like intention, purpose, means, ends, satisfaction, and dissatisfaction are characteristic of a certain kind of happening in the world that we can conceive of a subject matter for economics to investigate. Those concepts are the logical prerequisites for distinguishing a domain of economic events from all of the non-economic aspects of our experience, such as the weather, the course of a planet across the night sky, the growth of plants, the breaking of waves on the shore, animal digestion, volcanoes, earthquakes, and so on.

Unless we first postulate that people deliberately undertake previously planned activities with the goal of making their situations, as they subjectively see them, better than they otherwise would be, there would be no grounds for differentiating the exchange that takes place in human society from the exchange of molecules that occurs between two liquids separated by a permeable membrane. And the features which characterize the members of the class of phenomena singled out as the subject matter of a special science must have an axiomatic status for practitioners of that science, for if they reject them then they also reject the rationale for that science’s existence.

Economics is not unique in requiring the adoption of certain assumptions as a pre-condition for using the mode of understanding it offers. Every science is founded on propositions that form the basis rather than the outcome of its investigations. For example, physics takes for granted the reality of the physical world it examines. Any piece of physical evidence it might offer has weight only if it is already assumed that the physical world is real. Nor can physicists demonstrate their assumption that the members of a sequence of similar physical measurements will bear some meaningful and consistent relationship to each other. Any test of a particular type of measurement must pre-suppose the validity of some other way of measuring against which the form under examination is to be judged.

Why do we accept that when we place a yardstick alongside one object, finding that the object stretches across half the length of the yardstick, and then place it alongside another object, which only stretches to a quarter its length, that this means the first object is longer than the second? Certainly not by empirical testing, for any such tests would be meaningless unless we already grant the principle in question. In mathematics we don’t come to know that 2 + 2 always equals 4 by repeatedly grouping two items with two others and counting the resulting collection. That would only show that our answer was correct in the instances we examined — given the assumption that counting works! — but we believe it is universally true. [And it is universally true by the conventions of mathematics. If what we call “4” were instead called “5,” 2 + 2 would always equal 5. — TEA] Biology pre-supposes that there is a significant difference between living things and inert matter, and if it denied that difference it would also be denying its own validity as a special science. . . .

The great fecundity from such analysis in economics is due to the fact that, as acting humans ourselves, we have a direct understanding of human action, something we lack in pondering the behavior of electrons or stars. The contemplative mode of theorizing is made even more important in economics because the creative nature of human choice inherently fails to exhibit the quantitative, empirical regularities, the discovery of which characterizes the modern, physical sciences. (Biology presents us with an interesting intermediate case, as many of its findings are qualitative.) . . .

[A] person can be presented with scores of experiments supporting [the claim that] a particular scientific theory is sound, but no possible experiment ever can demonstrate to him that experimentation is a reasonable means by which to evaluate a scientific theory. Only his intuitive grasp of its plausibility can bring him to accept that proposition. (Unless, of course, he simply adopts it on the authority of others.) He can be led through hundreds of rigorous proofs for various mathematical theorems and be taught the criteria by which they are judged to be sound, but there can be no such proof for the validity of the method itself. (Kurt Gödel famously demonstrated that a formal system of mathematical deduction that is complex enough to model even so basic a topic as arithmetic might avoid either incompleteness or inconsistency, but always must suffer at least one of those flaws.) . . .

This ultimate, inescapable reliance on judgment is illustrated by Lewis Carroll in Alice Through the Looking Glass. He has Alice tell Humpty Dumpty that 365 minus one is 364. Humpty is skeptical, and asks to see the problem done on paper. Alice dutifully writes down:

365 – 1 = 364

Humpty Dumpty studies her work for a moment before declaring that it seems to be right. The serious moral of Carroll’s comic vignette is that formal tools of thinking are useless in convincing someone of their conclusions if he hasn’t already intuitively grasped the basic principles on which they are built.

All of our knowledge ultimately is grounded on our intuitive recognition of the truth when we see it. There is nothing magical or mysterious about the a priori foundations of economics, or at least nothing any more magical or mysterious than there is about our ability to comprehend any other aspect of reality.

(Callahan has more to say here. For a technical discussion of the science of human action, or praxeology, read this. Some glosses on Gödel’s incompleteness theorem are here.)

I omitted an important passage from the preceding quotation, in order to single it out. Callahan says also that

Mises’s protégé F.A. Hayek, while agreeing with his mentor on the a priori nature of the “logic of action” and its foundational status in economics, still came to regard investigating the empirical issues that the logic of action leaves open as a more important undertaking than further examination of that logic itself.

I agree with Hayek. It’s one thing to know axiomatically that the speed of light is constant; it is quite another (and useful) thing to know experimentally that the speed of light (in empty space) is about 671 million miles an hour. Similarly, it is one thing to deduce from the axioms of economics that demand curves generally slope downward; it is quite another (and useful) thing to estimate specific demand functions.
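To illustrate what estimating a specific demand function can look like in practice, here is a minimal Python sketch using simulated data. Everything in it is hypothetical: the data are generated with a known price elasticity of -1.5, and the log-log regression merely recovers that value.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical market data: quantity demanded falls with price, true elasticity -1.5.
price = rng.uniform(1.0, 10.0, size=200)
quantity = 100.0 * price ** -1.5 * np.exp(rng.normal(0.0, 0.1, size=200))

X = sm.add_constant(np.log(price))
fit = sm.OLS(np.log(quantity), X).fit()
print(fit.params[1])    # estimated price elasticity of demand, close to -1.5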

But one must always be mindful of the limitations of quantitative methods in economics. As James Sheehan writes at the website of the Mises Institute,

economists are prone to error when they ascribe excessive precision to advanced statistical techniques. They assume, falsely, that a voluminous amount of historical observations (sample data) can help them to make inferences about the future. They presume that probability distributions follow a bell-shaped pattern. They make no provision for the possibility that past correlations between economic variables and data were coincidences.

Nor do they account for the possibility, as economist Robert Lucas demonstrated, that people will incorporate predictable patterns into their expectations, thus canceling out the predictive value of such patterns. . . .

As [Nassim Nicholas] Taleb points out [in Fooled by Randomness], the popular Monte Carlo simulation “is more a way of thinking than a computational method.” Employing this way of thinking can enhance one’s understanding only if its weaknesses are properly understood and accounted for. . . .

Taleb’s critique of econometrics is quite compatible with Austrian economics, which holds that dynamic human actions are too subjective and variegated to be accurately modeled and predicted.

In some parts of Fooled by Randomness, Taleb almost sounds Austrian in his criticisms of economists who worship “the efficient market religion.” Such economists are misguided, he argues, because they begin with the flawed hypothesis that human beings act rationally and do what is mathematically “optimal.” . . .

As opposed to a Utopian Vision, in which human beings are rational and perfectible (by state action), Taleb adopts what he calls a Tragic Vision: “We are faulty and there is no need to bother trying to correct our flaws.” It is refreshing to see a highly successful practitioner of statistics and finance adopt a contrarian viewpoint towards economics.

Yet, as Arnold Kling explains, many (perhaps most) economists have lost sight of the axioms of economics in their misplaced zeal to emulate the methods of the physical sciences:

The most distinctive trend in economic research over the past hundred years has been the increased use of mathematics. In the wake of Paul Samuelson’s (Nobel 1970) Ph.D dissertation, published in 1948, calculus became a requirement for anyone wishing to obtain an economics degree. By 1980, every serious graduate student was expected to be able to understand the work of Kenneth Arrow (Nobel 1972) and Gerard Debreu (Nobel 1983), which required mathematics several semesters beyond first-year calculus.

Today, the “theory sequence” at most top-tier graduate schools in economics is controlled by math bigots. As a result, it is impossible to survive as an economics graduate student with a math background that is less than that of an undergraduate math major. In fact, I have heard that at this year’s American Economic Association meetings, at a seminar on graduate education one professor quite proudly said that he ignored prospective students’ grades in economics courses, because their math proficiency was the key predictor of their ability to pass the coursework required to obtain an advanced degree.

The raising of the mathematical bar in graduate schools over the past several decades has driven many intelligent men and women (perhaps women especially) to pursue other fields. The graduate training process filters out students who might contribute from a perspective of anthropology, biology, psychology, history, or even intense curiosity about economic issues. Instead, the top graduate schools behave as if their goal were to produce a sort of idiot-savant, capable of appreciating and adding to the mathematical contributions of other idiot-savants, but not necessarily possessed of any interest in or ability to comprehend the world to which an economist ought to pay attention.

. . . The basic question of What Causes Prosperity? is not a question of how trading opportunities play out among a given array of goods. Instead, it is a question of how innovation takes place or does not take place in the context of institutional factors that are still poorly understood.

Mathematics, as I have said, is a tool of science; it’s not science in itself. Dressing hypothetical relationships in the garb of mathematics doesn’t validate them.

Where, then, is the science in economics? And where is the nonsense? I’ve given you some hints (and more than hints). There’s more to come.

Not-So-Random Thoughts (XVIII)

Links to the other posts in this occasional series may be found at “Favorite Posts,” just below the list of topics.

Charles Murray opines about “America Against Itself“:

With the publication in 2012 of Coming Apart: The State of White America, 1960-2010, political scientist Charles Murray – celebrated and denigrated in equal measure for his earlier works, Losing Ground (1984) and The Bell Curve (1994) – produced a searing, searching analysis of a nation cleaving along the lines of class, a nation, as he put it, ‘coming apart at the seams’. On the one side of this conflicted society, as Murray sees it, there is the intellectual or ‘cognitive’ elite, graduates of America’s leading universities, bound together through marriage and work, and clustered together in the same exclusive zipcodes, places such as Beverly Hills, Santa Monica and Boston. In these communities of the likeminded, which Murray gives the fictional title of ‘Belmont’, the inhabitants share the same values, the same moral outlook, the same distinct sense of themselves as superior. And on the other side, there is the ‘new lower class’, the white Americans who left education with no more than a high-school diploma, who increasingly divorce among themselves, endure unemployment together, and are gathered in neighbourhoods that Murray gives the title of ‘Fishtown’ – inspired by an actual white, blue-collar neighbourhood of the same name in Philadelphia.

It is in Fishtown that the trends Murray identifies as the most damaging over the past 50 years – family breakdown, loss of employment, crime and a loss of social capital – are felt and experienced. Its inhabitants have a set of values (albeit threadbare ones), an outlook and a way of life that are entirely at odds with those from Belmont. And it is between these two almost entirely distinct moral communities, that the new Culture Wars now appear to be being fought….

Collins: I was thinking about how, in Coming Apart, you explore how the elites seek to distance themselves from the working class. They eat so-called healthier foods, they have different child-rearing practices, and so on. Then, from afar, they preach their preferred ways to the working class, as if they know better. The elites may no longer preach traditional civic virtues, as you note in Coming Apart, but they are still preaching, in a way. Only now they’re preaching about health, parenting and other things.

Murray: They are preaching. They are legislating. They are creating policies. The elites (on both the right and the left) do not get excited about low-skill immigration. Let’s face it, if you are members of the elite, immigration provides you with cheap nannies, cheap lawn care, and so on. There are a variety of ways in which it is a case of ‘hey, it’s no skin off my back’ to have all of these new workers. The elites are promulgating policies for which they do not pay the price. That’s true of immigration, that’s true of education. When they support the teachers’ unions in all sorts of practices that are terrible for kids, they don’t pay that price. Either they send their kids to private schools, or they send their kids to schools in affluent suburbs in which they, the parents, really do have a lot of de facto influence over how the school is run.

So they don’t pay the price for policy after policy. Perhaps the most irritating to me – and here we are talking about preaching – is how they are constantly criticising the working class for being racist, for seeking to live in neighbourhoods in which whites are the majority. The elites live in zipcodes that are overwhelmingly white, with very few blacks and Latinos. The only significant minorities in elite zipcodes are East and South Asians. And, as the American sociologist Andrew Hacker has said, Asians are ‘honorary whites’. The integration that you have in elite neighbourhoods is only for the model minority, not for other minorities. That’s a kind of hypocrisy, to call working-class whites ‘racist’ for doing exactly the same thing that the elites do. It’s terrible.

The elites live in a bubble, which Murray explains in Coming Apart, and which I discuss in “Are You in the Bubble?” — I’m not — and “Bubbling Along.”

*     *     *

Meanwhile, in the climate war, there’s an interesting piece about scientists who got it right, but whose article was pulled because they used pseudonyms. In “Scientists Published Climate Research Under Fake Names. Then They Were Caught” we learn that

they had constructed a model, a mathematical argument, for calculating the average surface temperature of a rocky planet. Using just two factors — electromagnetic radiation beamed by the sun into the atmosphere and the atmospheric pressure at a planet’s surface — the scientists could predict a planet’s temperature. The physical principle, they said, was similar to the way that high-pressure air ignites fuel in a diesel engine.

If proved to be the case on Earth, the model would have dramatic implications: Our planet is warming, but the solar radiation and our atmosphere would be to blame, not us.

It seems to me that their real sin was contradicting the “settled science” of climatology.

Well, Francis Menton — author of “The ‘Science’ Underlying Climate Alarmism Turns Up Missing” — has something to say about that “settled science”:

In the list of President Obama’s favorite things to do, using government power to save the world from human-caused “climate change” has to rank at the top.  From the time of his nomination acceptance speech in June 2008 (“this was the moment when the rise of the oceans began to slow and our planet began to heal . . .”), through all of his State of the Union addresses, and right up to the present, he has never missed an opportunity to lecture us on how atmospheric warming from our sinful “greenhouse gas” emissions is the greatest crisis facing humanity….

But is there actually any scientific basis for this?  Supposedly, it’s to be found in a document uttered by EPA back in December 2009, known as the “Endangerment Finding.”  In said document, the geniuses at EPA purport to find that the emissions of “greenhouse gases” into the atmosphere are causing a danger to human health and welfare through the greenhouse warming mechanism.  But, you ask, is there any actual proof of that?  EPA’s answer (found in the Endangerment Finding) is the “Three Lines of Evidence”….

The news is that a major new work of research, from a large group of top scientists and mathematicians, asserts that EPA’s “lines of evidence,” and thus its Endangerment Finding, have been scientifically invalidated….

So the authors of this Report, operating without government or industry funding, compiled the best available atmospheric temperature time series from 13 independent sources (satellites, balloons, buoys, and surface records), and then backed out only ENSO (i.e., El Nino/La Nina) effects.  And with that data and that sole adjustment they found: no evidence of the so-called Tropical Hot Spot that is the key to EPA’s claimed “basic physical understanding” of the claimed atmospheric greenhouse warming model, plus no statistically significant atmospheric warming at all to be explained.

What an amazing non-coincidence. That’s exactly what I found when I looked at the temperature record for Austin, Texas, since the late 1960s, when AGW was supposedly making life miserable for the planet. See “AGW in Austin? (II)” and the list of related readings and posts at the bottom. See also “Is Science Self-Correcting?” (answer: no).
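
For readers curious about what “backing out” ENSO amounts to in practice, here is a minimal sketch, in Python, of that kind of adjustment followed by a trend test: regress a temperature-anomaly series on an ENSO index, subtract the fitted ENSO component, and test whether the residuals still show a statistically significant linear trend. The data below are synthetic placeholders, so this illustrates the general method, not the Report’s actual analysis or results.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Synthetic placeholders standing in for real series: 'temp' plays the role of
# monthly temperature anomalies, 'enso' an ENSO index (zero-centered, MEI-like).
n_months = 480
months = np.arange(n_months)
enso = rng.normal(0.0, 1.0, n_months)
temp = 0.4 * enso + rng.normal(0.0, 0.2, n_months)  # anomalies driven by ENSO plus noise

# Step 1: regress temperature on the ENSO index and remove ("back out") its effect.
enso_fit = linregress(enso, temp)
residual = temp - (enso_fit.intercept + enso_fit.slope * enso)

# Step 2: test the ENSO-adjusted residuals for a statistically significant linear trend.
trend = linregress(months, residual)
print(f"ENSO coefficient: {enso_fit.slope:.3f}")
print(f"Residual trend: {trend.slope * 120:.4f} deg/decade (p = {trend.pvalue:.3f})")
```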

*     *     *

REVISED 11/18/16

Ten years ago, I posted “An Immigration Roundup,” a collection of 13 posts dated March 29 through September 22, 2006. The bottom line: to encourage and allow rampant illegal immigration borders on social and economic suicide. I remain a hardliner because of the higher crime rate among Hispanics (“Immigration and Crime”), and because of Steven Camarota’s “So What Is the Fiscal and Economic Impact of Immigration?”:

The National Academies of Sciences, Engineering, and Medicine have just released what can fairly be described as the most comprehensive look at the economic and fiscal impact of immigration on the United States. It represents an update of sorts of a similar NAS study released in 1997, in the middle of an earlier immigration debate. Overall the report is quite balanced, with a lot of interesting findings….
The most straightforward part of the study is its assemblage of estimates of the current fiscal impact of immigrants. The study shows that immigrants (legal and illegal) do not come close to paying enough in taxes to cover their consumption of public services at the present time. The NAS present eight different scenarios based on different assumptions about the current fiscal impact of immigrants and their dependent children — and every scenario is negative. No matter what assumption the NAS makes, immigrants use more in public services than they pay in taxes. The largest net drain they report is $299 billion a year. It should be pointed out that native-born Americans are also shown to be a net fiscal drain, mainly because of the federal budget deficit — Washington gives out a lot more than it takes in. But the fiscal drain created by immigrants is disproportionately large relative to the size of their population. Equally important, a fiscal drain caused by natives may be unavoidable. Adding more immigrants who create a fiscal drain, on the other hand, can be avoided with a different immigration policy….
With regard to economics — jobs and wages — the results in the NAS study, based on the standard economic model, show that immigration does make the U.S. economy larger by adding workers and population. But a larger economy is not necessarily a benefit to natives. The report estimates that the actual benefit to the native-born could be $54.2 billion a year — referred to as the “immigrant surplus.” This is the benefit that accrues to American businesses because immigration increases the supply of workers and reduces American wages. Several points need to be made about this estimate. First, to generate this surplus, immigration has to create a very large redistribution of income from workers to owners of capital. The model works this way: Immigration reduces the wages of natives in competition with immigrant workers by $493.9 billion annually, but it increases the income of businesses by $548.1 billion, for a net gain of $54.2 billion. Unfortunately, the NAS does not report this large income redistribution, though it provides all the information necessary to calculate it. A second key point about this economic gain is that, relative to the income of natives, the benefit is very small, representing a “0.31 percent overall increase in income” for native-born Americans.
Third, the report also summarizes empirical studies that have tried to measure directly the impact of immigration on the wages of natives (the analysis above being based on economic theory rather than direct measurement). The size of the wage impact in those empirical studies is similar to that shown above. The NAS report cites over a dozen studies indicating that immigration does reduce wages primarily for the least-educated and poorest Americans. It must be pointed out, however, that there remains some debate among economists about immigration’s wage impact. The fourth and perhaps most important point about the “immigrant surplus” is that it is eaten up by the drain on the public fisc. For example, the average of all eight fiscal scenarios is a net drain (taxes minus services) of $83 billion a year at the present time, a good deal larger than the $54.2 billion immigrant surplus.
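
To spell out the arithmetic in that last point: $548.1 billion in gains to businesses minus $493.9 billion in lost native wages leaves the $54.2 billion “immigrant surplus”; set that against the average fiscal drain of roughly $83 billion a year and, on the study’s own numbers, the native-born come out about $29 billion a year in the red.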

There’s much more, but that’s enough for me. Build that wall!

*     *     *

It’s also time to revisit the question of crime. Heather Mac Donald says “Yes, the Ferguson Effect Is Real,” and Paul Mirengoff shows that “Violent Crime Jumped in 2015.” I got to the root of the problem in “Crime Revisited,” to which I’ve added “Amen to That” and “Double Amen.”

What’s the root of the problem? A certain, violence-prone racial minority, of course, and also under-incarceration. Follow all of the links in the preceding paragraph, and read and weep.

Brandeis’s Ignorance

Louis D. Brandeis (1856-1941; Supreme Court justice, 1916-1939) penned many snappy aphorisms. Here’s one that “progressives” are especially fond of: “Behind every argument is someone’s ignorance.” Here it is in larger context:

Behind every argument is someone’s ignorance. Re-discover the foundation of truth and the purpose and causes of dispute immediately disappear.

Spoken like the true technocrat that Brandeis was. The “truth” was his to know, and to enforce through government action, beginning long before his ascent to the Supreme Court.

There are fundamental and irreconcilable differences that Brandeis’s “truth” cannot bridge. Brandeis and his intellectual kin would never admit that, of course, so bent were (and are) they on imposing their “truth” on all Americans.

Is it ignorant to value liberty over the promise of economic security, especially when that security is obtained at the expense of liberty?

Is it ignorant to treat terrorism as a risk that’s categorically different than a traffic accident or lightning strike?

Is it ignorant to defend traditional values and their civilizing influence against the depredations of one’s cultural and physical enemies?

Is it ignorant to fear that America’s police and armed forces will become less able to defend peaceful citizens when those forces are weakened in the name of “sexual equality”?

Is it ignorant to oppose the subversion of the institution of marriage, which is the bedrock of civil society, in the name of “marriage equality”?

“Progressives” will answer “yes” to all the questions. Thus proving the ignorance of “progressives” and the wisdom of opposing “progressivism.”

Related posts:
Getting It All Wrong about the Risk of Terrorism
A Skewed Perspective on Terrorism
Intellectuals and Capitalism
Intellectuals and Society: A Review
The Left’s Agenda
The Left and Its Delusions
The Myth That Same-Sex “Marriage” Causes No Harm
The Spoiled Children of Capitalism
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
Are You in the Bubble?
Defense as an Investment in Liberty and Prosperity
Abortion, Doublethink, and Left-Wing Blather
Abortion, “Gay Rights,” and Liberty
The 80-20 Rule, Illustrated
Economic Horror Stories: The Great “Demancipation” and Economic Stagnation
The Culture War
The Keynesian Multiplier: Phony Math
The True Multiplier
The Pretence of Knowledge
Social Accounting: A Tool of Social Engineering
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
A Case for Redistribution, Not Made
Evolution, Culture, and “Diversity”
Ruminations on the Left in America
McCloskey on Piketty
The Rahn Curve Revisited
Nature, Nurture, and Inequality
The Real Burden of Government
Diminishing Marginal Utility and the Redistributive Urge
Rationalism, Empiricism, and Scientific Knowledge
Academic Ignorance
The Euphemism Conquers All
Superiority
The “Marketplace” of Ideas
Whiners
A Dose of Reality
Ty Cobb and the State of Science
Understanding Probability: Pascal’s Wager and Catastrophic Global Warming
God-Like Minds
The Beginning of the End of Liberty in America
Revisiting the “Marketplace” of Ideas
The Technocratic Illusion
Capitalism, Competition, Prosperity, and Happiness
Further Thoughts about the Keynesian Multiplier
The Precautionary Principle and Pascal’s Wager
Marriage: Privatize It and Revitalize It
From Each According to His Ability…
Non-Judgmentalism as Leftist Condescension
An Addendum to (Asymmetrical) Ideological Warfare
Unsurprising News about Health-Care Costs
Further Pretensions of Knowledge
“And the Truth Shall Set You Free”
Social Justice vs. Liberty
The Wages of Simplistic Economics
Is Science Self-Correcting?

“Feelings, nothing more than feelings”

Physicalism is the thesis that everything is physical, or as contemporary philosophers sometimes put it, that everything supervenes on the physical. The thesis is usually intended as a metaphysical thesis, parallel to the thesis attributed to the ancient Greek philosopher Thales, that everything is water, or the idealism of the 18th Century philosopher Berkeley, that everything is mental. The general idea is that the nature of the actual world (i.e. the universe and everything in it) conforms to a certain condition, the condition of being physical. Of course, physicalists don’t deny that the world might contain many items that at first glance don’t seem physical — items of a biological, or psychological, or moral, or social nature. But they insist nevertheless that at the end of the day such items are either physical or supervene on the physical.

Daniel Stoljar, “Physicalism” (Stanford Encyclopedia of Philosophy,
first published February 13, 2001, substantively revised March 9, 2015)

Robin Hanson, an economics professor and former physicist, takes the physicalist position in “All Is Simple Parts Interacting Simply”:

There is nothing that we know of that isn’t described well by physics, and everything that physicists know of is well described as many simple parts interacting simply. Parts are localized in space, have interactions localized in time, and interaction effects don’t move in space faster than the speed of light. Simple parts have internal states that can be specified with just a few bits (or qubits), and each part only interacts directly with a few other parts close in space and time. Since each interaction is only between a few bits on a few sides, it must also be simple. Furthermore, all known interactions are mutual in the sense that the state on all sides is influenced by states of the other sides….

Not only do we know that in general everything is made of simple parts interacting simply, for pretty much everything that happens here on Earth we know those parts and interactions in great precise detail. Yes there are still some areas of physics we don’t fully understand, but we also know that those uncertainties have almost nothing to say about ordinary events here on Earth….

Now it is true that when many simple parts are combined into complex arrangements, it can be very hard to calculate the detailed outcomes they produce. This isn’t because such outcomes aren’t implied by the math, but because it can be hard to calculate what math implies.

However,

what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

Thus it seems hard to square a belief in this extra feeling stuff with standard physics in either case, where feeling stuff does or does not have strong interactions with ordinary stuff. The obvious conclusion: extra feeling stuff just doesn’t exist.

Of course the “feeling stuff” interacts strongly and reliably with the familiar parts of the world — unless you’re a Robin Hanson, who seems to have no “feeling stuff.” Has he never been insulted, cut off by a rude lane-changer, been in love, held a baby in his arms, and so on unto infinity?

Hanson continues:

If this type of [strong] interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

Which is just a stupid thing to say. Physicists haven’t measured the interactions — and probably never will — because they’re not the kinds of phenomena that physicists study. Psychologists, yes; physicists, no.

Not being satisfied with obtuseness and stupidity, Hanson concedes the existence of “feelings,” but jumps to a conclusion in order to dismiss them:

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist….

Thus it seems hard to square a belief in this extra feeling stuff with standard physics in either case, where feeling stuff does or does not have strong interactions with ordinary stuff. The obvious conclusion: extra feeling stuff just doesn’t exist.

How does Hanson — the erstwhile physicist — know any of this? I submit that he doesn’t know. He’s just arguing circularly, as an already-committed physicalist.

First, Hanson assumes that feelings aren’t “real” because physicists haven’t measured their effects. But that failure has been for lack of trying.

Then Hanson assumes that the absence of evidence is evidence of absence. Specifically, because there’s no evidence (as he defines it) for the existence of “feelings,” their existence (if real) is merely coincidental with claims of their existence.

And then Hanson the Obtuse ignores strong interactions of “feeling stuff” with “ordinary stuff.” Which suggests that he has never experienced love, desire, or hate (for starters).

It would be reasonable for Hanson to suggest that feelings are real, in a physical sense, in that they represent chemical states of the central nervous system. He could then claim that feelings don’t exist apart from such states; that is, “feeling stuff” is nothing more than a physical phenomenon. Hanson makes that claim, but in a roundabout way:

If everything around us is explained by ordinary physics, then a detailed examination of the ordinary physics of familiar systems will eventually tell us everything there is to know about the causes and consequences of our feelings. It will say how many different feelings we are capable of, what outside factors influence them, and how our words and actions depend on them.

However, he gets there by assuming an answer to the question whether “feelings” are something real and apart from physical existence. He hasn’t proven anything, one way or the other.

Hanson’s blog is called Overcoming Bias. It’s an apt title: Hanson has a lot of bias to overcome.

Related posts:
Why I Am Not an Extreme Libertarian
Blackmail, Anyone?
NEVER FORGIVE, NEVER FORGET, NEVER RELENT!
Utilitarianism vs. Liberty (II)