Free Markets and Democracy

I am not slavishly devoted to free markets.

And I am deeply cynical about democracy as it is effected through electoral politics. But to almost everyone “democracy” is electoral democracy — and a “good thing”.

Of course, a goodly fraction of the people who think of “democracy” as a good thing have a particular formulation in mind: The “people” ought to decide how resources are allocated, businesses are run, profits are distributed, etc., etc., etc. The only practical way for such things to be done is for the “people” to elect office-holders who will use the power of government to make such things happen, as they (the office-holders and their unelected bureaucratic minions) prefer them to be done.

So the end result of electoral democracy isn’t democratic at all. The masses of people who are affected by government decisions about their social and economic affairs don’t really have a say in the making of those decisions. They only have a say in the election of office-holders who offer vague and nice-sounding promises about the things that they will accomplish. Those office-holders then turn things over to bureaucrats who have their own, very specific, undemocratic views about what should be accomplished, and how.

Which brings me back to free markets. Free markets are those in which buyers and sellers, through the price mechanism, determine what products and services should be produced, at what prices, and for whom. Every market participant acts voluntarily, and no one is coerced into selling something that he doesn’t want to produce or buying something that he doesn’t want to have. (Government intervention in markets yields exactly that kind of coercion by dictating, in effect, what can and cannot be produced, under what conditions, and by whom. The consumer is therefore coerced into a range of choices, or non-choices, that aren’t the ones he would prefer.)

Free markets, in sum, are democratic, in that their outcomes are determined directly by the participants in those markets.

And so we are left with the paradox that the loudest proponents of “democracy” are responsible for subverting it by their adamant opposition to free markets.

Competitiveness in Major-League Baseball

Only 30 days and fewer than 30 games per team remain in major-league baseball’s regular season. There are all-but-certain winners in three of six divisions: the New York Yankees, American League (AL) East; Houston Astros, AL West; and Los Angeles Dodgers, National League (NL) West.

The Boston Red Sox, last year’s AL East and World Series champions, probably won’t make it to the AL wild-card playoff game. The Milwaukee Brewers, last year’s NL Central champs, are in the same boat. The doormat of AL Central, the Detroit Tigers, are handily winning the race to the bottom, with this year’s worst record in the major leagues.

Anecdotes, however, won’t settle the question whether major-league baseball is becoming more or less competitive. Numbers won’t settle the question, either, but they might shed some light on the matter. Consider this graph, which I will explain and discuss below:


Based on statistics for the National League and American League compiled at Baseball-Reference.com.

Though the NL began play in 1876, I have analyzed its record from 1901 through 2018, for parallelism with the AL, which began play in 1901. The rough similarity of the two time series lends weight to the analysis that I will offer shortly.

First, what do the numbers mean? The deviation between a team’s won-lost (W-L) record and the average for the league is simply

Dt = Rt – Rl, where Rt is the team’s record and Rl is the league’s record in a given season.

If the team’s record is .600 and the league’s record is .500 (as it always was until the onset of interleague play in 1997), then Dt = .100. And if a team’s record is .400 and the league’s record is .500, then Dt = -.100. Given that wins and losses cancel each other, the mean deviation for all teams in a league would be zero, or very near zero, which wouldn’t tell us much about the spread around the league average. So I use the absolute values of Dt and average them. In the case of teams with deviations of .100 and -.100, the absolute values of the deviations would be .100 and .100, yielding a mean of .100. In a more closely contested season, the deviations for the two teams might be .050 and -.050, yielding a mean absolute deviation of .050.

The smaller the mean absolute deviation, the more competitive the league in that season. Season-by-season plots of the means are rather jagged, obscuring long-term trends. I therefore used centered five-year averages of mean absolute deviations.
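For concreteness, here is a minimal Python sketch of that computation; the records below are hypothetical stand-ins, not the Baseball-Reference data:

```python
# Mean absolute deviation of team W-L records from the league average,
# then a centered five-year average to smooth the season-by-season series.
# The records below are hypothetical stand-ins, not Baseball-Reference data.

def mean_abs_deviation(team_records, league_record=0.500):
    """Average of |team record - league record| across one league-season."""
    return sum(abs(r - league_record) for r in team_records) / len(team_records)

def centered_moving_average(series, window=5):
    """Centered moving average; seasons without a full window are dropped."""
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]

# One hypothetical eight-team league-season (the records average .500).
records = [.630, .565, .552, .519, .487, .455, .435, .357]
print(round(mean_abs_deviation(records), 3))  # the spread around .500
```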

Both leagues generally became more competitive from the early 1900s until around 1970. Since then, the AL has experienced two less-competitive periods: the late 1970s (when the New York Yankees re-emerged as a dominant team), and the early 2000s (when the Yankees were enjoying another era of dominance that began in the late 1990s). (The Yankees’ earlier periods of dominance show up as local peaks in the black line centered at 1930 and the early 1950s.)

The NL line highlights the dominance of the Chicago Cubs in the early 1900s and the recent dominance of the Chicago Cubs, Los Angeles Dodgers, and St. Louis Cardinals.

What explains the long-term movement toward greater competitiveness in both leagues? Here’s my hypothesis:

Integration, which began in the late 1940s, eventually expanded the pool of baseball talent by opening the door not only to American blacks but also to black and white Hispanics from Latin America. (There were a few non-black Hispanic players in major-league ball before integration, but they were notable freckles on the game’s pale complexion.) Integration of blacks and Latins continued for decades after the last major-league team was nominally integrated in the late 1950s.

Meanwhile, the minor leagues were dwindling — from highs of 59 leagues and 448 teams in 1949 to 15 leagues and 176 teams in 2017. Players who might otherwise have stayed in the minor leagues have been promoted to the major leagues more often than in the past.

That tendency was magnified by expansion of the major leagues. The AL added 2 teams in 1961, and the NL followed suit in 1962 (2 teams). Further expansion in 1969 (2 teams in each league), 1977 (2 teams in the AL), 1993 (2 teams in the NL), and 1998 (1 team in each league) brought the number of major-league teams to 30.

While there are now 88 percent more major-league teams than there were in 1949, there are far fewer teams in major-league and minor-league ball, combined. Meanwhile, the population of the United States has more than doubled, and that source of talent has been augmented significantly by the recruitment of players from Latin America.

Further, free agency, which began in the mid-1970s, allowed weaker teams to attract high-quality players by offering them more money than stronger teams found it wise to offer, given roster limitations. Each team may carry only 25 players on its active roster until the final month of the season. Therefore, no matter how much money a team’s owner has, the limit on the size of his team’s roster constrains his ability to sign the best players available for every position. So the richer pool of talent is spread more evenly across teams.

“Economic Growth Since World War II” Updated

I have updated key portions of “Economic Growth Since World War II”; specifically, these sections:

II. The Record Since World War II

VI. Employment vs. Big Government and Disincentives to Work

There’s a long way to go before the dead hand of big government has been lifted enough to allow the restoration of robust growth — last enjoyed in the 1980s (see the table following figure 3). If a Democrat is elected president in 2020, the dead hand will, instead, lie more heavily on the economy.

Conservatism vs. “Libertarianism” and Leftism on the Moral Dimension

I said this recently:

Conservatives rightly defend free markets because they exemplify the learning from trial and error that underlies the wisdom of voluntarily evolved social norms — norms that bind a people in mutual trust, respect, and forbearance.

Conservatives also rightly condemn free markets — or some of the produce of free markets — because that produce is often destructive of social norms.

What about “libertarianism”* and leftism? So-called libertarians, if they are being intellectually consistent, will tell you that it doesn’t matter what markets produce (as long as the markets are truly free). What matters, in their view, is whether the produce of markets is used to cause harm to others. (I have elsewhere addressed the vacuousness and fatuousness of the harm principle.) Therein lies a conundrum — or perhaps a paradox — for if the produce of markets can be used to cause harm, that is, used in immoral ways, the produce (and the act of producing it) may be immoral, that is, inherently and unambiguously harmful.

Guns aren’t a good example because they can be (and are) used in peaceful and productive or neutral ways (e.g., hunting for food, target-shooting for the enjoyment of it). Their use in self-defense and in wars against enemies, though not peaceful, is productive for the persons and nations engaged in defensive actions. (A war that contains elements of offense — even preemption — may nevertheless be defensive.)

Child pornography, on the other hand, is rightly outlawed because the production of it involves either (a) forcible participation by children or (b) the exploitation of “willing” children who are too young and inexperienced in life to know that they are subjecting themselves to physical and emotional dangers. Inasmuch as the produce (child pornography) can result only from an immoral process (physical or emotional coercion), the produce is therefore inherently and unambiguously immoral. I will leave it to the reader to find similar examples.

Here, I will turn in a different direction and tread on controversial ground by saying that the so-called marketplace of ideas sometimes yields inherently and unambiguously immoral outcomes:

Unlike true markets, where competition usually eliminates sellers whose products and services are found wanting, the competition of ideas often leads to the broad acceptance of superstitions, crackpot notions, and plausible but mistaken theories. These often find their way into government policy, where they are imposed on citizens and taxpayers for the psychic benefit of politicians and bureaucrats and the monetary benefit of their cronies.

The “marketplace” of ideas is replete with vendors who are crackpots, charlatans, and petty tyrants. They run rampant in the media, academia, and government.

If that were the only example of odious outcomes, it would be more than enough to convince me (if I needed convincing) that “libertarians” are dangerously naive. They are the kind of people who believe that disputes can and will be resolved peacefully through the application of “reason”, when they live in a world where most of the evidence runs in the other direction. They are as Lord Halifax — Winston Churchill’s first foreign secretary — was to Churchill: whimpering appeasers vs. defiant defenders of civilization.

The willingness of leftists (especially office-holders, office-seekers, and apparatchiks) to accept market outcomes is easier to analyze. Despite their preference for government dictation of market outcomes, they are willing to accept those outcomes as long as they comport with what should be, as leftists happen to see it at the moment. Leftists are notoriously unsteady in their views of what should be, because those views are contrived to yield power. Today’s incessant attacks on “racism”, “inequality”, and “sexism” are aimed at disarming the (rather too reluctant and gentlemanly) defenders of liberty (which isn’t synonymous with the unfettered operation of markets).

Power is the ultimate value of leftist office-holders, office-seekers, and apparatchiks. The inner compass of that ilk — regardless of posturing to the contrary — points toward power, not morality. Rank-and-file leftists — most of them probably sincere in their moral views — are merely useful idiots who lend their voices, votes, and money (often unwittingly) to the cause of repression.

Leftism, in short, exploits the inherent immorality of the “marketplace of ideas”.

Is it any wonder that leftism almost always triumphs over “libertarianism” and conservatism? Leftism is the cajoling adult who convinces the unwitting child to partake of physically and psychologically harmful sexual activity.


* I have used “sneer quotes” because “libertarianism” is a shallow ideology. True libertarianism is found in traditional conservatism. (See “What Is Libertarianism?” and “True Libertarianism, One More Time”, for example.)

(See also “Asymmetrical (Ideological) Warfare”, “An Addendum to Asymmetrical (Ideological) Warfare”, and “The Left-Libertarian Axis”.)

Another Thought about “Darkest Hour”

I said recently that Darkest Hour,

despite Gary Oldman’s deservedly acclaimed, Oscar-winning impersonation of Winston Churchill, earned a rating of 7 from me. It was an entertaining film, but a rather trite offering of Hollywoodized history.

There was a subtle aspect of the film which led me to believe that Churchill’s firm stance against a negotiated peace with Hitler had more support from the Labour Party than from Churchill’s Conservative colleagues. So I went to Wikipedia, which says this (among many things) in a discussion of the film’s historical accuracy:

In The New Yorker, Adam Gopnik wrote: “…in late May of 1940, when the Conservative grandee Lord Halifax challenged Churchill, insisting that it was still possible to negotiate a deal with Hitler, through the good offices of Mussolini, it was the steadfast anti-Nazism of Attlee and his Labour colleagues that saved the day – a vital truth badly underdramatized in the current Churchill-centric film, Darkest Hour”. This criticism was echoed by Adrian Smith, emeritus professor of modern history at the University of Southampton, who wrote in the New Statesman that the film was “yet again overlooking Labour’s key role at the most dangerous moment in this country’s history … in May 1940 its leaders gave Churchill the unequivocal support he needed when refusing to surrender. Ignoring Attlee’s vital role is just one more failing in a deeply flawed film”.

I thought that, if anything, the film did portray Labour as more steadfast than the Tories. First, the Conservatives (especially Halifax and Neville Chamberlain) were made to seem derisive of Churchill and all-too-willing to compromise with Hitler. Second — and here’s the subtlety — at the end of Churchill’s speech to the House of Commons on June 4, 1940, which is made the climactic scene in Darkest Hour, the Labour side of the House erupts in enthusiastic applause, while the Conservative side is subdued until it follows suit.

The final lines of Churchill’s speech are always worth repeating:

Even though large tracts of Europe and many old and famous States have fallen or may fall into the grip of the Gestapo and all the odious apparatus of Nazi rule, we shall not flag or fail. We shall go on to the end, we shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our Island, whatever the cost may be, we shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender, and even if, which I do not for a moment believe, this Island or a large part of it were subjugated and starving, then our Empire beyond the seas, armed and guarded by the British Fleet, would carry on the struggle, until, in God’s good time, the New World, with all its power and might, steps forth to the rescue and the liberation of the old.

If G.W. Bush could have been as adamant in his opposition to the enemy (instead of pandering to the “religion of peace”), and as eloquent in his speech to Congress after 9/11 and at subsequent points in the ill-executed “war on terror”, there might now be a Pax Americana in the Middle East.

(See also “September 20, 2001: Hillary Clinton Signals the End of ‘Unity’”, “The War on Terror As It Should Have Been Fought”, and “A Rearview Look at the Invasion of Iraq and the War on Terror”.)

Analysis vs. Reality

In my days as a defense analyst I often encountered military officers who were skeptical about the ability of civilian analysts to draw valid conclusions from mathematical models about the merits of systems and tactics. It took me several years to understand and agree with their position. My growing doubts about the power of quantitative analysis of military matters culminated in a paper in which I wrote that

combat is not a mathematical process…. One may describe the outcome of combat mathematically, but it is difficult, even after the fact, to determine the variables that made a difference in the outcome.

Much as we would like to fold the many different parameters of a weapon, a force, or a strategy into a single number, we can not. An analyst’s notion of which variables matter and how they interact is no substitute for data. Such data as exist, of course, represent observations of discrete events — usually peacetime events. It remains for the analyst to calibrate the observations, but without a benchmark to go by. Calibration by past battles is a method of reconstruction — of cutting one of several coats to fit a single form — but not a method of validation.

Lacking pertinent data, an analyst is likely to resort to models of great complexity. Thus, if useful estimates of detection probabilities are unavailable, the detection process is modeled; if estimates of the outcomes of dogfights are unavailable, aerial combat is reduced to minutiae. Spurious accuracy replaces obvious inaccuracy; untestable hypotheses and unchecked calibrations multiply apace. Yet the analyst claims relative if not absolute accuracy, certifying that he has identified, measured, and properly linked, a priori, the parameters that differentiate weapons, forces, and strategies.

In the end, “reasonableness” is the only defense of warfare models of any stripe.

It is ironic that analysts must fall back upon the appeal to intuition that has been denied to military men — whose intuition at least flows from a life-or-death incentive to make good guesses when choosing weapons, forces, or strategies.

My colleagues were not amused, to say the least.

I was reminded of all this by a recent exchange with a high-school classmate who had enlisted my help in tracking down a woman who, according to a genealogy website, is her first cousin, twice removed. The success of the venture is as yet uncertain. But if it does succeed it will be because of the classmate’s intimate knowledge of her family, not my command of research tools. As I said to my classmate,

You know a lot more about your family than I know about mine. I have all of the names and dates in my genealogy data bank, but I really don’t know much about their lives. After I moved to Virginia … I was out of the loop on family gossip, and my parents didn’t relate it to me. For example, when I visited my parents … for their 50th anniversary I happened to see a newspaper clipping about the death of my father’s sister a year earlier. It was news to me. And I didn’t learn of the death of my mother’s youngest brother (leaving her as the last of 10 children) until my sister happened to mention it to me a few years after he had died. And she didn’t know that I didn’t know.

All of which means that there’s a lot more to life than bare facts — dates of birth, death, etc. That’s why military people (with good reason) don’t trust analysts who draw conclusions about military weapons and tactics based on mathematical models. Those analysts don’t have a “feel” for how weapons and tactics actually work in the heat of battle, which is what matters.

Climate modelers are even more in the dark than military analysts, because there is no “climate officer” (no counterpart of the military officer with relevant experience) who can set climate modelers straight or, more wisely, ignore them.

(See also “Modeling Is Not Science”, “The McNamara Legacy: A Personal Perspective”, “Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry”, “Analytical and Scientific Arrogance”, “Why I Don’t Believe in ‘Climate Change’”, and “Predicting ‘Global’ Temperatures — An Analogy with Baseball”.)

Predicting “Global” Temperatures — An Analogy with Baseball

The following graph is a plot of the 12-month moving average of “global” mean temperature anomalies for 1979-2018 in the lower troposphere, as reported by the climate-research unit of the University of Alabama-Huntsville (UAH):

The UAH values, which are derived from satellite-borne sensors, are as close as one can come to an estimate of changes in “global” mean temperatures. The UAH values certainly are more complete and reliable than the values derived from the surface-thermometer record, which is biased toward observations over the land masses of the Northern Hemisphere (the U.S., in particular) — observations that are themselves notoriously fraught with siting problems, urban-heat-island biases, and “adjustments” that have been made to “homogenize” temperature data, that is, to make it agree with the warming predictions of global-climate models.

The next graph roughly resembles the first one, but it’s easier to describe. It represents the fraction of games won by the Oakland Athletics baseball team in the 1979-2018 seasons:

Unlike the “global” temperature record, the A’s W-L record is known with certainty. Every game played by the team (indeed, by all teams in organized baseball) is diligently recorded, and in great detail. Those records yield a wealth of information not only about team records, but also about the accomplishments of the individual players whose combined performance determines whether and how often a team wins its games. And there is much else about which statistics are or could be compiled: records of players in the years and games preceding a season or game; records of each team’s owner, general managers, and managers; orientations of the ballparks in which each team compiled its records; distances to the fences in those ballparks; time of day at which games were played; ambient temperatures; and on and on.

Despite all of that knowledge, there is much uncertainty about how to model the interactions among the quantifiable elements of the game, and how to give weight to the non-quantifiable elements (a manager’s leadership and tactical skills, team spirit, and on and on). Even the professional prognosticators at FiveThirtyEight, armed with a vast compilation of baseball statistics from which they have devised a complex predictive model of baseball outcomes, will admit that perfection (or anything close to it) eludes them. Like many other statisticians, they fall back on the excuse that “chance” or “luck” intrudes too often to allow their statistical methods to work their magic. What they won’t admit to themselves is that the results of simulations (such as those employed in the complex model devised by FiveThirtyEight),

reflect the assumptions underlying the authors’ model — not reality. A key assumption is that the model … accounts for all relevant variables….

As I have said, “luck” is mainly an excuse and rarely an explanation. Attributing outcomes to “luck” is an easy way of belittling success when it accrues to a rival.

It is also an easy way of dodging the fact that no model can accurately account for the outcomes of complex systems. “Luck” is the disappointed modeler’s excuse.
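To illustrate, here is a minimal Python sketch (a toy of my own, not FiveThirtyEight’s model) of a simulated season. The assumed per-game win probability is a made-up input, and the simulation’s “prediction” is simply that assumption echoed back:

```python
import random

# Toy season simulator (my own sketch, not FiveThirtyEight's model): each
# game is an independent coin flip with an assumed win probability, so the
# "prediction" is just that assumption restated with sampling noise.

random.seed(42)

def simulate_season(win_prob, games=162):
    return sum(random.random() < win_prob for _ in range(games))

assumed_win_prob = 0.55                      # the modeler's assumption
seasons = [simulate_season(assumed_win_prob) for _ in range(10_000)]
avg_wins = sum(seasons) / len(seasons)
print(f"assumed p = {assumed_win_prob}, average simulated wins = {avg_wins:.1f}")
# Clusters tightly around 0.55 * 162 = 89.1 wins: the output restates the input.
```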

If the outcomes of baseball games and seasons could be modeled with great certainty, people wouldn’t bet on those outcomes. The existence of successful models would become general knowledge, and betting would cease, as the small gains that might accrue from betting on narrow odds would be wiped out by vigorish.

Returning now to “global” temperatures, I am unaware of any model that actually tries to account for the myriad factors that influence climate. The pseudo-science of “climate change” began with the assumption that “global” temperatures are driven by human activity, namely the burning of fossil fuels that releases CO2 into the atmosphere. CO2 became the centerpiece of global climate models (GCMs), and everything else became an afterthought, or a non-thought. It is widely acknowledged that cloud formation and cloud cover — obviously important determinants of near-surface temperatures — are treated inadequately (when treated at all). The mechanism by which the oceans absorb heat and transmit it to the atmosphere also remains mysterious. The effect of solar activity on cosmic radiation reaching Earth (and thus on cloud formation) is often dismissed despite strong evidence of its importance. Other factors that seem to have little or no weight in GCMs (though they are sometimes estimated in isolation) include plate tectonics, magma flows, volcanic activity, and vegetation.

Despite all of that, builders of GCMs — and the doomsayers who worship them — believe that “global” temperatures will rise to catastrophic readings. The rising oceans will swamp coastal cities; the earth will be scorched, except where it is flooded by massive storms; crops will fail accordingly; tempers will flare and wars will break out more frequently.

There’s just one catch, and it’s a big one. Minute changes in the value of a dependent variable (“global” temperature, in this case) can’t be explained by a model in which key explanatory variables are unaccounted for, in which there is much uncertainty about the values of the explanatory variables that are accounted for, and in which the mechanisms by which those variables interact are themselves highly uncertain. Even an impossibly complete model would be wildly inaccurate, given the uncertainty of the interactions among variables and the values of those variables (in the past as well as in the future).
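The point can be made with a toy computation (a sketch of my own, not a climate model): give a model one small “known” effect and a handful of larger, poorly known omitted factors, and the omitted factors dominate the output:

```python
import random

# Toy computation (not a climate model): one small "modeled" effect plus
# several larger, poorly known omitted factors. The omitted factors swamp
# the effect the model is supposed to isolate.

random.seed(1)

def outcome(modeled_effect=0.02):
    omitted = sum(random.gauss(0.0, 0.05) for _ in range(4))  # unmodeled terms
    return modeled_effect + omitted

runs = [outcome() for _ in range(10_000)]
mean = sum(runs) / len(runs)
spread = (sum((r - mean) ** 2 for r in runs) / len(runs)) ** 0.5
print(f"modeled effect: 0.02, run-to-run spread: {spread:.2f}")
# The spread (about 0.10) is five times the 0.02 effect being "explained".
```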

I say “minute changes” because the first graph above is grossly misleading. An unbiased depiction of “global” temperatures looks like this:

There’s a much better chance of predicting the success or failure of the Oakland A’s, whose record looks like this on an absolute scale:

Just as no rational (unemotional) person should believe that predictions of “global” temperatures should dictate government spending and regulatory policies, no sane bettor is holding his breath in anticipation that the success or failure of the A’s (or any team) can be predicted with bankable certainty.

All of this illustrates a concept known as causal density, which Arnold Kling explains:

When there are many factors that have an impact on a system, statistical analysis yields unreliable results. Computer simulations give you exquisitely precise unreliable results. Those who run such simulations and call what they do “science” are deceiving themselves.

The folks at FiveThirtyEight are no more (and no less) delusional than the creators of GCMs.

A Footnote to “Movies”

I noted here that I’ve updated my “Movies” page. There’s a further update. I’ve added a list of my very favorite films — the 69 that I’ve rated a 10 or 9 (out of 10). The list is reproduced below, complete with links to IMDb pages so that you can look up a film with which you may be unfamiliar.

Many of the films on my list are slanted to the left (e.g., Inherit the Wind), but they’re on my list because of their merit as entertainment. Borrowing from the criteria posted at the bottom of “Movies”, a rating of 9 means that I found a film to be superior on several of the following dimensions: mood, plot, dialogue, music (if applicable), dancing (if applicable), quality of performances, production values, and historical or topical interest; worth seeing twice but not a slam-dunk great film. A “10” is an exemplar of its type; it can be enjoyed many times.

My Very Favorite Films: Releases from 1920 through 2018
(listed roughly in descending order of my ratings)
Title (year of release): IMDb rating / My rating
1. The Wizard of Oz (1939)  8 10
2. Alice in Wonderland (1951)  7.4 10
3. A Man for All Seasons (1966)  7.7 10
4. Amadeus (1984)  8.3 10
5. The Harmonists (1997)  7.1 10
6. Dr. Jack (1922)  7.1 9
7. The General (1926)  8.1 9
8. City Lights (1931)  8.5 9
9. March of the Wooden Soldiers (1934)  7.3 9
10. The Gay Divorcee (1934)  7.5 9
11. David Copperfield (1935)  7.4 9
12. Captains Courageous (1937)  8 9
13. The Adventures of Robin Hood (1938)  7.9 9
14. Alexander Nevsky (1938)  7.6 9
15. Bringing Up Baby (1938)  7.9 9
16. A Christmas Carol (1938)  7.5 9
17. Destry Rides Again (1939)  7.7 9
18. Gunga Din (1939)  7.4 9
19. The Hunchback of Notre Dame (1939)  7.8 9
20. Mr. Smith Goes to Washington (1939)  8.1 9
21. The Women (1939)  7.8 9
22. The Grapes of Wrath (1940)  8 9
23. The Philadelphia Story (1940)  7.9 9
24. Pride and Prejudice (1940)  7.4 9
25. Rebecca (1940)  8.1 9
26. Sullivan’s Travels (1941)  8 9
27. Woman of the Year (1942)  7.2 9
28. The African Queen (1951)  7.8 9
29. The Browning Version (1951)  8.2 9
30. The Bad Seed (1956)  7.5 9
31. The Bridge on the River Kwai (1957)  8.1 9
32. Inherit the Wind (1960)  8.1 9
33. Psycho (1960)  8.5 9
34. The Hustler (1961)  8 9
35. Billy Budd (1962)  7.8 9
36. Lawrence of Arabia (1962)  8.3 9
37. Zorba the Greek (1964)  7.7 9
38. Doctor Zhivago (1965)  8 9
39. The Graduate (1967)  8 9
40. The Lion in Winter (1968)  8 9
41. Butch Cassidy and the Sundance Kid (1969)  8 9
42. Five Easy Pieces (1970)  7.5 9
43. The Godfather (1972)  9.2 9
44. Papillon (1973)  8 9
45. Chinatown (1974)  8.2 9
46. The Godfather: Part II (1974)  9 9
47. One Flew Over the Cuckoo’s Nest (1975)  8.7 9
48. Star Wars: Episode IV – A New Hope (1977)  8.6 9
49. Breaker Morant (1980)  7.8 9
50. Star Wars: Episode V – The Empire Strikes Back (1980)  8.7 9
51. Das Boot (1981)  8.3 9
52. Chariots of Fire (1981)  7.2 9
53. Raiders of the Lost Ark (1981)  8.4 9
54. Blade Runner (1982)  8.1 9
55. Gandhi (1982)  8 9
56. The Last Emperor (1987)  7.7 9
57. Dangerous Liaisons (1988)  7.6 9
58. Henry V (1989)  7.5 9
59. Chaplin (1992)  7.6 9
60. Noises Off… (1992)  7.6 9
61. Three Colors: Blue (1993)  7.9 9
62. Pulp Fiction (1994)  8.9 9
63. Richard III (1995)  7.4 9
64. The English Patient (1996)  7.4 9
65. Fargo (1996)  8.1 9
66. Chicago (2002)  7.1 9
67. Master and Commander: The Far Side of the World (2003)  7.4 9
68. The Chronicles of Narnia: The Lion, the Witch and the Wardrobe (2005)  6.9 9
69. The Kite Runner (2007)  7.6 9

What? Where?

The small city of Marysville, Michigan (population ca. 10,000), is in the news because Jean Cramer, a candidate for a seat on the city council, is reported by The New York Times (and others of our “moral guardians”) to have said “Keep Marysville a white community as much as possible” during a forum at which she and the other candidates spoke.

The Times adds that

Kathy Hayman, the city’s mayor pro tempore, said during the forum that she took Ms. Cramer’s comments personally….

“My son-in-law is a black man and I have biracial grandchildren,” Ms. Hayman said.

After the forum, Ms. Cramer submitted to an interview with the local newspaper:

… Ms. Cramer expanded on her views to The Times Herald and said that Ms. Hayman’s family was “in the wrong” because it was multiracial.

“Husband and wife need to be the same race,” Ms. Cramer told the paper. “Same thing with the kids. That’s how it’s been from the beginning of, how can I say, when God created the heaven and the earth. He created Adam and Eve at the same time. But as far as me being against blacks, no I’m not.”

Ms. Cramer told The Times Herald on Friday that she would not have an issue if a black couple moved next door to her. “What is the issue is the biracial marriages, that’s the big problem,” Ms. Cramer said. “And there are a lot of people who don’t know it’s in the Bible and so they’re going outside of that.”

I find this brouhaha rather amusing because I’m familiar with Marysville, the population of which in 2010 was

97.5% White, 0.3% African American, 0.2% Native American, 0.6% Asian, 0.4% from other races, and 0.9% from two or more races. Hispanic or Latino of any race were 1.8% of the population.

Marysville is a “suburb” of Port Huron, a shrinking city of 30,000 souls. Among the reasons for the shrinkage of Port Huron’s population is its growing “blackness”. When I toured Port Huron on my last trip to Michigan (four years ago), I saw that neighborhoods which used to be all-white have changed complexion. Marysville was the original white-flight destination for Port Huronites. Other “suburbs” of Port Huron have grown, even as the city’s population shrinks, for much the same reason.

So Ms. Cramer is guilty of saying what residents of places around the nation — upscale and downscale — believe about keeping a “white community”. Her stated reason — a Biblical injunction against miscegenation — probably isn’t widely shared. But her objective — economic-social-cultural segregation — is widely shared, nonetheless.

The only newsworthy thing about Ms. Cramer’s statement is the hypocrisy of the cosseted editors and reporters of The New York Times and other big-media outlets in making a big deal of it.

“Movies” Updated

I have updated my “Movies” page. I was prompted to do so by having recently (and unusually) viewed two feature-length films (on consecutive evenings, no less): Darkest Hour and Goodbye Christopher Robin.

The former, despite Gary Oldman’s deservedly acclaimed, Oscar-winning impersonation of Winston Churchill, earned a rating of 7 from me. It was an entertaining film, but a rather trite offering of Hollywoodized history. The latter film, on the other hand, earned a rating of 8 from me for the quality of its script, excellent performances, and non-saccharine treatment of Christopher Robin’s boyhood and his parents’ failings as parents.

In any event, go to “Movies”. Even if you’ve been there before, you will find new material in the updated version. You will find at the bottom of the page an explanation of my use of the 10-point rating scale.

Conservatism’s Fundamental Dilemma: Markets vs. Morality

Conservatives rightly defend free markets because they exemplify the learning from trial and error that underlies the wisdom of voluntarily evolved social norms — norms that bind a people in mutual trust, respect, and forbearance.

Conservatives also rightly condemn free markets — or some of the produce of free markets — because that produce is often destructive of social norms.

Collaborationist Conservatives

Michael Anton, author of “The Flight 93 Election”, has coined the apt term Vichycons for collaborationist conservatives. (I wish I had thought of it first.) Anton nails them in “Vichycons and Mass Shootings”:

One prominent member of the species has called for “civility.” I’m all for “civility,” but it takes two to tango and the kind of “civility” on which he insists amounts—in the face of the Left’s intensifying power-hungry wrath—to unilateral disarmament. The Vichycons are like pearl-clutching old ladies somehow unperturbed by the ambient culture’s mass obscenity who upbraid their husbands for saying “damn.” They may claim to favor high standards for all, but in practice all their fire is consistently directed rightward….

Conservatives, as noted, are supposed to know something about nature, human nature, natural limits, politics, history, and permanent truths. That they do not is plainly evident from the fact that an alternative explanation for El Paso—and for other recent mass atrocities—is right under their collective nose and yet has never occurred to them. Or maybe it has but they’re too chicken to voice it. Again, I don’t know which would be worse.

Anton’s “alternative explanation” is the unraveling of social norms since the 1960s, which has led to greater violence and far less social harmony.

And Vichycons bear a big share of the responsibility for what has happened. Too many of them — especially in high and influential places — have been (and are) so anxious to seem “civil” and so eager to “get along” that they have failed to challenge the willful unraveling of social norms by the left. Theirs is a moral failing, though they don’t think of it as such because, for them, “image” and “connections” are far more important than actual adherence to principle. Perhaps it’s because, like Max Boot, they were never really conservative in the first place.

(See also “Corresponding with a Collaborator”, “‘Conservative’ Collabos”, and “Rooted in the Real World of Real People”.)

“Justice on Trial”: A Brief Review

I recently read Justice on Trial: The Kavanaugh Confirmation and the Future of the Supreme Court by Mollie Hemingway and Carrie Severino. The book augments and reinforces my understanding of the political battle royal that began a nanosecond after Justice Kennedy announced his retirement from the Supreme Court.

The book is chock-full of details that are damning to the opponents of the nomination of Brett Kavanaugh (or any other constitutionalist) to replace Kennedy. Rather, the opponents would consider the details to be damning if they had an ounce of honesty and integrity. What comes through — loudly, clearly, and well-documented — is the lack of honesty and integrity on the part of the opponents of the Kavanaugh nomination, which is to say most of the Democrats in the Senate, most of the media, and all of the many interest groups that opposed the nomination.

Unfortunately, given the authors’ evident conservatism and unflinching condemnation of the anti-Kavanaugh forces, the book is unlikely to convince anyone but the already-convinced, like me. The anti-Kavanaugh, anti-Constitution forces will redouble their efforts to derail the next Trump nominee (if there is one). As the authors say in the book’s closing paragraphs,

for all the hysteria, there is still no indication that anyone on the left is walking away from the Kavanaugh confirmation chastened by the electoral consequences or determined to prevent more damage to the credibility of the judiciary… [S]ooner or later there will be another vacancy on the Court, whether it is [RBG’s] seat or another justice’s. It’s hard to imagine how a confirmation battle could compete with Kavanaugh’s for ugliness. But if the next appointment portends a major ideological shift, it could be worse. When President Reagan had a chance to replace Lewis Powell, a swing vote, with Bork, Democrats went to the mat to oppose him. When Thurgood Marshall, one of the Court’s most liberal members, stood to be replaced by Clarence Thomas, the battle got even uglier. And trading the swing vote Sandra Day O’Connor for Alito triggered an attempted filibuster.

As ugly as Kavanaugh’s confirmation battle became, he is unlikely to shift the Court dramatically. Except on abortion and homosexuality, Justice Kennedy usually voted with the conservatives. If Justice Ginsburg were to retire while Trump was in the White House, the resulting appointment would probably be like the Thomas-for-Marshall trade. Compared with what might follow, the Kavanaugh confirmation might look like the good old days of civility.

Indeed.

Simple Economic Truths Worth Repeating

From “Keynesian Multiplier: Fiction vs. Fact”:

There are a few economic concepts that are widely cited (if not understood) by non-economists. Certainly, the “law” of supply and demand is one of them. The Keynesian (fiscal) multiplier is another; it is

the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to any autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country’s exports) that causes it.

The multiplier is usually invoked by pundits and politicians who are anxious to boost government spending as a “cure” for economic downturns. What’s wrong with that? If government spends an extra $1 to employ previously unemployed resources, why won’t that $1 multiply and become $1.50, $1.60, or even $5 worth of additional output?

What’s wrong is the phony math by which the multiplier is derived, and the phony story that was long ago concocted to explain the operation of the multiplier….

To show why the math is phony, I’ll start with a derivation of the multiplier. The derivation begins with the accounting identity Y = C + I + G, which means that total output (Y) = consumption (C) + investment (I) + government spending (G)….

Now, let’s say that b = 0.8. This means that income-earners, on average, will spend 80 percent of their additional income on consumption goods (C), while holding back (saving, S) 20 percent of their additional income. With b = 0.8, k = 1/(1 – 0.8) = 1/0.2 = 5. That is, every $1 of additional spending — let us say additional government spending (∆G) rather than investment spending (∆I) — will yield ∆Y = $5. In short, ∆Y = k(∆G), as a theoretical maximum.
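For reference, the step elided above is the standard textbook derivation (a reconstruction, not a quotation from the original post). Writing consumption as C = a + bY, where a is autonomous consumption and b the marginal propensity to consume, and substituting into the identity:

```latex
\begin{aligned}
Y &= C + I + G = a + bY + I + G \\
Y(1 - b) &= a + I + G \\
Y &= \frac{a + I + G}{1 - b}
\quad\Longrightarrow\quad
\Delta Y = \frac{1}{1 - b}\,\Delta G = k\,\Delta G,
\qquad k = \frac{1}{1 - b}.
\end{aligned}
```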

But:

[The multiplier] isn’t a functional representation — a model — of the dynamics of the economy. Assigning a value to b (the marginal propensity to consume) — even if it’s an empirical value — doesn’t alter the fact that the derivation is nothing more than the manipulation of a non-functional relationship, that is, an accounting identity.

Consider, for example, the equation for converting temperature Celsius (C) to temperature Fahrenheit (F): F = 32 + 1.8C. It follows that an increase of 10 degrees C implies an increase of 18 degrees F. This could be expressed as ∆F/∆C = k* , where k* represents the “Celsius multiplier”. There is no mathematical difference between the derivation of the investment/government-spending multiplier (k) and the derivation of the Celsius multiplier (k*). And yet we know that the Celsius multiplier is nothing more than a tautology; it tells us nothing about how the temperature rises by 10 degrees C or 18 degrees F. It simply tells us that when the temperature rises by 10 degrees C, the equivalent rise in temperature F is 18 degrees. The rise of 10 degrees C doesn’t cause the rise of 18 degrees F.

Therefore:

[T]he Keynesian investment/government-spending multiplier simply tells us that if ∆Y = $5 trillion, and if b = 0.8, then it is a matter of mathematical necessity that ∆C = $4 trillion and ∆I + ∆G = $1 trillion. In other words, a rise in I + G of $1 trillion doesn’t cause a rise in Y of $5 trillion; rather, Y must rise by $5 trillion for C to rise by $4 trillion and I + G to rise by $1 trillion. If there’s a causal relationship between ∆G and ∆Y, the multiplier doesn’t portray it.

In sum, the fiscal multiplier puts the cart before the horse. It begins with a non-functional, mathematical relationship, stipulates a hypothetical increase in GDP, and computes the increase in consumption (and other things) that would occur if that increase were to be realized.
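A minimal numerical sketch of that cart-before-horse point, using the figures from the passage above:

```python
# The accounting identity run "backwards": stipulate the change in total
# output, and the split between consumption and investment-plus-government
# spending follows by arithmetic; nothing here is causal.

b = 0.8                              # marginal propensity to consume
delta_Y = 5.0                        # stipulated change in output ($ trillions)

delta_C = b * delta_Y                # 4.0: consumption's share, by definition of b
delta_I_plus_G = delta_Y - delta_C   # 1.0: the remainder, by the identity
k = 1 / (1 - b)                      # the "multiplier": 5.0

print(delta_C, delta_I_plus_G, k)    # 4.0 1.0 5.0 (within float precision)
# Any stipulated delta_Y satisfies the identity; it never shows that raising
# I + G by $1 trillion causes Y to rise by $5 trillion.
```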

As economist Steve Landsburg explains in “The Landsburg Multiplier: How to Make Everyone Rich”,

Murray Rothbard … observed that the really neat thing about this [fiscal stimulus] argument is that you can do exactly the same thing with any accounting identity. Let’s start with this one:

Y = L + E

Here Y is economy-wide income, L is Landsburg’s income, and E is everyone else’s income. No disputing that one.

Next we observe that everyone else’s share of the income tends to be about 99.999999% of the total. In symbols, we have:

E = .99999999 Y

Combine these two equations, do your algebra, and voila:

Y = 100,000,000 L

That 100,000,000 there is the soon-to-be-famous “Landsburg multiplier”. Our equation proves that if you send Landsburg a dollar, you’ll generate $100,000,000 worth of income for everyone else.

Send me your dollars, yearning to be free.

Tax cuts may stimulate economic activity, but not nearly to the extent suggested by the multiplier. Moreover, if government spending isn’t reduced at the same time that taxes are cut, and if there is something close to full employment of labor and capital, the main result of a tax cut will be inflation.

Government spending (as shown in “Keynesian Multiplier: Fiction vs. Fact” and “Economic Growth Since World War II”) doesn’t stimulate the economy, and usually has the effect of reducing private consumption and investment. That may be to the liking of big-government worshipers, but it’s bad for most of us.

Has Humanity Reached Peak Intelligence?

That’s the title of a post at BBC Future by David Robson, a journalist who has written a book called The Intelligence Trap: Why Smart People Make Dumb Mistakes. Inasmuch as “humanity” isn’t a collective to which “intelligence” can be attached, the title is more titillating than informative about the substance of the post, wherein Mr. Robson says some sensible things; for example:

When the researcher James Flynn looked at [IQ] scores over the past century, he discovered a steady increase – the equivalent of around three points a decade. Today, that has amounted to 30 points in some countries.

Although the cause of the Flynn effect is still a matter of debate, it must be due to multiple environmental factors rather than a genetic shift.

Perhaps the best comparison is our change in height: we are 11cm (around 5 inches) taller today than in the 19th Century, for instance – but that doesn’t mean our genes have changed; it just means our overall health has changed.

Indeed, some of the same factors may underlie both shifts. Improved medicine, reducing the prevalence of childhood infections, and more nutritious diets, should have helped our bodies to grow taller and our brains to grow smarter, for instance. Some have posited that the increase in IQ might also be due to a reduction of the lead in petrol, which may have stunted cognitive development in the past. The cleaner our fuels, the smarter we became.

This is unlikely to be the complete picture, however, since our societies have also seen enormous shifts in our intellectual environment, which may now train abstract thinking and reasoning from a young age. In education, for instance, most children are taught to think in terms of abstract categories (whether animals are mammals or reptiles, for instance). We also lean on increasingly abstract thinking to cope with modern technology. Just think about a computer and all the symbols you have to recognise and manipulate to do even the simplest task. Growing up immersed in this kind of thinking should allow everyone [hyperbole alert] to cultivate the skills needed to perform well in an IQ test….

[Psychologist Robert Sternberg] is not alone in questioning whether the Flynn effect really represented a profound improvement in our intellectual capacity, however. James Flynn himself has argued that it is probably confined to some specific reasoning skills. In the same way that different physical exercises may build different muscles – without increasing overall “fitness” – we have been exercising certain kinds of abstract thinking, but that hasn’t necessarily improved all cognitive skills equally. And some of those other, less well-cultivated, abilities could be essential for improving the world in the future.

Here comes the best part:

You might assume that the more intelligent you are, the more rational you are, but it’s not quite this simple. While a higher IQ correlates with skills such as numeracy, which is essential to understanding probabilities and weighing up risks, there are still many elements of rational decision making that cannot be accounted for by a lack of intelligence.

Consider the abundant literature on our cognitive biases. Something that is presented as “95% fat-free” sounds healthier than “5% fat”, for instance – a phenomenon known as the framing bias. It is now clear that a high IQ does little to help you avoid this kind of flaw, meaning that even the smartest people can be swayed by misleading messages.

People with high IQs are also just as susceptible to the confirmation bias – our tendency to only consider the information that supports our pre-existing opinions, while ignoring facts that might contradict our views. That’s a serious issue when we start talking about things like politics.

Nor can a high IQ protect you from the sunk cost bias – the tendency to throw more resources into a failing project, even if it would be better to cut your losses – a serious issue in any business. (This was, famously, the bias that led the British and French governments to continue funding Concorde planes, despite increasing evidence that it would be a commercial disaster.)

Highly intelligent people are also not much better at tests of “temporal discounting”, which require you to forgo short-term gains for greater long-term benefits. That’s essential, if you want to ensure your comfort for the future.

Besides a resistance to these kinds of biases, there are also more general critical thinking skills – such as the capacity to challenge your assumptions, identify missing information, and look for alternative explanations for events before drawing conclusions. These are crucial to good thinking, but they do not correlate very strongly with IQ, and do not necessarily come with higher education. One study in the USA found almost no improvement in critical thinking throughout many people’s degrees.

Given these looser correlations, it would make sense that the rise in IQs has not been accompanied by a similarly miraculous improvement in all kinds of decision making.

So much for the bright people who promote and pledge allegiance to socialism and its various manifestations (e.g., the Green New Deal, and Medicare for All). So much for the bright people who suppress speech with which they disagree because it threatens the groupthink that binds them.

Robson, still using “we” inappropriately, also discusses evidence of dysgenic effects in IQ:

Whatever the cause of the Flynn effect, there is evidence that we may have already reached the end of this era – with the rise in IQs stalling and even reversing. If you look at Finland, Norway and Denmark, for instance, the turning point appears to have occurred in the mid-90s, after which average IQs dropped by around 0.2 points a year. That would amount to a seven-point difference between generations.

Psychologist (and intelligence specialist) James Thompson has addressed dysgenic effects at his blog on the website of The Unz Review. In particular, he had a lot to say about the work of an intelligence researcher named Michael Woodley. Here’s a sample from a post by Thompson:

We keep hearing that people are getting brighter, at least as measured by IQ tests. This improvement, called the Flynn Effect, suggests that each generation is brighter than the previous one. This might be due to improved living standards as reflected in better food, better health services, better schools and perhaps, according to some, because of the influence of the internet and computer games. In fact, these improvements in intelligence seem to have been going on for almost a century, and even extend to babies not in school. If this apparent improvement in intelligence is real we should all be much, much brighter than the Victorians.

Although IQ tests are good at picking out the brightest, they are not so good at providing a benchmark of performance. They can show you how you perform relative to people of your age, but because of cultural changes relating to the sorts of problems we have to solve, they are not designed to compare you across different decades with say, your grandparents.

Is there no way to measure changes in intelligence over time on some absolute scale using an instrument that does not change its properties? In the Special Issue on the Flynn Effect of the journal Intelligence Drs Michael Woodley (UK), Jan te Nijenhuis (the Netherlands) and Raegan Murphy (Ireland) have taken a novel approach in answering this question. It has long been known that simple reaction time is faster in brighter people. Reaction times are a reasonable predictor of general intelligence. These researchers have looked back at average reaction times since 1889 and their findings, based on a meta-analysis of 14 studies, are very sobering.

It seems that, far from speeding up, we are slowing down. We now take longer to solve this very simple reaction time “problem”.  This straightforward benchmark suggests that we are getting duller, not brighter. The loss is equivalent to about 14 IQ points since Victorian times.

So, we are duller than the Victorians on this unchanging measure of intelligence. Although our living standards have improved, our minds apparently have not. What has gone wrong?

From a later post:

The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

Woodley’s claim is based on a set of papers written since 2013, which have been recently reviewed by [Matthew] Sarraf.

The review is unusual, to say the least. It is rare to read so positive a judgment on a young researcher’s work, and it is extraordinary that one researcher has changed the debate about ability levels across generations, and all this in a few years since starting publishing in psychology.

The table in that review which summarizes the main findings is shown below. As you can see, the range of effects is very variable, so my rough estimate of 1 point per decade is a stab at calculating a median. It is certainly less than the Flynn Effect in the 20th Century, though it may now be part of the reason for the falling of that effect, now often referred to as a “negative Flynn effect”….

Here are the findings which I have arranged by generational decline (taken as 25 years).

  • Colour acuity, over 20 years (0.8 generation) 3.5 drop/decade.
  • 3D rotation ability, over 37 years (1.5 generations) 4.8 drop/decade.
  • Reaction times, females only, over 40 years (1.6 generations) 1.8 drop/decade.
  • Working memory, over 85 years (3.4 generations) 0.16 drop/decade.
  • Reaction times, over 120 years (4.8 generations) 0.57-1.21 drop/decade.
  • Fluctuating asymmetry, over 160 years (6.4 generations) 0.16 drop/decade.

Either the measures are considerably different, and do not tap the same underlying loss of mental ability, or the drop is unlikely to be caused by dysgenic decrements from one generation to another. Bar massive dying out of populations, changes do not come about so fast from one generation to the next. The drops in ability are real, but the reason for the falls are less clear. Gathering more data sets would probably clarify the picture, and there is certainly cause to argue that on various real measures there have been drops in ability. Whether this is dysgenics or some other insidious cause is not yet clear to me.

My view is that whereas formerly the debate was only about the apparent rise in ability, discussions are now about the co-occurrence of two trends: the slowing down of the environmental gains and the apparent loss of genetic quality. In the way that James Flynn identified an environmental/cultural effect, Michael Woodley has identified a possible genetic effect, and certainly shown that on some measures we are doing less well than our ancestors.

How will they be reconciled? Time will tell, but here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.

Here’s my hypothesis, which I offer on the assumption that the test-takers are demographically representative of the whole populations of the countries in which they were tested: The less-intelligent portions of the populace are breeding faster than the more-intelligent portions. That phenomenon is magnified by the rapid growth of the Muslim component of Europe’s population and the rapid growth of the Latino component of America’s population.

(See also “The Learning Curve and the Flynn Effect”, “More about Intelligence”, “Selected Writings about Intelligence”, and especially “Intelligence”.)

Colleges and Universities Are Overrated

Keith Whittington writes at The Volokh Conspiracy about “The Partisan Split on Higher Education”:

A new Pew survey reveals that the partisan split that became visible a couple of years ago in public perceptions of American higher education has continued. In the long term, this cannot be good for American colleges and universities….

Colleges and universities are fairly distinctive in being non-political institutions that are nonetheless seen in increasingly partisan terms. There is an extensive conservative infrastructure now dedicated to publicizing the foibles of academia. Of course, the reality is that college professors and administrators lean heavily to the political left, though this has been true for decades. Republicans now perceive universities as politicized, partisan institutions….

[I]f Republicans continue to believe that on the whole universities are damaging American society, they are unlikely to try to defend them against misguided political interventions from the political left and are more likely to propose misguided political interventions of their own.

Colleges and universities are (and long have been) “political” institutions, as Whittington himself acknowledges. But that isn’t my quibble with Whittington.

His tone implies that he holds colleges and universities in higher regard than they deserve. But there isn’t anything sacred about colleges and universities. Free inquiry (which most of them no longer support) can go on without them. Advances in theoretical and applied science can go on without them, as long as there are free markets to support the development and application of scientific knowledge. In fact, colleges and universities have (on the whole) become so inimical to free markets that Americans would be better off with far fewer of them.

Sending kids to college has become conspicuous consumption. The practical value of colleges and universities is realized through courses that could be replicated by for-profit institutions. The rest — including the bloated, mostly leftist administrative apparatus — is waste.

(See also “Is College for Everyone?”, “College for Almost No One”, and “More Evidence against College for Everyone”.)

Megaprojects, Cost-Benefit Analysis, and “Social Welfare”

Timothy Taylor writes about “The Iron Law of Megaprojects vs. the Hiding Hand Principle”. He begins by quoting a piece by Bent Flyvbjerg in Cato Policy Report (January 2017):

Megaprojects are large-scale, complex ventures that typically cost a billion dollars or more, take many years to develop and build, involve multiple public and private stakeholders, are transformational, and impact millions of people. Examples of megaprojects are high-speed rail lines, airports, seaports, motorways, hospitals, national health or pension information and communications technology (ICT) systems, national broadband, the Olympics, large-scale signature architecture, dams, wind farms, offshore oil and gas extraction, aluminum smelters, the development of new aircraft, the largest container and cruise ships, high-energy particle accelerators, and the logistics systems used to run large supply-chain-based companies like Amazon and Maersk.

For the largest of this type of project, costs of $50-100 billion are now common, as for the California and UK high-speed rail projects, and costs above $100 billion are not uncommon, as for the International Space Station and the Joint Strike Fighter. If they were nations, projects of this size would rank among the world’s top 100 countries measured by gross domestic product. When projects of this size go wrong, whole companies and national economies suffer. …

If, as the evidence indicates, approximately one out of ten megaprojects is on budget, one out of ten is on schedule, and one out of ten delivers the promised benefits, then approximately one in a thousand projects is a success, defined as on target for all three. Even if the numbers were wrong by a factor of two, the success rate would still be dismal.

So far, so good. But then Taylor says this:

A common comeback to the Iron Law of Megaprojects is that if we pay attention to it, we will be so dissuaded by the costs and risks of megaprojects that nothing will ever get done. Albert O. Hirschman offered a sophisticated expression of this concern in his 1967 essay, “The Hiding Hand.” Hirschman argued that there is a rough balance in megaprojects: we tend to underestimate their costs and problems, but we also tend to underestimate the creativity with which people address the costs and problems that arise.

I will come to the irrelevance of Hirschman’s argument, but first a few more tidbits from Taylor:

[Flyvbjerg] argues that a number of prominent megaprojects have been completed on time and on budget. When choosing which megaprojects to pursue, it is useful to avoid underestimating costs and overestimating benefits. [Wow, what an astute observation.] …

Further, Flyvbjerg offers a reminder that even when a megaproject is eventually completed, and seems to be working well, the project may still have been uneconomic, and society may have been better off without it.

The second comment brings Taylor close to the heart of the matter. But he never gets there. Like most economists, he overlooks the major flaw in the application of cost-benefit analysis to government projects: costs and benefits usually have different distributions across the population. At the extreme, the costs of benefits that accrue only to the indigent are borne almost entirely by the non-indigent. (The indigent may pay some sales taxes.)

Cost-benefit analysis (applied to government projects) effectively rests on the assumption of a social welfare function. If there were such a thing, then it would be all right for people to go around punching each other (and worse), as long as the aggressors derived more gains in “utility” than the losses suffered by the victims.
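
To make the distributional point concrete, here is a bare-bones sketch with invented group names and dollar figures (nothing in it comes from Flyvbjerg or Taylor):

    # A hypothetical megaproject appraisal. The aggregate test can pass
    # even though the gains and losses fall on different people.
    benefits = {"riders": 120e6, "landowners": 40e6}   # who gains
    costs = {"taxpayers": 140e6}                       # who pays

    aggregate_net = sum(benefits.values()) - sum(costs.values())
    print(f"aggregate net benefit: ${aggregate_net / 1e6:+.0f}M")  # +$20M: "passes"

    # Incidence by group -- the information the aggregate test throws away.
    for group in sorted(set(benefits) | set(costs)):
        net = benefits.get(group, 0) - costs.get(group, 0)
        print(f"{group:>10}: ${net / 1e6:+.0f}M")
    # Summing across groups treats a dollar gained by riders as offsetting
    # a dollar taken from taxpayers; that is, it assumes a social welfare
    # function.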

Information Security in the Cyber-Age

No system is perfect, but I am doing the best that I can:

1. PCs and mobile devices protected by anti-virus and anti-malware programs.

2. Password-protected home network, wrapped in a virtual private network, which is also used by the mobile devices when they aren’t linked to the home network.

3. Different usernames and passwords for every one of dozens of sites that require (and need) them.

4. Passwords created by a complex spreadsheet routine that generates and selects from random series of upper-case letters, lower-case letters, digits, and special characters. (A sketch of the idea appears below.)

5. Passwords stored in a password-protected file, with paper backup in a secure container.

6. Master password required for access to passwords stored in browser.

Measures 4, 5, and 6 adopted in lieu of reliance on vulnerable vendors for password generation and storage.
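
For those who would rather not build the spreadsheet, the idea behind measure 4 can be sketched in a few lines of Python. This is a minimal illustration, not my actual routine; the 16-character default is an arbitrary choice:

    import secrets
    import string

    # Upper-case letters, lower-case letters, digits, and special characters.
    ALPHABET = (string.ascii_uppercase + string.ascii_lowercase
                + string.digits + string.punctuation)

    def make_password(length: int = 16) -> str:
        # secrets.choice draws from the operating system's secure random
        # source; spreadsheet random functions are not designed for this.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(make_password())  # different on every run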

Suggestions for improvement are always welcome.

Social Security Is an Entitlement

“Entitlement” has come to mean the right to guaranteed benefits under a government program. In the nature of government programs, those who receive the benefits usually don’t pay the taxes required to fund those benefits.

I recently saw on Facebook (which I look at occasionally) a discussion to the effect that Social Security isn’t an entitlement program because “we (the discussants) paid into it”.

Well, paying into Social Security doesn’t mean that you paid your own way. First, the system is rigged so that persons in lower income brackets receive benefits that are disproportionately high relative to the payments that they (and their employers) made during their working years.
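
The rigging is visible in the benefit formula itself. Here is a minimal sketch of the bend-point calculation of the primary insurance amount (PIA); the 90/32/15 percentages are the statutory factors, and the dollar bend points (which change yearly) are approximately the 2019 values:

    def pia(aime, bend1=926, bend2=5583):
        # Primary insurance amount from average indexed monthly earnings
        # (AIME): 90% of the first slice, 32% of the next, 15% above that.
        return (0.90 * min(aime, bend1)
                + 0.32 * max(0, min(aime, bend2) - bend1)
                + 0.15 * max(0, aime - bend2))

    for aime in (1_000, 3_000, 9_000):
        print(f"AIME ${aime:,}: PIA ${pia(aime):,.0f} "
              f"({pia(aime) / aime:.0%} of earnings replaced)")
    # AIME $1,000 -> 86% replaced; AIME $9,000 -> 32% replaced.

The replacement rate falls steadily as earnings rise, which is the disproportion just described.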

Second, the money that a person pays into Social Security doesn’t earn anything. You are not buying a financial instrument that funds productive investments, which in turn reward you with a future stream of income.

True, there’s the mythical Social Security Trust Fund, which has been paying out benefits that have been defrayed in part by interest earned on “investments” in U.S. Treasury securities. Where does that interest come from? Not from the beneficiaries of Social Security. It comes from taxpayers who are, at the same time, also making payments into Social Security in exchange for the “promise” of future Social Security benefits. (I say “promise” because there is no binding contract for Social Security benefits; you get what Congress provides by law.)

So, yes, Social Security is an entitlement program. Paying into it doesn’t mean that the payer earns what he eventually receives from it. Quite the contrary. Most participants are feeding from the public trough.

The Unique “Me”

Children, at some age, will begin to understand that there is death, the end of a human life (in material form, at least). At about the same time, in my experience, they will begin to speculate about the possibility that they might have been someone else: a child born in China, for instance.

Death eventually loses its fascination, though it may come to mind from time to time as one grows old. (Will I wake up in the morning? Is this the day that my heart stops beating? Will I be able to break my fall when the heart attack happens, or will I just go down hard and die of a fractured skull?)

But after careful reflection, at some age, the question of having been born as someone else is answered in the negative.

For each person, there is only one “I”, the unique “me”. If I hadn’t been born, I wouldn’t be “I” — there wouldn’t be a “me”. I couldn’t have been born as someone else: a child born in China, for instance. A child born in China — or at any place and time other than where and when my mother gave birth to me — must be a different “I”, not the one I think of as “me”.

(Inspired by Sir Roger Scruton’s On Human Nature, for which I thank my son.)