From Cerdic (r. 519-534) to Charles III (r. 2022- )
Charles III is the 83rd monarch. His royal lineage goes back to Sweyn, the 34th monarch (b. 960, r. 1013-1014).

Once upon a time I read a post, “The Nature of Intelligence”, at a now-defunct blog called MBTI Truths. (MBTI refers to a controversial personality test: Myers-Briggs Type Indicator.) Here is the entire text of the post:
A commonly held misconception within the MBTI community is that iNtuitives are smarter than Sensors. They are thought to have higher intelligence, but this belief is misguided. In an assessment of famous people with high IQs, the vast majority of them are iNtuitive. However, IQ tests measure only two types of intelligences: linguistic and logical-mathematical. In addition to these, there are six other types of intelligence: spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalistic. Sensors would probably outscore iNtuitives in several of these areas. Perhaps MBTI users should come to see iNtuitives, who make up 25 percent of the population, as having a unique type of intelligence instead of superior intelligence.
The use of “intelligence” with respect to traits other than brain-power is misguided. “Intelligence” has a clear and unambiguous meaning in everyday language; for example:
The capacity to acquire, understand, and use knowledge.
That is the way in which I use “intelligence” in “Intelligence, Personality, Politics, and Happiness”, and it is the way in which the word is commonly understood. The application of “intelligence” to other kinds of ability — musical, interpersonal, etc. — is a fairly recent development that smacks of anti-elitism. It is a way of saying that highly intelligent individuals (where “intelligence” carries its traditional meaning) are not necessarily superior in all respects. No kidding!
As to the merits of the post at MBTI Truths, it is mere speculation to say that “Sensors would probably outscore iNtuitives in several of these” other types of ability. (And what is “naturalistic intelligence”, anyway?)
Returning to a key point of “Intelligence, Personality, Politics, and Happiness”, the claim that iNtuitives are generally smarter than Sensors is nothing but a claim about the relative capacity of iNtuitives to acquire and apply knowledge. It is quite correct to say that iNtuitives are not necessarily better than Sensors at, say, sports, music, glad-handing, and so on. It is also quite correct to say that iNtuitives generally are more intelligent than Sensors, in the standard meaning of “intelligence”.
Other so-called types of intelligence are not types of intelligence at all. They are simply other types of ability, each of which is (perhaps) valuable in its own way. But calling them types of intelligence is a transparent effort to denigrate the importance of real intelligence, which is an important determinant of significant life outcomes: learning, job performance, income, health, and criminality (in the negative).
It is a sign of the times that an important human trait is played down in an effort to inflate the egos of persons who are not well endowed with respect to that trait. The attempt to redefine or minimize intelligence is of a piece with the use of genteelisms, which Wilson Follett defines as
soft-spoken expressions that are either unnecessary or too regularly used. The modern world is much given to making up euphemisms that turn into genteelisms. Thus newspapers and politicians shirk speaking of the poor and the crippled. These persons become, respectively, the underprivileged (or disadvantaged) and the handicapped [and now -challenged and -abled: ED]. (Modern American Usage (1966), p. 169)
Finally:
Genteelisms may be of … the old-fashioned sort that will not name common things outright, such as the absurd plural bosoms for breasts, and phrases that try to conceal accidental associations of ideas, such as back of for behind. The advertiser’s genteelisms are too numerous to count. They range from the false comparative (e.g., the better hotels) to the soapy phrase (e.g., gracious living), which is supposed to poeticize and perfume the proffer of bodily comforts. (Ibid., p. 170)
And so it is that such traits as athleticism, musical virtuosity, and garrulousness become kinds of intelligence. Why? Because it is somehow inegalitarian — and therefore unmentionable — that some persons are smarter than others.
Life just isn’t fair, so get over it.
A closely related matter is the use of euphemisms. A euphemism is
an innocuous word or expression used in place of one that is deemed offensive or suggests something unpleasant.
The market in euphemisms has been cornered by politically correct leftists, who can’t confront reality and wish to erect a fantasy in its place. A case in point is a “bias-free language guide” that was posted on the website of the University of New Hampshire in 2013 and stayed there for a few years. The guide disappeared after Mark Huddleston, the university’s president, issued this statement:
While individuals on our campus have every right to express themselves, I want to make it absolutely clear that the views expressed in this guide are NOT the policy of the University of New Hampshire. I am troubled by many things in the language guide, especially the suggestion that the use of the term ‘American’ is misplaced or offensive. The only UNH policy on speech is that it is free and unfettered on our campuses. It is ironic that what was probably a well-meaning effort to be ‘sensitive’ proves offensive to many people, myself included. [Quoted in “University President Offended by Bias-Free Language Guide,” an Associated Press story published in USA Today, July 30, 2015]
The same story adds some detail about the contents of the guide:
One section warns against the terms “older people, elders, seniors, senior citizens.” It suggests “people of advanced age” as preferable, though it notes that some have “reclaimed” the term “old people.” Other preferred terms include “person of material wealth” instead of rich, “person who lacks advantages that others have” instead of poor and “people of size” to replace the word overweight.
There’s more from another source:
Saying “American” to reference Americans is also problematic. The guide encourages the use of the more inclusive substitutes “U.S. citizen” or “Resident of the U.S.”
The guide notes that “American” is problematic because it “assumes the U.S. is the only country inside [the continents of North and South America].” (The guide doesn’t address whether or not the terms “Canadians” and “Mexicans” should be abandoned in favor of “Residents of Canada” and “Residents of Mexico,” respectively.)
The guide clarifies that saying “illegal alien” is also problematic. While “undocumented immigrant” is acceptable, the guide recommends saying “person seeking asylum,” or “refugee,” instead. Even saying “foreigners” is problematic; the preferred term is “international people.”
Using the word “Caucasian” is considered problematic as well, and should be discontinued in favor of “European-American individuals.” The guide also states that the notion of race is “a social construct…that was designed to maintain slavery.”
The guide also discourages the use of “mothering” or “fathering,” so as to “avoid gendering a non-gendered activity.”
Even saying the word “healthy” is problematic, the university says. The “preferred term for people without disabilities,” the university says, is “non-disabled.” Similarly, saying “handicapped” or “physically-challenged” is also problematic. Instead, the university wants people to use the more inclusive “wheelchair user,” or “person who is wheelchair mobile.”
Using the words “rich” or “poor” is also frowned upon. Instead of saying “rich,” the university encourages people to say “person of material wealth.” Rather than saying a person is “poor,” the university encourages its members to substitute “person who lacks advantages that others have” or “low economic status related to a person’s education, occupation and income.”
Terms also considered problematic include: “elders,” “senior citizen,” “overweight” (which the guide says is “arbitrary”), “speech impediment,” “dumb,” “sexual preference,” “manpower,” “freshmen,” “mailman,” and “chairman,” in addition to many others. [Peter Hasson, “Bias-Free Language Guide Claims the Word ‘American’ Is ‘Problematic’,” Campus Reform, July 28, 2015]
And more, from yet another source:
Problematic: Opposite sex. Preferred: Other sex.
Problematic: Homosexual. Preferred: Gay, Lesbian, Same Gender Loving
Problematic: Normal … healthy or whole. Preferred: Non-disabled.
Problematic/Outdated: Mothering, fathering. Preferred: Parenting, nurturing. [Jennifer Kabbany, “University’s ‘Bias-Free Language Guide’ Criticizing Word ‘American’ Prompts Shock, Anger,” The College Fix, July 30, 2015]
The UNH students who concocted the guide — and the thousands (millions?) at other campuses who think similarly — must find it hard to express themselves clearly. Every word must be weighed before it is written or spoken, for fear of giving offense to a favored group or implying support of an idea, cause, institution, or group of which the left disapproves. (But it’s always open season on “fascist, capitalist pigs”.)
Gee, it must be nice to live in a fantasy world, where reality can be obscured or changed just by saying the right words. Here’s a thought for the fantasists of the left: You don’t need to tax, spend, and regulate Americans until they’re completely impoverished and subjugated. Just say that it’s so — and leave the rest of us alone.
Have you ever noticed that Americans are perfectionists? It’s true.
It all began with the U.S. Constitution. The preamble to the Constitution says it was “ordained and established” (a ringing phrase, that) “in order to form a more perfect union” — among other things. It’s been all downhill since then.
The Federalists (pro-Constitution) and anti-Federalists (anti-Constitution) continued to squabble for a decade or so after ratification of the Constitution. The anti-Federalists believed the union to be perfect enough under the Articles of Confederation. But those blasted perfectionist Federalists won the debate. So here we are.
The Federalists were such perfectionists that they left room in the Constitution for amending it. After all, a “more perfect union” can’t be attained in a day. Thus, in our striving toward perfection — Constitution-wise, that is — we have now amended it 27 times. We even adopted an amendment (XVIII, the Prohibition amendment, 1919) and only 14 years later amended it out of existence (XXI, the Repeal amendment, 1933).
But we can be very patient when it comes to perfecting the Constitution through amendments. Amendment XXVII (the most recent amendment) was submitted to the States on September 25, 1789, as part of the proposed Bill of Rights. It wasn’t ratified until May 7, 1992. Not to worry, though: Amendment XXVII isn’t about rights; it merely prevents a sitting Congress from raising its own pay:
No law, varying the compensation for the services of the Senators and Representatives, shall take effect, until an election of Representatives shall have intervened.
So now the only group of public servants that can vote itself a pay raise must wait out an election before a raise takes effect. Big deal. Most members of Congress get re-elected, anyway.
Where was I? Oh, yes, perfectionism. Well, after the Constitution was ratified, the next big squabble was about States’ rights. Some politicians from the North preferred to secede rather than remain in a union that permitted slavery. Some politicians from the South said the slavery issue was just a Northern excuse to bully the South; the South, they said, should secede from the union. The union, it seems, just wasn’t perfect enough for either the North or the South. Well, the South won that squabble by seceding first, which so ticked off the North that it dragged the South back into the union, kickin’ and hollerin’. The North had decided that the only perfect union was a whole union, rednecks and all.
The union didn’t get noticeably more perfect with the South back in the fold. Things just went squabbling along through the Spanish-American War and World War I. There was a lot more prosperity in the Roaring ’20s, but that was spoiled by Prohibition. It wasn’t hard to find a drink, but you never knew when your local speakeasy might be raided by Eliot Ness or when you might get caught in a shoot-out between rival bootleggers.
The Great Depression put an end to the Roaring ’20s, and that sent perfection for a real loop. But Franklin D. Roosevelt got the idea that he could help us out of the Depression by creating a bunch of alphabet-soup agencies, including the CCC, the PWA, the FSA, and the WPA. I guess he got his idea from his older cousin, Teddy, who created his own alphabet-soup agencies back in the early 1900s.
Well, Franklin really got the ball rolling, and it seems like almost every president since him has added a bunch of alphabet-soup agencies to the executive branch. And when a president has been unable to think of new alphabet-soup agencies, Congress has stepped in and helped him out. (It’s not yet necessary to say “him, her, or it.”) It seems that our politicians think we’ll attain perfection when there are enough agencies to use every possible combination of three letters from the alphabet. (That’s only 17,576 agencies; we must be getting close to perfection by now.)
During the Great Depression some people began to think that criminals (especially juvenile delinquents) weren’t naturally bad. Nope, their criminality was “society’s fault” (not enough jobs), and juvenile delinquents could be rehabilitated — made more perfect, if you will — through “understanding”, which would make model citizens of psychopaths. That idea was put on hold during World War II because we needed those former juvenile delinquents and their younger brothers to kill Krauts and Japs. (Oops, spell-checker doesn’t like “Japs”; “Nips” is okay, though.)
The idea of rehabilitating juvenile delinquents through “understanding” took hold after the war. (There was no longer a Great Depression, but there still weren’t enough jobs because of the minimum wage, another great advance toward perfection.) In fact, the idea of “understanding” miscreants spread beyond the ranks of juvenile delinquents to encompass every tot and pre-adolescent in the land. Corporal punishment became a no-no. Giving in to Johnny and Jill’s every whim became a yes-yes. Guess what? What: Johnny and Jill grew up to be voters. Politicians quickly learned not to say “no” to Johnny and Jill’s demands for — whatever — otherwise Johnny and Jill would throw a fit (and throw a politician out of office). So, politicians just got in the habit of approving things Johnny and Jill asked for. In fact, they even got in the habit of approving things Johnny and Jill might ask for. (Better safe than out of office.) A perfect union, after all, is one that grants our every wish — isn’t it? We’re not there yet, but we’re trying like hell.
Sometimes you can’t attain perfection through legislation. Then you go to court. Remember a few years ago when an Alabama jury awarded millions (millions!) of dollars to the purchaser of a new BMW who discovered that its paint job was not pristine? Or how about the small machine-tool company that was sued by a workman who lost three fingers while using (or misusing) the company’s product, even though the machine had been rebuilt at least once and had changed hands four times? (Somebody’s gotta pay for my stupidity.) Then there was the infamous case in which a jury found in favor of a woman who had burned herself with hot coffee (what did she expect?) dispensed by a fast-food chain.
The upshot of our litigiousness? The politicians elected by Johnny and Jill — ever in the pursuit of more perfection — have mandated warning labels for everything. THIS SAW IS SHARP. THIS COFFEE IS HOT. DON’T PUT THIS PLASTIC BAG OVER YOUR HEAD, STUPID. DON’T STICK YOUR HAND DOWN THIS GARBAGE DISPOSAL, YOU MORON. THIS TOY GUN WON’T KILL AN ARMED INTRUDER (HA, HA, HA, YOU GUN NUT!).
You may have noticed a trend in my tale. Politicians quit trying some years ago to perfect the union; their aim is to perfect US (We the People). That’s why they keep raising cigarette and gasoline taxes. Everyone knows that smoking is a slovenly redneck habit (movie stars excepted, of course).
As for gasoline, it’s a fossil fuel. (Think of all the dinosaurs who gave their lives so that you can guzzle gas.) But lots of people — especially politicians — “know” (because “the science” says so) that the use of fossil fuels has caused a (rather erratic) rise in Earth’s average temperature (as measured mainly by thermometers situated in urban heat-islands) amounting to 2 degrees in 170 years. (You can get the same effect by sitting in the same place for a few minutes after the sun rises.)
According to “the science” this rather puny (and mostly phony) phenomenon is due to something called the “greenhouse effect”, in which atmospheric gases capture heat and prevent it from escaping to outer space. One of the gases (a very minor one) is CO2, some of which is emitted by human activity (e.g., guzzling gas), although the amount of CO2 in the atmosphere keeps rising even when human activity slows down (as during economic recessions and pandemics). Nevertheless, politicians believe “the science” and therefore believe that the use of fossil fuels must be stopped (STOPPED) even if it means another Great Depression and mass starvation. Or, even better, stopping the use of fossil fuels will result in the near-extinction of the human race so that evil human beings will no longer be able to use fossil fuels, and good ones (who also use them but are excused because they believe in “the science”) will have all the fossil fuels to themselves.
Ah, perfection at last.
This is the sixth (and perhaps final) installment of a series about Cass Sunstein, whom I have dubbed the plausible authoritarian because of his ability to make authoritarian measures seem like reasonable ways of advancing democratic participation and social comity. The first five installments are here, here, here, here, and here.
Cass Sunstein (CS) became Barack Obama’s “regulatory czar” (Administrator of the Office of Information and Regulatory Affairs), following a prolonged delay in action on his nomination to the office because of his controversial views. This post draws on posts that I wrote during and after CS’s “czarship”, which lasted from September 2009 to August 2012.
Alec Rawls, writing at his blog, Error Theory, found CS on the wrong side of history (to borrow one of his boss’s favorite slogans):
As Congress considers vastly expanding the power of copyright holders to shut down fair use of their intellectual property, this is a good time to remember the other activities that Obama’s “regulatory czar” Cass Sunstein wants to shut down using the tools of copyright protection. For a couple of years now, Sunstein has been advocating that the “notice and take down” model from copyright law should be used against rumors and conspiracy theories, “to achieve the optimal chilling effect.”
Why?
What Sunstein seems most intent on suppressing is the accusation, leveled during the 2008 election campaign, that Barack Obama “pals around with terrorists.” (“Look Inside” page 3.) Sunstein fails to note that the “palling around with terrorists” language was introduced by the opposing vice presidential candidate, Governor Sarah Palin (who was implicating Obama’s relationship with domestic terrorist Bill Ayers). Instead Sunstein focuses his ire on “right wing websites” that make “hateful remarks about the alleged relationship between Barack Obama and the former radical Bill Ayers,” singling out Sean Hannity for making hay out of Obama’s “alleged associations” (op. cit., pages 13-14, no longer displayed).
What could possibly be more important than whether a candidate for president does indeed “pal around with terrorists”? Of all the subjects to declare off limits, this one is right up there with whether the anti-CO2 alarmists who are trying to unplug the modern world are telling the truth. And Sunstein’s own bias on the matter could hardly be more blatant. Bill Ayers is a “former” radical? Bill “I don’t regret setting bombs” Ayers? Bill “we didn’t do enough” Ayers?
For the facts of the Obama-Ayers relationship, Sunstein apparently accepts Obama’s campaign dismissal of Ayers as just “a guy who lives in my neighborhood.” In fact their relationship was long and deep. Obama’s political career was launched via a fundraiser in Bill Ayers’ living room; Obama was appointed the first chairman of the Ayers-founded Annenberg Challenge, almost certainly at Ayers’ request [link broken]; Ayers and Obama served together on the board of the Woods Foundation, distributing money to radical left-wing causes; and it has now been reported by full-access White House biographer Christopher Andersen (and confirmed by Bill Ayers) that Ayers actually ghost wrote Obama’s first book Dreams from My Father.
Whenever free speech is attacked, the real purpose is to cover up the truth. Not that Sunstein himself knows the truth about anything. He just knows what he wants to suppress, which is exactly why government must never have this power.
As Rawls further noted, CS also wanted to protect “warmists” from their critics, that is, to suppress science in the name of science:
In climate science, there is no avoiding “reference to the machinations of powerful people, who have also managed to conceal their role.” The Team has always been sloppy about concealing its machinations, but that doesn’t stop Sunstein from using climate skepticism as an exemplar of pernicious conspiracy theorizing, and his goal is perfectly obvious: he wants the state to take aggressive action that will make it easier for our powerful government funded scientists to conceal their machinations.
After CS returned to academe, his spirit lived on in the White House, particularly with regard to CS’s advocacy of thought control, which I exposed at length in part 4 of this series.
Thus:
[Obama] Administration officials have asked YouTube to review a controversial video that many blame for spurring a wave of anti-American violence in the Middle East.
The administration flagged the 14-minute “Innocence of Muslims” video and asked that YouTube evaluate it to determine whether it violates the site’s terms of service, officials said Thursday. The video, which has been viewed by nearly 1.7 million users, depicts Muhammad as a child molester, womanizer and murderer — and has been decried as blasphemous and Islamophobic.
“Review” it, or else. When the 500-pound gorilla speaks, you say “yes, sir.”
Way to go, O-blame-a. Do not stand up for Americans. Suppress them instead. It’s the CS way.
CS later regaled his followers with this:
Suppose that an authoritarian government decides to embark on a program of curricular reform, with the explicit goal of indoctrinating the nation’s high school students. Suppose that it wants to change the curriculum to teach students that their government is good and trustworthy, that their system is democratic and committed to the rule of law, and that free markets are a big problem.
Will such a government succeed? Or will high school students simply roll their eyes?
Questions of this kind have long been debated, but without the benefit of reliable evidence. New research, from Davide Cantoni of the University of Munich and several co-authors, shows that recent curricular reforms in China, explicitly designed to transform students’ political views, have mostly worked….
… [G]overnment planners were able to succeed in altering students’ views on fundamental questions about their nation. As Cantoni and his co-authors summarize their various findings, “the state can effectively indoctrinate students.” To be sure, families and friends matter, as do economic incentives, but if an authoritarian government is determined to move students in major ways, it may well be able to do so.
Is this conclusion limited to authoritarian nations? In a democratic country with a flourishing civil society, a high degree of pluralism, and ample room for disagreement and dissent — like the U.S. — it may well be harder to use the curriculum to change the political views of young people. But even in such societies, high schools probably have a significant ability to move students toward what they consider “a correct worldview, a correct view on life, and a correct value system.” That’s an opportunity, to be sure, but it is also a warning. [“Open Brain, Insert Ideology,” Bloomberg View, May 20, 2014]
Where had CS been? He seemed unaware of the left-wing ethos that has long prevailed in most of America’s so-called institutions of learning. It doesn’t take an authoritarian government (well, not one as authoritarian as China’s) to indoctrinate students in “a correct worldview, a correct view on life, and a correct value system”. All it takes is the spread of left-wing “values” by the media and legions of pedagogues, most of them financed (directly and indirectly) by a thoroughly subverted government. It’s almost a miracle — and something of a moral victory — that there are still tens of millions of Americans who resist and oppose left-wing “values”.
Moving on, I found CS arguing circularly in his contribution to a collection of papers entitled “Economists on the Welfare State and the Regulatory State: Why Don’t Any Argue in Favor of One and Against the Other?” (Econ Journal Watch, Volume 12, Issue 1, January 2015):
… [I]t seems unhelpful, even a recipe for confusion, to puzzle over the question whether economists (or others) ‘like,’ or ‘lean toward,’ both the regulatory state and the welfare state, or neither, or one but not the other. But there is a more fine-grained position on something like that question, and I believe that many (not all) economists would support it. The position is this: The regulatory state should restrict itself to the correction of market failures, and redistributive goals are best achieved through the tax system. Let’s call this (somewhat tendentiously) the Standard View….
My conclusion is that it is not fruitful to puzzle over the question whether economists and others ‘favor’ or ‘lean’ toward the regulatory or welfare state, and that it is better to begin by emphasizing that the first should be designed to handle market failures, and that the second should be designed to respond to economic deprivation and unjustified inequality…. [Sunstein, “Unhelpful Abstractions and the Standard View,” op cit.]
“Market failures” and “unjustified inequality” are the foundation stones of what passes for economic and social thought on the left. Every market outcome that falls short of the left’s controlling agenda is a “failure”. And market and social outcomes that fall short of the left’s illusory egalitarianism are “unjustified”. CS, in other words, couldn’t (and probably still can’t) see that he is a typical leftist who (implicitly) favors both the regulatory state and the welfare state. He is like a fish in water.
Along came a writer who seemed bent on garnering sympathy for CS. I am referring to Andrew Marantz, who wrote “How a Liberal Scholar of Conspiracy Theories Became the Subject of a Right-Wing Conspiracy Theory” (New Yorker, December 27, 2017):
In 2010, Marc Estrin, a novelist and far-left activist from Vermont, found an online version of a paper by Cass Sunstein, a professor at Harvard Law School and the most frequently cited legal scholar in the world. The paper, called “Conspiracy Theories,” was first published in 2008, in a small academic journal called the Journal of Political Philosophy. In it, Sunstein and his Harvard colleague Adrian Vermeule attempted to explain how conspiracy theories spread, especially online. At one point, they made a radical proposal: “Our main policy claim here is that government should engage in cognitive infiltration of the groups that produce conspiracy theories.” The authors’ primary example of a conspiracy theory was the belief that 9/11 was an inside job; they defined “cognitive infiltration” as a program “whereby government agents or their allies (acting either virtually or in real space, and either openly or anonymously) will undermine the crippled epistemology of believers by planting doubts about the theories and stylized facts that circulate within such groups.”
Nowhere in the final version of the paper did Sunstein and Vermeule state the obvious fact that a government ban on conspiracy theories would be unconstitutional and possibly dangerous. (In a draft that was posted online, which remains more widely read, they emphasized that censorship is “inconsistent with principles of freedom of expression,” although they “could imagine circumstances in which a conspiracy theory became so pervasive, and so dangerous, that censorship would be thinkable.”)* “I was interested in the mechanisms by which information, whether true or false, gets passed along and amplified,” Sunstein told me recently. “I wanted to know how extremists come to believe the warped things they believe, and, to a lesser extent, what might be done to interrupt their radicalization. But I suppose my writing wasn’t very clear.”
On the contrary, CS’s writing was quite clear. So clear that even leftists were alarmed by it. Returning to Marantz’s account:
When Barack Obama became President, in 2009, he appointed Sunstein, his friend and former colleague at the University of Chicago Law School, to be the administrator of the Office of Information and Regulatory Affairs. The O.I.R.A. reviews drafts of federal rules, and, using tools such as cost-benefit analysis, recommends ways to make them more efficient. O.I.R.A. administrator is the sort of in-the-weeds post that even lifelong technocrats might find unglamorous; Sunstein had often described it as his “dream job.” He took a break from academia and moved to Washington, D.C. It soon became clear that some of his published views, which he’d thought of as “maybe a bit mischievous, but basically fine, within the context of an academic journal,” could seem far more nefarious in the context of the open Internet.
Estrin, who seems to have been the first blogger to notice the “Conspiracy Theories” paper, published a post in January, 2010, under the headline “Got Fascism?” “Put into English, what Sunstein is proposing is government infiltration of groups opposing prevailing policy,” he wrote on the “alternative progressive” Web site the Rag Blog. Three days later, the journalist Daniel Tencer (Twitter bio: “Lover of great narratives in all their forms”) expanded on Estrin’s post, for Raw Story. Two days after that, the civil-libertarian journalist Glenn Greenwald wrote a piece for Salon headlined “Obama Confidant’s Spine-Chilling Proposal.” Greenwald called Sunstein’s paper “truly pernicious,” concluding, “The reason conspiracy theories resonate so much is precisely that people have learned—rationally—to distrust government actions and statements. Sunstein’s proposed covert propaganda scheme is a perfect illustration of why that is.” Sunstein’s “scheme,” as Greenwald put it, wasn’t exactly a government action or statement. Sunstein wasn’t in government when he wrote it, in 2008; he was in the academy, where his job was to invent thought experiments, including provocative ones. But Greenwald was right that not all skepticism is paranoia.
And then:
Three days after Estrin’s post was published on the Rag Blog, the fire jumped to the other side of the road. Paul Joseph Watson, writing for the libertarian conspiracist outfit InfoWars, linked to Estrin’s post and riffed on it, in a free-associative mode, for fifteen hundred words. “It is a firmly established fact that the military-industrial complex which also owns the corporate media networks in the United States has numerous programs aimed at infiltrating prominent Internet sites and spreading propaganda to counter the truth,” Watson wrote. His boss at InfoWars, Alex Jones, began expanding on this talking point on his daily radio show: “Cass Sunstein says ban conspiracy theories, and that’s whatever he says it is. That’s on record.”
At the time, Glenn Beck hosted both a daily TV show on Fox News and a syndicated radio show; according to a Harris poll, he was the country’s second-favorite TV personality, after Oprah Winfrey. Beck had been delivering impassioned rants against Sunstein for months, calling him “the most dangerous man in America.” Now he added the paper about conspiracy theories to his litany of complaints. In one typical TV segment, in April of 2010, he devoted several minutes to a close reading of the paper, which lists five possible ways that a government might respond to conspiracy theories, including banning them outright. “The government should ban them,” Beck said, over-enunciating to express his incredulity. “How a government with an amendment guaranteeing freedom of speech bans a conspiracy theory is absolutely beyond me, but it’s not beyond a great mind and a great thinker like Cass Sunstein.” In another show, Beck insinuated that Sunstein had been inspired by Edward Bernays, the author of a 1928 book called “Propaganda.” “I got a flood of messages that night, saying, ‘You should be ashamed of yourself, you’re a disciple of Bernays,’ ” Sunstein recalled. “The result was that I was led to look up this interesting guy Bernays, whom I might not have heard of otherwise.”
For much of 2010 and 2011, Sunstein was such a frequent target on right-wing talk shows that some Tea Party-affiliated members of Congress started to invoke his name as a symbol of government overreach. Earlier in the Obama Administration, Beck had targeted Van Jones, now of CNN, who was then a White House adviser on green jobs. After a few weeks of Beck’s attacks, Jones resigned. “Then Beck made it sort of clear that he wanted me to be next,” Sunstein said. “It wasn’t a pleasant fact, but I didn’t see what I could do about it. So I put it out of my mind.”
Sunstein was never asked to resign. He served as the head of O.I.R.A. for three years, then returned to Harvard, in 2012. Two years later, he published an essay collection called “Conspiracy Theories and Other Dangerous Ideas.” The first chapter was a revised version of the “Conspiracy Theories” paper, with several qualifications added and with Vermeule’s name removed. But the revisions did nothing to improve Sunstein’s standing on far-right talk shows, where he had already earned a place, along with Saul Alinsky and George Soros and Al Gore, in the pantheon of globalist bogeymen. Beck referred to Sunstein as recently as last year, on his radio show, while discussing the Obama Administration’s “propaganda” in favor of the Iran nuclear deal. “We no longer have Jefferson and Madison leading us,” Beck said. “We have Saul Alinsky and Cass Sunstein. Whatever it takes to win, you do.” Last December, Alex Jones—who is, improbably, now taken more seriously than Beck by many conservatives, including some in the White House—railed against a recent law, the Countering Foreign Propaganda and Disinformation Act, claiming, speciously, that it would “completely federalize all communications in the United States” and “put the C.I.A. in control of media.” According to Jones, blame for the law rested neither with the members of Congress who wrote it nor with President Obama, who signed it. “I was sitting here this morning . . . And I keep thinking, What are you looking at that’s triggered a memory here?” Jones said. “And then I remembered, Oh, my gosh! It’s Cass Sunstein.”
Cue the tears for Sunstein:
Recently, on the Upper East Side, Sunstein stood behind a Lucite lectern and gave a talk about “#Republic.” Attempting to end on a hopeful note, he quoted John Stuart Mill: “It is hardly possible to overrate the value . . . of placing human beings in contact with persons dissimilar to themselves.” He then admitted, with some resignation, that this describes the Internet we should want, not the Internet we have.
After the talk, we sat in a hotel restaurant and ordered coffee. Sunstein has a sense of humor about his time in the spotlight—what he calls not his fifteen minutes of fame but his Two Minutes Hate, an allusion to “1984”—and yet he wasn’t sure what lessons he had learned from the experience, if any. “I can’t say I spent much time thinking about it, then or now,” he said. “The rosy view would be that it says something hopeful about us—about Americans, that is. We’re highly distrustful of anything that looks like censorship, or spying, or restriction of freedom in any way. That’s probably a good impulse.” He folded his hands on the table, as if to signal that he had phrased his thoughts as diplomatically as possible.
I’m not buying it. CS deserved (and deserves) every bit of blame that has come his way, and I certainly wouldn’t buy a car or house from him. He was attacked from the left and right for good reason, and portraying his attackers as kooks and extremists doesn’t change the facts of the matter. Sunstein’s 2010 article wasn’t a one-off thing. Six years earlier he published “The Future of Free Speech”, which I quoted from and analyzed in part 4 of this series. I ended with this:
[T]he fundamental reason to reject [CS’s] scheme is its authoritarianism. It would effectively bring the broadcast media and the internet under control by a government bureaucracy. Any bureaucracy that is empowered to insist upon “completeness”, “fairness”, and “balance” in the exposition of ideas is thereby empowered to define and enforce its conception of those attributes. It is easy to imagine how a bureaucracy that is dominated by power-crazed zealots who espouse socialism, gender fluidity, “equity”, etc., etc., would deploy its power.
In an earlier post I said that Cass Sunstein is to the integrity of constitutional law as Pete Rose was to the integrity of baseball. It’s worse than that: Sunstein’s willingness to abuse constitutional law in the advancement of a statist agenda reminds me of Hitler’s abuse of German law to advance his repugnant agenda.
There is remorse for having done something wrong, and there is chagrin at having been caught doing something wrong. CS’s conversation-over-coffee with Marantz reads very much like the latter.
It remains a mystery to me why CS has been called a “legal Olympian.” Then, again, if there were a legal Olympics, its main events would be Obfuscation and Casuistry, and CS would be a formidable contestant in both events.
I was unaware of the Implicit Association Test (IAT) until a few years ago, when I took a test at YourMorals.Org that purported to measure my implicit racial preferences. I’ll say more about that after discussing IAT, which has been exposed as junk. That’s what John J. Ray calls it:
Psychologists are well aware that people often do not say what they really think. It is therefore something of a holy grail among them to find ways that WILL detect what people really think. A very popular example of that is the Implicit Associations test (IAT). It supposedly measures racist thoughts whether you are aware of them or not. It sometimes shows people who think they are anti-racist to be in fact secretly racist.
I dismissed it as a heap of junk long ago (here and here) but it has remained very popular and is widely accepted as revealing truth. I am therefore pleased that a very long and thorough article has just appeared which comes to the same conclusion that I did. [“Psychology’s Favorite Tool for Measuring Racism Isn’t Up to the Job”, Political Correctness Watch, September 6, 2017]
The article in question (which has the same title as Ray’s post) is by Jesse Singal. It appeared at Science of Us on January 11, 2017. Here are some excerpts:
Perhaps no new concept from the world of academic psychology has taken hold of the public imagination more quickly and profoundly in the 21st century than implicit bias — that is, forms of bias which operate beyond the conscious awareness of individuals. That’s in large part due to the blockbuster success of the so-called implicit association test, which purports to offer a quick, easy way to measure how implicitly biased individual people are….
Since the IAT was first introduced almost 20 years ago, its architects, as well as the countless researchers and commentators who have enthusiastically embraced it, have offered it as a way to reveal to test-takers what amounts to a deep, dark secret about who they are: They may not feel racist, but in fact, the test shows that in a variety of intergroup settings, they will act racist….
[The] co-creators are Mahzarin Banaji, currently the chair of Harvard University’s psychology department, and Anthony Greenwald, a highly regarded social psychology researcher at the University of Washington. The duo introduced the test to the world at a 1998 press conference in Seattle — the accompanying press release noted that they had collected data suggesting that 90–95 percent of Americans harbored the “roots of unconscious prejudice.” The public immediately took notice: Since then, the IAT has been mostly treated as a revolutionary, revelatory piece of technology, garnering overwhelmingly positive media coverage….
Maybe the biggest driver of the IAT’s popularity and visibility, though, is the fact that anyone can take the test on the Project Implicit website, which launched shortly after the test was unveiled and which is hosted by Harvard University. The test’s architects reported that, by October 2015, more than 17 million individual test sessions had been completed on the website. As will become clear, learning one’s IAT results is, for many people, a very big deal that changes how they view themselves and their place in the world.
Given all this excitement, it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way….
Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior — even the test’s creators have now admitted as such.
How does IAT work? Singal summarizes:
You sit down at a computer where you are shown a series of images and/or words. First, you’re instructed to hit ‘i’ when you see a “good” term like pleasant, or to hit ‘e’ when you see a “bad” one like tragedy. Then, hit ‘i’ when you see a black face, and hit ‘e’ when you see a white one. Easy enough, but soon things get slightly more complex: Hit ‘i’ when you see a good word or an image of a black person, and ‘e’ when you see a bad word or an image of a white person. Then the categories flip to black/bad and white/good. As you peck away at the keyboard, the computer measures your reaction times, which it plugs into an algorithm. That algorithm, in turn, generates your score.
If you were quicker to associate good words with white faces than good words with black faces, and/or slower to associate bad words with white faces than bad words with black ones, then the test will report that you have a slight, moderate, or strong “preference for white faces over black faces,” or some similar language. You might also find you have an anti-white bias, though that is significantly less common. By the normal scoring conventions of the test, positive scores indicate bias against the out-group, while negative ones indicate bias against the in-group.
The rough idea is that, as humans, we have an easier time connecting concepts that are already tightly linked in our brains, and a tougher time connecting concepts that aren’t. The longer it takes to connect “black” and “good” relative to “white” and “good,” the thinking goes, the more your unconscious biases favor white people over black people.
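Singal’s description can be made concrete with a short sketch. What follows is a minimal illustration, assuming a simplified scoring rule in the spirit of the IAT’s published “D-score” (the difference in mean reaction times between the incompatible and compatible blocks, divided by the pooled standard deviation of all trials); the function name and the toy data are my own, not Project Implicit’s actual algorithm.

```python
# A minimal sketch of IAT-style scoring (simplified; not Project Implicit's code).
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Rough D-score from two lists of reaction times (milliseconds).

    Positive values mean slower responses in the "incompatible" block
    (e.g., black/good paired), which the test's scoring conventions
    read as bias against the out-group.
    """
    pooled = compatible_rts + incompatible_rts
    spread = stdev(pooled)  # pooled standard deviation of all trials
    if spread == 0:
        return 0.0
    return (mean(incompatible_rts) - mean(compatible_rts)) / spread

# Toy data: slightly slower responses when "black" is paired with "good".
compatible = [650, 700, 620, 680, 640]    # white/good, black/bad trials
incompatible = [690, 720, 660, 710, 670]  # black/good, white/bad trials
print(round(iat_d_score(compatible, incompatible), 2))  # roughly 1.0 for this toy data
```

Note how small shifts in a handful of reaction times move the score; that sensitivity is part of what Singal means in calling the measure noisy and unreliable.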
Singal continues (at great length) to pile up the mountain of evidence against IAT, and to caution against reading anything into the results it yields.
Having become aware of the debunking of IAT, I went to the website of Project Implicit. I was surprised to learn that I could not only find out whether I’m a closet racist but also whether I prefer dark or light skin tones, Asians or non-Asians, Trump or a previous president, and several other things or their opposites. I chose to discover my true feelings about Trump vs. a previous president, and was faced with a choice between Trump and Clinton.
What was the result of my several minutes of tapping “e” and “i” on the keyboard of my PC? This:
Your data suggest a moderate automatic preference for Bill Clinton over Donald Trump.
Balderdash! Though Trump is obviously not of better character than Clinton, he’s obviously not of worse character. And insofar as policy goes, the difference between Trump and Clinton is somewhat like the difference between a non-silent Calvin Coolidge and an FDR without the patriotism. (With apologies to the memory of Coolidge, my favorite president.)
Now, what did IAT say about my racism, or lack thereof? For years I proudly posted these results at the bottom of my “About” page and in the accompanying moral profile:
The study you just completed is an Implicit Association Test (IAT) that compares the strength of automatic mental associations. In this version of the IAT, we investigated positive and negative associations with the categories of “African Americans” and “European Americans”.
The idea behind the IAT is that concepts with very closely related (vs. unrelated) mental representations are more easily and quickly responded to as a single unit. For example, if “European American” and “good” are strongly associated in one’s mind, it should be relatively easy to respond quickly to this pairing by pressing the “E” or “I” key. If “European American” and “good” are NOT strongly associated, it should be more difficult to respond quickly to this pairing. By comparing reaction times on this test, the IAT gives a relative measure of how strongly associated the two categories (European Americans, African Americans) are to mental representations of “good” and “bad”. Each participant receives a single score, and your score appears below.
Your score on the IAT was 0.07.
Positive scores indicate a greater implicit preference for European Americans relative to African Americans, and negative scores indicate an implicit preference for African Americans relative to European Americans.
Your score appears in the graph below in green. The score of the average Liberal visitor to this site is shown in blue and the average Conservative visitor’s score is shown in red.
It should be noted that my slightly positive score probably was influenced by the order in which choices were presented to me. Initially, pleasant concepts were associated with photos of European-Americans. I became used to that association, and so found that it affected my reaction time when I was faced with pairings of pleasant concepts and photos of African-Americans. The bottom line: My slight preference for European-Americans probably is an artifact of test design.
In other words, I believed that my very low score, despite the test set-up, “proved” that I am not a racist. But thanks (or no thanks) to John Ray and Jesse Singal, I must conclude, sadly, that I have no “official” proof of my non-racism.
I suspect that I am not a racist. I don’t despise blacks as a group, nor do I believe that they should have fewer rights and privileges than whites. (Neither do I believe that they should have more rights and privileges than whites or persons of Asian or Ashkenazi Jewish descent — but they certainly do when it comes to college admissions, hiring, and firing.) It isn’t racist to understand that race isn’t a social construct (except in a meaningless way) and that there are general differences between races (see many of the posts listed here). That’s just a matter of facing facts, not ducking them, as leftists are wont to do.
What have I learned from the IAT? I must have very good reflexes. A person who processes information rapidly and then almost instantly translates it into a physical response should be able to “beat” the IAT. And that’s probably what I did in the Trump vs. Clinton test, if not in the racism test. I’m a fast typist and very quick at catching dropped items before they hit the floor. (My IQ, or what’s left of it, isn’t bad either; go here and scroll down to the section headed “Intelligence, Temperament, and Beliefs”.)
Perhaps the IAT for racism could be used to screen candidates for fighter-pilot training. Only “non-racists” would be admitted. Anyone who isn’t quick enough to avoid the “racist” label isn’t quick enough to win a dogfight.
The Iraq War has been called many things, “immoral” being among the leading adjectives for it. Was it altogether immoral? Was it immoral to remain in favor of the war after it was (purportedly) discovered that Saddam Hussein didn’t have an active program for the production of weapons of mass destruction? Or was the war simply misdirected from its proper — and moral — purpose: the service of Americans’ interests by stabilizing the Middle East? I address those and other questions about the war in what follows.
The sole justification for the United States government is the protection of Americans’ interests. Those interests are spelled out broadly in the Preamble to the Constitution: justice, domestic tranquility, the common defense, the general welfare, and the blessings of liberty.
Contrary to leftist rhetoric, the term “general welfare” in the Preamble (and in Article I, Section 8) doesn’t grant broad power to the national government to do whatever it deems to be “good”. “General welfare” — general well-being, not the well-being of particular regions or classes — is merely one of the intended effects of the enumerated and limited powers granted to the national government by conventions of the States.
One of the national government’s specified powers is the making of war. In the historical context of the adoption of the Constitution, it is clear that the purpose of the war-making power is to defend Americans and their legitimate interests: liberty generally and, among other things, the free flow of trade between American and foreign entities. The war-making power carries with it the implied power to do harm to foreigners in the course of waging war. I say that because the Framers, many of whom fought for independence from Britain, knew from experience that war, of necessity, must sometimes cause damage to the persons and property of non-combatants.
In some cases, the only way to serve the interests of Americans is to inflict deliberate damage on non-combatants. That was the case, for example, when U.S. air forces dropped atomic bombs on Hiroshima and Nagasaki to force Japan’s surrender and avoid the deaths and injuries of perhaps a million Americans. Couldn’t Japan have been “quarantined” instead, once its forces had been driven back to the homeland? Perhaps, but at great cost to Americans. Luckily, in those days American leaders understood that the best way to ensure that an enemy didn’t resurrect its military power was to defeat it unconditionally and to occupy its homeland. You will have noticed that as a result, Germany and Japan are no longer military threats to the U.S., whereas Iraq remained one after the Gulf War of 1990-1991 because Saddam wasn’t deposed. Russia, which the U.S. didn’t defeat militarily — only symbolically — is resurgent militarily. China, which wasn’t even defeated symbolically in the Cold War, is similarly resurgent, and bent on regional if not global hegemony, necessarily to the detriment of Americans’ interests. To paraphrase: There is no substitute for unconditional military victory.
That is a hard and unfortunate truth, but it eludes many persons, especially those of the left. They suffer under dual illusions, namely, that the Constitution is an outmoded document and that “world opinion” trumps the Constitution and the national sovereignty created by it. Neither illusion is shared by Americans who want to live in something resembling liberty and to enjoy the advantages pertaining thereto, including prosperity.
The invasion of Iraq in 2003 by the armed forces of the U.S. government (and those of other nations) had explicit and implicit justifications. The explicit justifications for the U.S. government’s actions are spelled out in the Authorization for Use of Military Force Against Iraq of 2002 (AUMF). It passed the House by a vote of 296 – 133 and the Senate by a vote of 77 – 23, and was signed into law by President George W. Bush on October 16, 2002.
There are some who focus on the “weapons of mass destruction” (WMD) justification, which figures prominently in the “whereas” clauses of the AUMF. But the war, as it came to pass when Saddam failed to respond to legitimate demands spelled out in the AUMF, had a broader justification than whatever Saddam was (or wasn’t) doing with WMD. The final “whereas” puts it succinctly: it is in the national security interests of the United States to restore international peace and security to the Persian Gulf region.
An unstated but clearly understood implication of “peace and security in the Persian Gulf region” was the security of the region’s oil supply against Saddam’s capriciousness. The mantra “no blood for oil” to the contrary notwithstanding, it is just as important to defend the livelihoods of Americans as it is to defend their lives — and in many instances it comes to the same thing.
In sum, I disregard the WMD rationale for the Iraq War. The real issue is whether the war secured the stability of the Persian Gulf region (and the Middle East in general). And if it didn’t, why did it fail to do so?
One can only speculate about what might have happened in the absence of the Iraq War. For instance, how many more Iraqis might have been killed and tortured by Saddam’s agents? How many more terrorists might have been harbored and financed by Saddam? How long might it have taken him to re-establish his WMD program or build a nuclear weapons program? Saddam, who started it all with the invasion of Kuwait, wasn’t a friend of the U.S. or the West in general. The U.S. isn’t the world’s policeman, but the U.S. government has a moral obligation to defend the interests of Americans, preemptively if necessary.
By the same token, one can only speculate about what might have happened if the U.S. government had prosecuted the war differently than it did, which was “on the cheap”. There weren’t enough boots on the ground to maintain order in the way that it was maintained by the military occupations in Germany and Japan after World War II. Had there been, there wouldn’t have been a kind of “civil war” or general chaos in Iraq after Saddam was deposed. (It was those things, as much as the supposed absence of a WMD program, that turned many Americans against the war.)
Speculation aside, I supported the invasion of Iraq, the removal of Saddam, and the rout of Iraq’s armed forces with the following results in mind:
A firm military occupation of Iraq, for some years to come.
The presence in Iraq and adjacent waters and airspace of U.S. forces in enough strength to control Iraq and deter misadventures by other nations in the region (e.g., Iran and Syria) and prospective interlopers (e.g., Russia).
Israel’s continued survival and prosperity under the large shadow cast by U.S. forces in the region.
Secure production and shipment of oil from Iraq and other oil-producing nations in the region.
All of that would have happened but for (a) too few boots on the ground (later remedied in part by the “surge”); (b) premature “nation-building”, which helped to stir up various factions in Iraq; (c) Obama’s premature surrender, which he was shamed into reversing; and (d) Obama’s deal with Iran, with its bundles of cash and blind-eye enforcement that supported Iran’s rearmament and growing boldness in the region. (The idea that Iraq, under Saddam, had somehow contained Iran is baloney; Iran was contained only until its threat to go nuclear found a sucker in Obama.)
In sum, the war was only a partial success because (once again) U.S. leaders failed to wage it fully and resolutely. This was due in no small part to incessant criticism of the war, stirred up and sustained by Democrats and the media.
In view of the foregoing, the correct answer is: the U.S. government, or those of its leaders who approved, funded, planned, and executed the war with the aim of bringing peace and security to the Persian Gulf region for the sake of Americans’ interests.
The moral high ground was shared by those Americans who, understanding the war’s justification on grounds broader than WMD, remained steadfast in support of the war despite the tumult and shouting that arose from its opponents.
There were Americans whose support of the war was based on the claim that Saddam had or was developing WMD, and whose support ended or became less ardent when WMD seemed not to be in evidence. I wouldn’t presume to judge them harshly for withdrawing their support, but I would judge them myopic for basing it solely on the WMD predicate. And I would judge them harshly if they joined the outspoken opponents of the war, whose opposition I address below.
What about those Americans who supported the war simply because they believed that President Bush and his advisers “knew what they were doing” or out of a sense of patriotism? That is to say, they had no particular reason for supporting the war other than a general belief that its successful execution would be a “good thing”. None of those Americans deserves moral approbation or moral blame. They simply had better things to do with their lives than to parse the reasons for going to war and for continuing it. And it is no one’s place to judge them for not having wasted their time in thinking about something that was beyond their ability to influence. (See the discussion of “public opinion” below.)
What about those Americans who publicly opposed the war, either from the beginning or later? I cannot fault all of them for their opposition — and certainly not those who considered the costs (human and monetary) and deemed them not worth the possible gains.
But there were (and are) others whose opposition to the war was and is problematic:
Critics who seized on the apparent absence of an active WMD program in Iraq and ignored (or failed to grasp) the war’s broader justification.
Political opportunists who simply wanted to discredit President Bush and his party, which included most Democrats (eventually), effete elites generally, and particularly most members of the academic-media-information technology complex.
An increasingly large share of the impressionable electorate who could not (and cannot) resist a bandwagon.
The young, who are prone to reflexive pro-peace/anti-war posturing and to opposing “the establishment” loudly and often violently.
The moral high ground isn’t gained by misguided criticism, posturing, joining a bandwagon, or hormonal emotionalism.
Suppose you had concluded that the Iraq War was wrong because the WMD justification seemed to have been proven false as the war went on. Perhaps even worse than false: a fraud perpetrated by officials of the Bush administration, if not by the president himself, to push Congress and “public opinion” toward support for an invasion of Iraq.
If your main worry about Iraq, under Saddam, was the possibility that WMD would be used against Americans, the apparent falsity of the WMD claim — perhaps fraudulent falsity — might well have turned you against the war. Suppose that there were many millions of Americans like you, whose initial support of the war turned to disillusionment as evidence of an active WMD program failed to materialize. Would voicing your opinion on the matter have helped to end the war? Did you have a moral obligation to voice your opinion? And, in any event, should wars be ended because of “public opinion”? I will try to answer those questions in what follows.
The strongest case to be made for the persuasive value of voicing one’s opinion might be found in the median-voter theorem. According to Wikipedia, the median-voter theorem
“states that ‘a majority rule voting system will select the outcome most preferred by the median voter’”….
The median voter theorem rests on two main assumptions, with several others detailed below. The theorem is assuming [sic] that voters can place all alternatives along a one-dimensional political spectrum. It seems plausible that voters could do this if they can clearly place political candidates on a left-to-right continuum, but this is often not the case as each party will have its own policy on each of many different issues. Similarly, in the case of a referendum, the alternatives on offer may cover more than one issue. Second, the theorem assumes that voters’ preferences are single-peaked, which means that voters have one alternative that they favor more than any other. It also assumes that voters always vote, regardless of how far the alternatives are from their own views. The median voter theorem implies that voters have an incentive to vote for their true preferences. Finally, the median voter theorem applies best to a majoritarian election system.
The article later specifies seven assumptions underlying the theorem. None of them is satisfied in the real world of American politics. Complexity never favors the truth of a proposition; when all of its assumptions must hold, as is the case here, each additional assumption is simply another way for the proposition to fail.
There is a weak form of the theorem, which says that
the median voter always casts his or her vote for the policy that is adopted. If there is a median voter, his or her preferred policy will beat any other alternative in a pairwise vote.
That still leaves the crucial assumption that voters are choosing between two options. This is superficially true in the case of a two-person race for office or a yes-no referendum. But, even then, a binary option usually masks non-binary ramifications that voters take into account.
In any case, it is trivially true to say that the preference of the median voter foretells the outcome of a binary election, if the outcome is decided by majority vote and there isn’t a complicating factor like the electoral college. One could say, with equal banality, that the stronger man wins the weight-lifting contest, the outcome of which determines who is the stronger man.
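For readers who want to see the mechanics, here is a minimal sketch (in Python, with hypothetical ideal points of my own choosing) of the weak form of the theorem: given a one-dimensional issue space and single-peaked preferences, the median voter’s preferred position beats any rival in a pairwise majority vote.

```python
# A minimal illustration of the weak median-voter result, granting the
# theorem's own assumptions: a one-dimensional issue space and single-peaked
# preferences (each voter prefers whichever position is closer to his ideal).
# The ideal points below are hypothetical, chosen only for illustration.

voter_ideals = [0.1, 0.3, 0.4, 0.6, 0.9]               # positions on a left-right axis
median = sorted(voter_ideals)[len(voter_ideals) // 2]  # the median voter's ideal point

def pairwise_winner(a, b, ideals):
    """Return whichever of positions a and b a majority of voters prefers."""
    votes_a = sum(1 for x in ideals if abs(x - a) < abs(x - b))
    return a if votes_a > len(ideals) - votes_a else b

# The median voter's position beats every rival position head to head.
for rival in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert pairwise_winner(median, rival, voter_ideals) == median
```

The sketch works only because it grants every one of the theorem’s assumptions; drop any of them (more than one dimension, preferences that aren’t single-peaked, voters who stay home) and the result no longer follows.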
Why am I giving so much attention to the median-voter theorem? Because, according to a blogger whose intellectual prowess I respect, if enough Americans believe a policy of the U.S. government to be wrong, the policy might well be rescinded if the responsible elected officials (or, presumably, their prospective successors) believe that the median voter wants the policy rescinded. How would that work?
The following summary of the blogger’s case is what I gleaned from his original post on the subject and several comments and replies. I have inserted parenthetical commentary throughout.
The pursuit of the Iraq War after the WMD predicate for it was (seemingly) falsified — hereinafter policy X — was immoral because X led unnecessarily to casualties, devastation, and other costs. (As discussed above, there were other predicates for X and other consequences of X, some of them good, but they don’t seem to matter to the blogger.)
Because X was immoral (in the blogger’s reckoning), X should have been rescinded.
Rescission would have (might have?/should have?) occurred through the operation of the median-voter theorem if enough persons had made known their opposition to X. (How might the median-voter theorem have applied when X wasn’t on a ballot? See below.)
Any person who had taken the time to consider X (taking into account only the WMD predicate and unequivocally bad consequences) could only have deemed it immoral. (The blogger originally excused persons who deemed X proper, but later made a statement equivalent to the preceding sentence. This is a variant of “heads, I win; tails, you lose”.)
Having deemed X immoral, a person (i.e., a competent, adult American) would have been morally obliged to make known his opposition to X. Even if the person didn’t know of the spurious median-voter theorem, his opposition to X (which wasn’t on a ballot) would somehow have become known and counted (perhaps in a biased opinion poll conducted by an entity opposed to X) and would therefore have helped to move the median stance of the (selectively) polled fragment of the populace toward opposition to X, whereupon X would be rescinded, according to the median-voter theorem. (Or perhaps vociferous opposition, expressed in public protests, would be reported by the media — especially by those already opposed to X — as indicative of public opinion, whether or not it represented a median view of X.)
Further, any competent, adult American who didn’t bother to take the time to evaluate X would have been morally complicit in the continuation of X. (This must be the case because the blogger says so, without knowing each person’s assessment of the slim chance that his view of the matter would affect X, or the opportunity costs of evaluating X and expressing his view of it.)
So the only moral course of action, according to the blogger, was for every competent, adult American to have taken the time to evaluate X (in terms of the WMD predicate), to have deemed it immoral (there being no other choice given the constraint just mentioned), and to have made known his opposition to the policy. (This despite the fact that most competent, adult Americans know viscerally or from experience that the median-voter theorem is hooey — more about that below — and that it would therefore have been a waste of their time to get worked up about a policy that wasn’t unambiguously immoral. Further, they were and are rightly reluctant to align themselves with howling mobs and biased media — even by implication, as in a letter to the editor — in protest of a policy that wasn’t unambiguously immoral.)
Then, X (which wasn’t on a ballot) would have been rescinded, pursuant to the median-voter theorem (or, properly, the outraged/vociferous-pollee/protester-biased pollster/media theorem). (Except that X wasn’t, in fact, rescinded despite massive outpourings of outrage by small fractions of the populace, which were gleefully reflected in biased polls and reported by biased media. Nor was it rescinded by implication when President Bush was up for re-election — he won. It might have been rescinded by implication when Bush was succeeded by Obama — an opponent of X — but there were many reasons other than X for Obama’s victory: mainly the financial crisis, McCain’s lame candidacy, and a desire by many voters to signal — to themselves, at least — their non-racism by voting for Obama. And X wasn’t doing all that badly at the time of Obama’s election because of the troop “surge” authorized by Bush. Further, Obama’s later attempt to rescind X had consequences that caused him to reverse his attempted rescission, regardless of any lingering opposition to X.)
What about other salient, non-ballot issues? Does “public opinion” make a difference? Sometimes yes, sometimes no. Obamacare, for example, was widely opposed until it was enacted by Congress and signed into law by Obama. It suddenly became popular because much of the populace wants to be on the “winning side” of an issue. (So much for the moral value of public opinion.) Similarly, abortion was widely deemed to be immoral until the Supreme Court legalized it. Suddenly, it began to become acceptable according to “public opinion”. I could go on and on, but you get the idea: Public opinion often follows policy rather than leading it, and its moral value is dubious in any event.
But what about cases where government policy shifted in the aftermath of widespread demonstrations and protests? Did demonstrations and protests lead to the enactment of the Civil Rights Acts of the 1960s? Did they cause the U.S. government to surrender, in effect, to North Vietnam? No and no. From where I sat — and I was a politically aware, voting-age, adult American of the “liberal” persuasion at the time of those events — public opinion had little effect on the officials who were responsible for the Civil Rights Acts or the bug-out from Vietnam.
The civil-rights movement of the 1950s and 1960s and the anti-war movement of the 1960s and 1970s didn’t yield results until years after their inception. And those results didn’t (at the time, at least) represent the views of most Americans who (I submit) were either indifferent or hostile to the advancement of blacks and to the anti-patriotic undertones of the anti-war movement. In both cases, mass protests were used by the media (and incited by the promise of media attention) to shame responsible officials into acting as media elites wanted them to.
Further, it is a mistake to assume that the resulting changes in law (writ broadly to include policy) were necessarily good changes. The stampede to enact civil-rights laws in the 1960s, which hinged not so much on mass protests as on LBJ’s “white guilt” and powers of persuasion, resulted in the political suppression of an entire region, the loss of property rights, and the denial of freedom of association. (See, for example, Christopher Caldwell’s “The Roots of Our Partisan Divide”, Imprimis, February 2020.)
The bug-out from Vietnam foretold the U.S. government’s fecklessness in the Iran hostage crisis; the withdrawal of U.S. forces from Lebanon after the bombing of Marine barracks there; the failure of G.H.W. Bush to depose Saddam when it would have been easy to do so; the legalistic response to the World Trade Center bombing; the humiliating affair in Somalia; Clinton’s failure to take out Osama bin Laden; Clinton’s tepid response to Saddam’s provocations; nation-building (vice military occupation) in Iraq; and Obama’s attempt to pry defeat from the jaws of something resembling victory in Iraq.
All of that, and more, is symptomatic of the influence that “liberal” elites came to exert on American foreign and defense policy after World War II. Public opinion has been a side show, and protestors have been useful idiots to the cause of “liberal internationalism”, that is, the surrender of Americans’ economic and security interests for the sake of various rapprochements toward “allies” who scorn America when it veers ever so slightly from the road to serfdom, and enemies — Russia and China — who have never changed their spots, despite “liberal” wishful thinking. Handing America’s manufacturing base to China in the name of free trade is of a piece with all the rest.
It is irresponsible to call a policy immoral without evaluating all of its predicates and consequences. One might as well call the Allied leaders of World War II immoral because they chose war — with all of its predictably terrible consequences — rather than abject surrender.
It is fatuous to ascribe immorality to anyone who was supportive of or indifferent to the war. One might as well ascribe immorality to the economic and political ignoramuses who failed to see that FDR’s policies would prolong the Great Depression, that Social Security and its progeny (Medicare and Medicaid) would become entitlements that paved the way for the central government’s commandeering of vast portions of the economy, or that the so-called social safety net would discourage work and permanently depress economic growth in America.
If I were in the business of issuing moral judgments about the Iraq War, I would condemn the strident anti-war faction for its perfidy.
… of irritating persons to be taken out and shot,
And who never would be missed — who never would be missed!
There’s the pestilential nuisances who shout into their phones,
Baring inner secrets at the volume of trombones —
All people who wear stubbly beards and iridescent tats —
All children who are petulant and whiny little brats —
All drivers who in changing lanes do so without a glance —
And others who stare at green lights as if lost in a trance —
They’d none of ‘em be missed–they’d none of ‘em be missed!
CHORUS. He’s got ‘em on the list — he’s got ‘em on the list;
And they’ll none of ‘em be missed — they’ll none of
‘em be missed.
There’s the rap and hip-hop devotee, and the others of his ilk,
And the break-dance enthusiast — I’ve got him on the list!
And the people who eat a sushi roll and puff it in your face,
They never would be missed — they never would be missed!
Then the idiot who praises, with enthusiastic tone,
Films that don’t have endings, and all races but his own;
And the “lady” in the leotard, who looks just like a guy,
And who doesn’t need to marry, but would rather like to
try;
And that singular anomaly, the wealthy socialist —
I don’t think he’d be missed — I’m sure he’d not be missed!
CHORUS. He’s got him on the list — he’s got him on the list;
And I don’t think he’ll be missed — I’m sure
he’ll not be missed!
And that jurisprudential malcontent, who just now is rather rife,
The loose constructionist — I’ve got him on the list!
All perfumed fellows, girly men, and dykes who seek a “wife” —
They’d none of ‘em be missed — they’d none of ‘em be missed.
And apologetic statesmen of a compromising kind,
Such as — What d’ye call him — Thing’em-bob, and
likewise — Never-mind,
And ‘St–’st–’st–and What’s-his-name, and also You-know-who —
The task of filling up the blanks I’d rather leave to you.
But it really doesn’t matter whom you put upon the list,
For they’d none of ‘em be missed — they’d none of ‘em be
missed!
CHORUS. You may put ‘em on the list–you may put ‘em on the list;
And they’ll none of ‘em be missed — they’ll none of
‘em be missed!
__________
Adapted from W.S. Gilbert’s lyrics for “I’ve Got a Little List,” which is sung by the character Ko-Ko in Gilbert and Sullivan’s The Mikado (1885). The original lyrics, with annotations, may be found here. The last seven lines of the final verse are unchanged, Gilbert’s observations about “statesmen” being timeless.
Among the many reasons for my hatred of flying is that I am usually seated behind someone who fails to heed the notice to return his or her seat-back to the upright position. This is a mild annoyance, compared with the severe annoyances and outright dangers that go with driving in Austin. Austiners (a moniker that I prefer to the pretentiousness of “Austinites”) exhibit a variety of egregious driving habits, the number of which exceeds the number of Willie (The Actor) Sutton‘s convictions for bank robbery.
Without further ado, I give you driving in Austin:
First on the list, because I see it so often in my neck of Austin, is driving in the middle of an unstriped, residential street, even as another vehicle approaches. This practice might be excused as a precaution because Austiners often exit parked cars by opening doors and stepping out, heedless of traffic. But middle-of-the-road driving occurs spontaneously and is of a piece with the following self-centered habits.
Next is waiting until the last split-second to turn onto a street. This practice — which prevails along Florida’s Gulf Coast because of the age of the population there — is indulged in by drivers of all ages in Austin. It is closely related to the habit of ignoring stop signs, not just by failing to stop at them but also (and quite typically) failing to look before not stopping. Ditto — and more dangerously — red lights.
Not quite as dangerous, but mightily annoying, is the Austin habit of turning abruptly without giving a signal. And when the turn is to the right, it often is accompanied by a loop to the left, which thoroughly confuses the driver of the following vehicle and can cause him to veer into danger.
Loopy driving reaches new heights when an Austiner changes lanes or crosses lanes of traffic without looking. A signal, rarely given, occurs after the driver has made his or her move, and it means “I’m changing/crossing lanes because it’s my God-given right to do so whenever I feel like it, and it’s up to other drivers to avoid hitting my vehicle.”
The imperial prerogative — I drive where I please — also manifests itself in the form of crossing the center line while taking a curve. That this is done by drivers of all types of vehicle, from itsy-bitsy cars to hulking SUVs, indicates that the problem is sloppy driving habits, not unresponsive steering mechanisms. Other, closely related practices are taking a corner by cutting across the oncoming lane of traffic and zipping through a parking lot as if no child, other pedestrian, or vehicle might suddenly appear in the traffic lane.
At the other end of the spectrum, but just as indicative of thoughtlessness, is the practice of yielding the right of way when it’s yours. This perverse courtesy only confuses the driver who doesn’t have the right of way and causes traffic to back up (needlessly) behind the yielding driver.
Then there is the seeming inability of most Austiners to park approximately in the middle of a head-in parking space and parallel to the stripes that delineate it. The ranks of the parking-challenged seem to be filled with yuppie women in small BMWs, Infinitis, and Lexi; older women in almost any kind of vehicle; and (worst of all) drivers of SUVs – of which “green” Austin has far more than its share on its antiquated street grid. It should go without saying that most of Austin’s SUV drivers are obnoxious, tail-gating jerks when they’re on the road.
Contributing to the preceding practices — and compounding the dangers of the many dangerous ones — is the evidently inalienable right of an Austiner to talk on a cell phone while driving, everywhere and (it seems) always. Yuppie women in SUVs are the worst offenders, and the most dangerous of the lot because of their self-absorption and the number of tons they wield with consummate lack of skill. Austin, it should also go without saying, has more than its share of yuppie women.
None of the above is unique to Austin. But inconsiderate and dangerous driving habits seem much more prevalent in Austin than in other places where I have driven — even including the D.C. area, where I spent 37 years.
My theory is that the prevalence of bad-driving behavior in Austin — where “liberalism” is hard-left and dominant — reflects the essentially anti-social character of “liberalism”. Despite the lip-service that “liberals” give to such things as compassion, community, and society, they worship the state and use its power to do their will — without thought or care for the lives and livelihoods thus twisted and damaged.
Loquitur Veritatem (LV): Apropos my previous post, I wish you would quit beating around the bush. If you want something, you have to spell it out. Don’t be coy, Cass, tell us how you would amend the Constitution to ensure that all internet users are exposed to points of view that they would otherwise eschew.
Cass Sunstein (CS): Let’s start with the First Amendment, which deals with freedom of speech and of the press, among other things. I’m suggesting that we simply recognize that not all speech is protected and use that fact to force the purveyors of extreme points of view to acknowledge opposing points of view.
LV: Tell us how you would restate the First Amendment so that it does the right thing.
CS: I would add the following codicil: Congress, in order to promote a more efficacious deliberative democracy, may require persons to acknowledge opposing points of view when they communicate on a subject. Further, Congress may require communications media to assist in that endeavor and to transmit points of view other than those which they might willingly transmit.
LV: So, in the name of political freedom you would curtail freedom?
CS: I don’t think of it that way. We’re all more free, in an intellectual way, when we’re exposed to a diversity of experiences and points of view. Besides, freedom is something we receive from government; government may therefore withdraw some freedom from us when it’s for our good.
LV: Let’s assume, for the sake of this discussion, that people desire political freedom, and the social and economic freedoms that flow from it. Would we really be more free if government forced us to hear, or at least take part in the transmission of, views with which we disagree, or would we simply be encumbered with more rules about how to live our lives?
CS: That’s a negative way of looking at it.
LV: Let me draw an analogy from fiction. Have you read Portnoy’s Complaint?
CS: You aren’t about to slur my ethnicity, are you?
LV: No, not at all. It’s just that the novel’s protagonist, Alex Portnoy, has an experience that reminds me of your proposed codicil to the First Amendment. His mother stood over him with a knife in an effort to make him eat his dinner. Do you think government should act like Alex Portnoy’s mother?
CS: Well, she didn’t need to pull a knife on Alex, but she obviously needed to exert her maternal authority.
LV: You don’t think Alex would have voluntarily eaten his dinner in a day or two rather than starve?
CS: Why take chances? Alex’s mother obviously suffered from anxiety caused by Alex’s refusal to eat his dinner.
LV: But Alex’s mother — being older and larger than Alex, though evidently not wiser — might have reflected on the ramifications of her threat. She didn’t really save Alex from starvation, but she did cause him to disrespect and hate her.
CS: What does that have to do with my version of the First Amendment?
LV: It has a lot to do with what happens to the cohesiveness of society, which you seem to value, when government forces people to behave in certain ways. Consider blacks, for example, who in many cases have been disrespected because they are seen as “affirmative action” doctors, lawyers, teachers, etc. But let’s move on. What about the rules that would require the acknowledgement of opposing points of view? Who would make those rules? In particular, with respect to web sites, who would select those “sites that deal with substantive issues in a serious way”? And who would identify “highly partisan” web sites that “must carry” icons pointing to those “sites that deal with substantive issues in a serious way”?
CS: An agency authorized by Congress to do such things.
LV: Let’s assume it’s the FCC, whose members are appointed by the president, subject to confirmation by the Senate. The FCC is essentially a political body, composed of some mix of Democrats and Republicans.
CS: That’s inevitably the case with any regulatory agency.
LV: Right you are. So the FCC, or any agency newly created for the purpose, wouldn’t be neutral about such issues as what constitutes an opposing point of view, which sites deal with substantive issues in a serious way, and which sites are highly partisan.
CS: You have to rely on the judgment of those appointed to perform the task of making such evaluations.
LV: But not the judgment — or preferences — of purveyors of news and views?
CS: No, because they’re likely to be wedded to their positions and not open to opposing ideas.
LV: Unlike the political appointees on the FCC and the minions who would actually devise and execute the agency’s rules?
CS: Well, those political appointees would be scrutinized by Congress, and they would be responsible for the actions of their subordinates.
LV: Members of Congress, of course, are always balanced and neutral in their views, and never try to inflict particular points of view on regulatory agencies. Ditto the political appointees and their subordinates, who are experts at “gaming” their bosses and who outlast them by decades.
CS: You’re trying to get me to say that my version of the First Amendment would impose the judgment of politicians and bureaucrats on the news and views of corporate and individual communicators.
LV: Isn’t that exactly what would happen?
CS: But we’re better off when our duly elected representatives and their agents make such decisions. That’s how deliberative democracy is supposed to work.
LV: Oh, we elect them to tell us how to live our lives?
CS: If that’s what it takes to make us better citizens, yes.
LV: You think coercion of that sort would make us a more cohesive society and would make us more appreciative of points of view that differ from our own?
CS: It’s worth a try.
LV: And where do you stop?
CS: What do you mean?
LV: How do you know when society is sufficiently cohesive and that an acceptable fraction of its members have become appreciative of differing points of view? What do you do if communicators simply refused to cooperate with your program?
CS: Well, as to your first question, the FCC would simply monitor the content of broadcasts and web sites. As to your second question, the FCC might shut down uncooperative outlets or place them in the hands of an appointed operator, much as bankruptcy courts use court-appointed receivers to handle the affairs of bankrupt businesses. In the extreme, the FCC might have to resort to criminal sanctions — fines and imprisonment. But that probably wouldn’t happen more than a few times before communicators began to comply with the law.
LV: What you really mean is that the punishment wouldn’t stop until communicators began to comply with the views of the agency’s bureaucrats and those members of Congress who have leverage on the agency because of their oversight roles. Suppose the FCC were composed entirely of members who had a peculiar regard for the original meaning of the Constitution. Suppose, further, that we had, at the same time, a president who felt the same way about the Constitution, and that Congress was in the hands of a sympathetic majority. Now, in the course of monitoring web sites the FCC comes across your essay on “The Future of Free Speech” and deems it an extremist screed, subversive of the Constitution. What do you suppose would happen?
CS: The FCC should order The Little Magazine to post a link to your commentary on my essay. Or it might order The Little Magazine to remove my essay from its site.
LV: Suppose the FCC did neither. Suppose the FCC gave the matter some thought and concluded that it would do nothing about your essay. Instead, it would hew to the original meaning of the Constitution and let you bloviate to your heart’s content.
CS: I would turn myself in to the FCC and demand to be sanctioned to the letter of the law.
LV: Oh, really? Can I count on that? I just want to be sure that you’re willing to live by the rules that you would impose on others.
CS: Most assuredly.
LV: Thank you very much for your (imaginary) time. That’s all for now. But don’t worry, I’ll be keeping an eye on you.
Cass Sunstein’s blatherings at The Volokh Conspiracy about FDR’s “Second Bill of Rights” (addressed here, here, and here) made me want to find out more about his understanding of the proper role of government. I Googled the eminent professor and hit upon “The Future of Free Speech“, which appeared in The Little Magazine, a South Asian journal. Hold your nose and read the whole thing or spare yourself and get the gist of Sunstein’s argument in these excerpts:
My purpose here is to cast some light on the relationship between democracy and new communications technologies. I do so by emphasising the most striking power provided by emerging technologies: the growing power of consumers to “filter” what it is that they see. In the extreme case, people will be fully able to design their own communications universe. They will find it easy to exclude, in advance, topics and points of view that they wish to avoid. I will also provide some notes on the constitutional guarantee of freedom of speech.
An understanding of the dangers of filtering permits us to obtain a better sense of what makes for a well-functioning system of free expression. Above all, I urge that in a heterogeneous society, such a system requires something other than free, or publicly unrestricted, individual choices. On the contrary, it imposes two distinctive requirements. First, people should be exposed to materials that they would not have chosen in advance…. Second, many or most citizens should have a range of common experiences. Without shared experiences, a heterogeneous society will have a much more difficult time addressing social problems; people may even find it hard to understand one another…. [emphasis added]
Imagine … a system of communications in which each person has unlimited power of individual design…. Our communications market is moving rapidly toward this apparently utopian picture….
A distinctive feature of [the Supreme Court’s public forum doctrine] is that it creates a right of speakers’ access, both to places and to people. Another distinctive feature is that the public forum doctrine creates a right, not to avoid governmentally imposed penalties on speech, but to ensure government subsidies of speech…. Thus the public forum represents one place in which the right to free speech creates a right of speakers’ access to certain areas and also demands public subsidy of speakers….
Group polarisation is highly likely to occur on the Internet. Indeed, it is clear that the Internet is serving, for many, as a breeding ground for extremism, precisely because like-minded people are deliberating with one another, without hearing contrary views….
The most reasonable conclusion is that it is extremely important to ensure that people are exposed to views other than those with which they currently agree, in order to protect against the harmful effects of group polarisation on individual thinking and on social cohesion….
The phenomenon of group polarisation is closely related to the widespread phenomenon of ‘social cascades’. No discussion of social fragmentation and emerging communications technologies would be complete without a discussion of that phenomenon….
[O]ne group may end up believing something and another the exact opposite, because of rapid transmission of information within one group but not the other. In a balkanised speech market, this danger takes on a particular form: different groups may be led to dramatically different perspectives, depending on varying local cascades.
I hope this is enough to demonstrate that for citizens of a heterogeneous democracy, a fragmented communications market creates considerable dangers. There are dangers for each of us as individuals; constant exposure to one set of views is likely to lead to errors and confusions. And to the extent that the process makes people less able to work cooperatively on shared problems, there are dangers for society as a whole.
In a heterogeneous society, it is extremely important for diverse people to have a set of common experiences….
The points thus far raise questions about whether a democratic order is helped or hurt by a system of unlimited individual choice with respect to communications. It is possible to fear that such a system will produce excessive fragmentation, with group polarisation as a frequent consequence. It is also possible to fear that such a system will produce too little by way of solidarity goods, or shared experiences….
If the discussion thus far is correct, there are three fundamental concerns from the democratic point of view. These include:
• the need to promote exposure to materials, topics, and positions that people would not have chosen in advance, or at least enough exposure to produce a degree of understanding and curiosity;
• the value of a range of common experiences;
• the need for exposure to substantive questions of policy and principle, combined with a range of positions on such questions.

Of course, it would be ideal if citizens were demanding, and private information providers were creating, a range of initiatives designed to alleviate the underlying concerns…. But to the extent that they fail to do so, it is worthwhile to consider government initiatives designed to pick up the slack….
1. Producers of communications might be subject … to disclosure requirements…. On a quarterly basis, they might be asked to say whether and to what extent they have provided educational programming for children, free airtime for candidates, and closed captioning for the hearing impaired. They might also be asked whether they have covered issues of concern to the local community and allowed opposing views a chance to be heard…. Websites might be asked to say if they have allowed competing views a chance to be heard….
2. Producers of communications might be asked to engage in voluntary self-regulation…. [T]here is growing interest in voluntary self-regulation for both television and the Internet…. Any such code could, for example, call for an opportunity for opposing views to speak, or for avoiding unnecessary sensationalism, or for offering arguments rather than quick ‘sound-bytes’ whenever feasible.
3. The government might subsidise speech, as, for example, through publicly subsidised programming or Websites…. Perhaps government could subsidise a ‘public.net’ designed to promote debate on public issues among diverse citizens — and to create a right of access to speakers of various sorts.
4. If the problem consists in the failure to attend to public issues, the government might impose “must carry” rules on the most popular Websites, designed to ensure more exposure to substantive questions. Under such a program, viewers of especially popular sites would see an icon for sites that deal with substantive issues in a serious way…. Ideally, those who create Websites might move in this direction on their own. If they do not, government should explore possibilities of imposing requirements of this kind, making sure that no program draws invidious lines in selecting the sites whose icons will be favoured….
5. The government might impose “must carry” rules on highly partisan Websites, designed to ensure that viewers learn about sites containing opposing views…. Here too the ideal situation would be voluntary action. But if this proves impossible, it is worth considering regulatory alternatives….
This is “libertarian paternalism” on steroids, as it should be, given Sunstein’s seminal role in that morally bankrupt endeavor. There are many reasons to reject Sunstein’s scheme out of hand. Not the least of them is its administrative cost and intrusiveness.
But the fundamental reason to reject the scheme is its authoritarianism. It would effectively bring the broadcast media and the internet under the control of a government bureaucracy. Any bureaucracy that is empowered to insist upon “completeness”, “fairness”, and “balance” in the exposition of ideas is thereby empowered to define and enforce its conception of those attributes. It is easy to imagine how a bureaucracy dominated by power-crazed zealots who espouse socialism, gender fluidity, “equity”, etc., etc., would deploy its power.
In an earlier post I said that Cass Sunstein is to the integrity of constitutional law as Pete Rose was to the integrity of baseball. It’s worse than that: Sunstein’s willingness to abuse constitutional law in the advancement of a statist agenda reminds me of Hitler’s abuse of German law to advance his repugnant agenda.
In part 1 and part 2 I addressed Cass Sunstein’s laudatory exposition of FDR’s so-called Second Bill of Rights in two posts at The Volokh Conspiracy. CS continued in that vein with another post, which invoked economist Amartya Sen:
Randy [Barnett] asks whether the Second Bill should be seen as protecting “natural rights.” To say the least, the natural rights tradition has multiple strands; a good contemporary version is elaborated by Amartya Sen (see his Development as Freedom).
Here’s Dr. Sen (the 1998 Nobel laureate in Economics and a professor at Trinity College, Cambridge) to explain what he means by economic freedom (from an online essay entitled “Development as Freedom”):
We … live in a world with remarkable deprivation, destitution, and oppression….
Overcoming these problems is a central part of the exercise of development. We have to recognize the role of different freedoms in countering these afflictions. Indeed, individual agency is, ultimately, central to addressing these deprivations. On the other hand, the freedom of agency that we have is inescapably constrained by our social, political, and economic opportunities. We need to recognize the centrality of individual freedom and the force of social influences on the extent and reach of individual freedom. To counter the problems we face, we have to see individual freedom as a social commitment….
I view the expansion of freedom both as the primary end and as the principal means of development. Development consists of removing various types of unfreedoms that leave people with little choice and little opportunity of exercising their reasoned agency….
Development requires the removal of major sources of unfreedom: poverty as well as tyranny, poor economic opportunities as well as systemic social deprivation, neglect of public facilities as well as intolerance or overactivity of repressive states.
What’s wrong with this picture? Sen, Sunstein, and their ilk — clever arguers, all — equate economic freedom (delivered in this country through make-work jobs, welfare, the minimum wage, social security, subsidized housing, free medical care, legalized extortion of employers through unionization, etc., etc.) with political freedom (or liberty as it’s better known). The two things are incommensurate. Indeed, they are incompatible.
In order for some persons to enjoy the kind of economic freedom envisioned by FDR and his acolytes, government must impose what Sen would call economic unfreedom on other persons, through taxation and regulation. “Robbing Peter to pay Paul” still says it best.
Political freedom (liberty) works the other way around. One person’s political freedom — the freedom to speak out, to publish a newspaper, to cast a vote, and so on — doesn’t diminish another person’s political freedom.
True economic freedom flows from political freedom. True economic freedom encompasses such things as staying in school, studying, and graduating honorably; finding and keeping a job, without paying off a union or invoking “minority” status; starting a business of one’s own and running it freely, without extorting or cheating others; and saving for one’s old age in real investments (not the Social Security Ponzi scheme). These are just a few of the many economic freedoms that government has circumscribed in its typically Orwellian effort to improve us by making us less free.
More importantly, from the Sunstein-Sen point of view, FDR-style economic freedom reduces the range of options available to individuals by significantly diminishing the economy (see “The Bad News about Economic Growth”). If the economy hadn’t been stunted by FDR-style economic freedom, and if FDR-style economic freedom hadn’t discouraged the habit of private charity, the poor, the infirm, the aged, and the various “minority” groups of this land would be far better off than they are today.
The irony of Sen(seless) economics would be amusing if it weren’t tragic.
When last seen (at this blog), Cass Sunstein (CS) was offering a paean to FDR’s so-called Second Bill of Rights, namely the right to be cosseted from cradle to grave at the expense of others.
In a subsequent extrusion at The Volokh Conspiracy, CS talked about “constitutive commitments” — better known as backdoor amendments to the Constitution. He opened with this:
It’s standard to distinguish between constitutional requirements and mere policies. An appropriation for Head Start is a policy, which can be changed however Congress wishes; by contrast, the principle of free speech overrides whatever Congress seeks to do. But there’s something important, rarely unnoticed, and in between — much firmer than mere policies, but falling short of constitutional requirements. These are constitutive commitments. (We’re still talking, or at least not not talking, about FDR’s Second Bill of Rights.)
Constitutive commitments have a special place in the sense that they’re widely accepted and can’t be eliminated without a fundamental change in national understandings…. Current examples include the right to some kind of social security program; the right not to be fired by a private employer because of your skin color or your sex; the right to protection through some kind of antitrust law.
That’s what happens when the Constitution is amended by judicial acquiescence in legislative malfeasance. The national program of social security is blatantly unconstitutional and a ripoff of the first order (see here and here). The “right” not to be fired because of skin color or gender amounts to the “right” to hold a job regardless of competence. The “right” to the “protection” of anti-trust laws (when all we need is enforcement of laws against fraud, deception, and theft) amounts to a license for government to undermine the dynamism of free markets.
CS then reverts to his main theme, which is FDR’s so-called Second Bill of Rights:
[FDR] wasn’t proposing a formal constitutional change; he didn’t want to alter a word of the founding document. He was proposing to identify a set of constitutive commitments. One possible advantage of that strategy is that it avoids a role for federal judges; another possible advantage is that it allows a lot of democratic debate, over time, about what the constitutive commitments specifically entail.
In other words, FDR wanted to amend the Constitution by extra-constitutional means. Instead of avoiding a role for federal judges, however, FDR (and his successors) got their way with the help of a cowed and complaisant Supreme Court.
The past 90 years of governance in the U.S. have shown that leaving the application of constitutional principles to “democratic debate” is like leaving your liquor cabinet unlocked and your car keys on the table when your house is thronged with teen-agers.
Cass Sunstein is to the integrity of constitutional law as Pete Rose was to the integrity of baseball.
Cass Sunstein, with Richard Thaler, launched “libertarian paternalism”. I have much to say about LP and Thaler in “‘Libertarian Paternalism’ Revisited”. This post and the several to follow it will focus on other aspects of Sunstein’s infamous career as a “public intellectual”.
Way back in 2004, when CS was guest-blogging at The Volokh Conspiracy, his maiden effort was “The Greatest Generation“. Here are some relevant passages:
On January 11, 1944, the United States was involved in its longest conflict since the Civil War. The war effort was going well. Victory was no longer in serious doubt. The real question was the nature of the peace. At noon, Roosevelt sent the text of his most ambitious State of the Union address to Congress. Ill with a cold, Roosevelt did not make the customary trip to Capitol Hill to appear in person. Instead he spoke to the nation via radio – the first and only time a State of the Union address was also a Fireside Chat….
Roosevelt began by emphasizing that the “supreme objective for the future” — the objective for all nations — was captured “in one word: Security.” Roosevelt argued that the term “means not only physical security which provides safety from attacks by aggressors,” but includes as well “economic security, social security, moral security.” Roosevelt insisted that “essential to peace is a decent standard of living for all individual men and women and children in all nations. Freedom from fear is eternally linked with freedom from want.”
Roosevelt looked back, and not entirely approvingly, to the framing of the Constitution. At its inception, the nation had grown “under the protection of certain inalienable political rights—among them the right of free speech, free press, free worship, trial by jury, freedom from unreasonable searches and seizures.”
But over time, these rights had proved inadequate. Unlike the Constitution’s framers, “we have come to a clear realization of the fact that true individual freedom cannot exist without economic security and independence.” As Roosevelt saw it, “necessitous men are not free men,” not least because those who are hungry and jobless “are the stuff out of which dictatorships are made.” Recalling the New Deal, he cut to the chase: The nation had “accepted, so to speak, a second Bill of Rights under which a new basis of security and prosperity can be established for all—regardless of station, race, or creed.”…
Having catalogued … eight rights [expansions of the welfare state], Roosevelt said that “we must be prepared to move forward, in the implementation of these rights.” Roosevelt asked “the Congress to explore the means for implementing this economic bill of rights—for it is definitely the responsibility of the Congress to do so.”
… The leader of the Greatest Generation had a distinctive project, running directly from the New Deal to the war on Fascism — a project that he believed to be radically incomplete. We don’t honor him, and we don’t honor those who elected him, if we forget what that project was all about.
I know quite well what that project was all about. It was about turning Americans into wards of the welfare state — not intentionally, but in effect. And there were plenty of contemporary critics who knew what it was all about and tried in vain to warn their countrymen.
I know as much as anyone my age can know about the Great Depression and the fears that it spawned in Americans. My parents and their many siblings were young adults during the Depression, and all of them had to go to work at an early age (when they could find work) because their families were poor. Knowing the members of my parents’ generation as well as I did, I reject the notion that “true individual freedom cannot exist without economic security and independence”. Economic security and independence are always relative matters. I had little economic security when I was 21, but I had plenty of freedom, as did my parents when they were 21. Freedom (in a society that has free political institutions) doesn’t depend on economic security; it depends on inner security (self-reliance) — a trait that many Americans of later generations lack because they have developed the habit of looking to government, instead of themselves, for the solutions to their problems. You are not free if you have sold your soul to the devil in exchange for a bit of gold.
It is fatuous to say that those who are hungry and jobless “are the stuff out of which dictatorships are made.” The United States didn’t become a dictatorship (despite what many Republicans said about FDR). Britain didn’t become a dictatorship, and so on. The notable exceptions (Germany, Russia, Italy, and Japan) arose from other, pre-Depression causes. Nevertheless, FDR finally got his way — posthumously — as Truman, Johnson, and others completed most of the work of the New Deal.
The New Deal was born of fear. FDR succumbed to that fear. Ironically, FDR said it best: “the only thing we have to fear is fear itself — nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance.” It was fear that caused FDR to do exactly the wrong thing. Instead of letting the economy work its way out of the Depression, as it would have sooner than it did under FDR’s “stewardship,” he began the long descent into American socialism by turning the tinkerers loose on the economy. (Most of them were — and still are — lawyers and academics with no real idea about the business of business.) At the same time, he seduced most of the masses into dependence on government. The cycle of power and dependence begun by FDR has only gained strength over the years.
I have owned and managed businesses in the regulatory-welfare state of “economic freedom” that is FDR’s legacy. I’m here to tell you that Americans were made worse off by the New Deal and are being made even worse off by its progeny. That’s FDR’s legacy, and I most decidedly do not want to honor it.
To be continued.
Human beings have been trying for eons to beat back nature and bend it to accommodate and serve their needs. Such efforts have ranged from the use of fire to prevent hypothermia to the development of vaccines to fend off contagious diseases. Other efforts range from the building of dams to collect water and irrigate fields to the cooling of buildings for comfort.
The cooling of buildings exemplifies the kind of endeavor that copes with nature in a way that enables humans to be more productive (e.g., when a cooled building is a factory or office in which workers become more effective than they would be in the absence of cooling). It’s a small step, conceptually, from coping with nature for the sake of enabling productive endeavors to “exploiting” nature for the same purpose (e.g., extraction of oil and iron ore to build and operate machines and automobiles).
So, human beings have in many, many ways not only coped with nature but also transcended it. Some accomplishments (e.g., airborne flight, space flight, robotic interplanetary exploration, and the use of space-borne telescopes) are literally (physically) transcendental.
All of which has given rise to the illusory human conceit — in “advanced” nations, at least — of independence from nature. Nasty encounters with wildlife, tornados, hurricanes, etc., dispel the conceit — but only temporarily.
The conceit of independence from nature feeds into another illusory conceit, namely, that government, with its ability to command resources, is capable of defeating nature and human nature alike. This conceit is not only illusory but also tragic. It is the Achilles heel of human striving. It leads, and has often led, to impoverishment, famine, genocide, and war.
In a recent incarnation, the all-powerful-government conceit caused the birth and spread of a deadly pestilence, namely, COVID-19. As if that weren’t bad enough, “omniscient and omnipotent” governments made things worse by issuing warnings and edicts (masking, distancing, isolation, lockdowns, “mask-shaming”, “vaccination-shaming”, etc.) that needlessly wrought vast social and economic devastation.
To seem to be effective, and thus to retain power, it is the instinct of most office-holders and senior bureaucrats to do something. And doing something, as noted above, can have worse consequences than doing nothing and letting people strive together voluntarily in the service of their own interests. In the case of COVID-19, that is exactly what should have been done.
You have probably read recent reports about how the draconian approach taken by U.S. officials was extremely counterproductive. Here are some relevant excerpts from a Washington Monthly article:
While most countries imposed draconian restrictions, there was an exception: Sweden. Early in the pandemic, Swedish schools and offices closed briefly but then reopened. Restaurants never closed. Businesses stayed open. Kids under 16 went to school.
That stood in contrast to the U.S. By April 2020, the CDC and the National Institutes of Health recommended far-reaching lockdowns that threw millions of Americans out of work. A kind of groupthink set in. In print and on social media, colleagues attacked experts who advocated a less draconian approach. Some received obscene emails and death threats. Within the scientific community, opposition to the dominant narrative was castigated and censored, cutting off what should have been vigorous debate and analysis.
In this intolerant atmosphere, Sweden’s “light touch,” as it is often referred to by scientists and policy makers, was deemed a disaster. “Sweden Has Become the World’s Cautionary Tale,” carped The New York Times. Reuters reported, “Sweden’s COVID Infections Among Highest in Europe, With ‘No Sign Of Decrease.’” Medical journals published equally damning reports of Sweden’s folly.
But Sweden seems to have been right. Countries that took the severe route to stem the virus might want to look at the evidence found in a little-known 2021 report by the Kaiser Family Foundation. The researchers found that among 11 wealthy peer nations, Sweden was the only one with no excess mortality among individuals under 75. None, zero, zip.
That’s not to say that Sweden had no deaths from COVID. It did. But it appears to have avoided the collateral damage that lockdowns wreaked in other countries. The Kaiser study wisely looked at excess mortality, rather than the more commonly used metric of COVID deaths. This means that researchers examined mortality rates from all causes of death in the 11 countries before the pandemic and compared those rates to mortality from all causes during the pandemic. If a country averaged 1 million deaths per year before the pandemic but had 1.3 million deaths in 2020, excess mortality would be 30 percent….
The Kaiser results might seem surprising, but other data have confirmed them. As of February, Our World in Data, a database maintained by the University of Oxford, shows that Sweden continues to have low excess mortality, now slightly lower than Germany, which had strict lockdowns. Another study found no increased mortality in Sweden in those under 70. Most recently, a Swedish commission evaluating the country’s pandemic response determined that although it was slow to protect the elderly and others at heightened risk from COVID in the initial stages, its laissez-faire approach was broadly correct….
One of the most pernicious effects of lockdowns was the loss of social support, which contributed to a dramatic rise in deaths related to alcohol and drug abuse. According to a recent report in the medical journal JAMA, even before the pandemic such “deaths of despair” were already high and rising rapidly in the U.S., but not in other industrialized countries. Lockdowns sent those numbers soaring.
The U.S. response to COVID was the worst of both worlds. Shutting down businesses and closing everything from gyms to nightclubs shielded younger Americans at low risk of COVID but did little to protect the vulnerable. School closures meant chaos for kids and stymied their learning and social development. These effects are widely considered so devastating that they will linger for years to come. While the U.S. was shutting down schools to protect kids, Swedish children were safe even with school doors wide open. According to a 2021 research letter, there wasn’t a single COVID death among Swedish children, despite schools remaining open for children under 16….
Of the potential years of life lost in the U.S., 30 percent were among Blacks and another 31 percent were among Hispanics; both rates are far higher than the demographics’ share of the population. Lockdowns were especially hard on young workers and their families. According to the Kaiser report, among those who died in 2020, people lost an average of 14 years of life in the U.S. versus eight years lost in peer countries. In other words, the young were more likely to die in the U.S. than in other countries, and many of those deaths were likely due to lockdowns rather than COVID.
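The Kaiser metric is worth pausing over, because its arithmetic is simple. Here is a minimal sketch in Python, using only the illustrative numbers from the excerpt above (a 1.0-million-death baseline and 1.3 million deaths in the pandemic year); the figures are the article's hypothetical example, not actual data for any country.

```python
# A minimal sketch of the excess-mortality arithmetic described in the excerpt.
# The numbers are the article's illustrative ones, not real data.

def excess_mortality_pct(baseline_deaths: float, observed_deaths: float) -> float:
    """Excess mortality as a percentage of the pre-pandemic baseline."""
    return (observed_deaths - baseline_deaths) / baseline_deaths * 100.0

baseline = 1_000_000   # average annual deaths before the pandemic (illustrative)
observed = 1_300_000   # all-cause deaths during the pandemic year (illustrative)
print(f"Excess mortality: {excess_mortality_pct(baseline, observed):.0f}%")  # -> 30%
```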
And that isn’t all. There’s also this working paper from the National Bureau of Economic Research, which concludes:
The first estimates of the effects of COVID-19 on the number of business owners from nationally representative April 2020 CPS data indicate dramatic early-stage reductions in small business activity. The number of active business owners in the United States plunged from 15.0 million to 11.7 million over the crucial two-month window from February to April 2020. No other one-, two- or even 12-month window of time has ever shown such a large change in business activity. For comparison, from the start to end of the Great Recession the number of business owners decreased by 730,000 representing only a 5 percent reduction. In general, business ownership is relatively steady over the business cycle (Fairlie 2013; Parker 2018). The loss of 3.3 million business owners (or 22 percent) was comprised of large drops in important subgroups such as owners working roughly two days per week (28 percent), owners working four days a week (31 percent), and incorporated businesses (20 percent).
And that was two years ago, early in the panic, before it had spawned the full, destructive tsunami of draconian measures.
Such measures — in addition to being socially and economically destructive — made the pandemic worse by creating the conditions for the evolution of more contagious strains of the coronavirus. If the first stage of the coronavirus had been allowed to run rampant, herd immunity would have been achieved. The most vulnerable among us would have died or suffered at length before recovering (and then, perhaps, only partially). But that would have happened in any case.
Widespread exposure to the disease would have meant the natural immunization of most of the populace through exposure to the coronavirus and the development of antibodies through that exposure — which, for most of the populace, isn’t lethal or debilitating.
In the end, millions of people will have been made poorer and deprived of beneficial human interactions, and many will have suffered and died needlessly, because politicians and bureaucrats couldn’t (and can’t) resist the urge to do something — especially when something means trying to conquer nature and suppress human nature.
Related Reading:
“Great Barrington Declaration”
Brendan O’Neill, “The Truth about COVID McCarthyism”, Spiked, December 19, 2022
Scott W. Atlas, “Sins Against Children”, The New Criterion (Dispatch Blog), January 4, 2023
Tom Jefferson et al., “Physical Interventions to Interrupt or Reduce the Spread of Respiratory Viruses”, Cochrane Library, January 30, 2023. From the authors’ conclusions:
There is uncertainty about the effects of face masks. The low to moderate certainty of evidence means our confidence in the effect estimate is limited, and that the true effect may be different from the observed estimate of the effect. The pooled results of RCTs [randomized controlled trials] did not show a clear reduction in respiratory viral infection with the use of medical/surgical masks. There were no clear differences between the use of medical/surgical masks compared with N95/P2 respirators in healthcare workers when used in routine care to reduce respiratory viral infection. Hand hygiene is likely to modestly reduce the burden of respiratory illness, and although this effect was also present when ILI [influenza-like illness] and laboratory‐confirmed influenza were analysed separately, it was not found to be a significant difference for the latter two outcomes. Harms associated with physical interventions were under‐investigated.
I became aware of “libertarian paternalism” in 2005, when I read this dismissive post by Don Boudreaux at Cafe Hayek. From 2005 to 2019, I wrote more than a dozen posts about “libertarian paternalism”, which Wikipedia calls
the idea that it is both possible and legitimate for private and public institutions to affect behavior while also respecting freedom of choice, as well as the implementation of that idea. The term was coined by behavioral economist Richard Thaler and legal scholar Cass Sunstein in a 2003 article in the American Economic Review.
This post reprises the key points of my earlier ones. I give more attention here to Thaler than to Sunstein. I will devote a separate post to Sunstein because he is (or was) truly dangerous to liberty on account of ideas other than “libertarian paternalism”.
Thaler and Sunstein say that “libertarian paternalism” (sneer quotes removed, but implied, hereinafter) is intended to help individuals make better decisions by having corporations and governments shape choices more artfully. Zimran Ahmed, the blogger behind Winterspeak, defended the concept because he
spoke to Thaler about this and read the monograph he [Thaler] wrote with Sunstein.
“Libertarian Paternalism” is noting that people often just take whatever default choice is offered and therefore working hard to come up with good default choices. This does not limit choice because you don’t need to stick with the default. But since *something* has to be the default, you might as well put effort into making it something good.
I don’t think it’s quite that easy to defend libertarian paternalism, which strikes me as another paving brick on the road to hell.
Consider an example that’s used to explain libertarian paternalism (and which will recur at later points in this post). Some workers choose “irrationally” when they decline to sign up for an employer’s 401(k) plan. The paternalists characterize the “do not join” option as the default option. (In my experience, there is no default option: An employee must make a deliberate choice between joining a 401(k) and not joining it.) To help employees make the “right” choice, libertarian paternalists would find a way to herd employees into 401(k) plans (perhaps by law). In one variant of this bit of paternalism, an employee is automatically enrolled in a 401(k) and isn’t allowed to opt out for some months, by which time he or she has become used to the idea of being enrolled and declines to opt out.
The underlying notion is that people don’t always choose what’s “best” for themselves. Best according to whom? According to libertarian paternalists, of course, who tend to equate “best” with wealth maximization. They simply disregard or dismiss the truly rational preferences of those who must live with the consequences of their decisions. Richard Thaler may want you to save your money when you’re only 22, but you may have other, more urgent, things to do with your money, such as paying off a college loan while affording a decent place to live and buying a car that gets you to work faster than riding a bus.
Libertarian paternalism incorporates two fallacies. One is what I call the “rationality fallacy”; the other is the fallacy of centralized planning.
As for the rationality fallacy, I once wrote this:
There is simply a lot more to maximizing satisfaction than maximizing wealth. That’s why some people choose to have a lot of children, when doing so obviously reduces the amount they can save. That’s why some choose to retire early rather than stay in stressful jobs. Rationality and wealth maximization are two very different things, but a lot of laypersons and too many economists are guilty of equating them.
Nevertheless, many economists (like Thaler) do equate rationality and wealth maximization, which leads them to propose schemes for forcing people to act more “rationally”. Such schemes, of course, are nothing more than centralized planning, dreamt up by self-anointed wise men who seek to impose their preferences on the rest of us. As I said in a different connection,
The problem with [rules aimed at shaping economic behavior] is that someone outside the system must make the rules to be followed by those inside the system.
And that’s precisely where [central] planning and regulation always fail. At some point not very far down the road, the rules will not yield the outcomes that spontaneous behavior would yield. Why? Because better rules cannot emerge spontaneously from rule-driven behavior….
Of course, the whole point … is to produce outcomes that are desired by planners.
And to hell with what the individual thinks is in his or her own best interest.
Libertarian paternalism consists of paternalism and a rather subtle form of socialism. There’s no libertarianism in it, no matter what its proponents may say.
As Michael Munger put it in an essay at The Library of Economics and Liberty,
The boundary we fight over today divides what is decided collectively for all of us from what is decided by each of us. You might think of it as a property line, dividing what is mine from what is ours. And all along that property line is a contested frontier in a war of ideas and rhetoric.
For political decisions, “good” simply means what most people think is good, and everyone has to accept the same thing. In markets, the good is decided by individuals, and we each get what we choose. This matters more than you might think. I don’t just mean that in markets you need money and in politics you need good hair and an entourage. Rather, the very nature of choices, and who chooses, is different in the two settings. P.J. O’Rourke has a nice illustration of the way that democracies choose.
Imagine if all of life were determined by majority rule. Every meal would be a pizza. Every pair of pants, even those in a Brooks Brothers suit, would be stone-washed denim. Celebrity diets and exercise books would be the only thing on the shelves at the library. And—since women are a majority of the population, we’d all be married to Mel Gibson. (Parliament of Whores, 1991, p. 5).
O’Rourke was writing in 1991. Today, we might all be married to Ashton Kutcher, instead. But you get the idea: Politics makes the middle the master. The average person chooses not just for herself, but for everyone else, too. . . .
The thing to keep in mind is that market processes, working through diverse private choice and individual responsibility, are a social choice process at least as powerful as voting. And markets are often more accurate in delivering not just satisfaction, but safety. We simply don’t recognize the power of the market’s commands on our behalf. As Ludwig von Mises famously said, in Liberty and Property, “The market process is a daily repeated plebiscite, and it ejects inevitably from the ranks of profitable people those who do not employ their property according to the orders given by the public.”
Paternalism — when it is sponsored or enforced by government — deprives people of the ability to think for themselves, to benefit from their wise decisions, and to learn from their mistakes.
Bryan Caplan came at the same point from a different angle. Regarding Thaler and Sunstein’s proposal to help consumers make “rational” choices about mortgages, Caplan observed that
government long ago took up the burden of helping consumers, and the result is a mess.
The problem with behavioral economics is that it’s more sophisticated than standard econ, but not nearly sophisticated enough. Thaler and Sunstein may have a more realistic view of borrowers than the average economist, but they have an even less realistic view of the political process. As I argue in The Myth of the Rational Voter:
Before we emphasize the benefits of government intervention, let us distinguish intervention designed by a well-intentioned economist from intervention that appeals to noneconomists, and reflect that the latter predominate. You do not have to be dogmatic to take a staunchly promarket position. You just have to notice that the “sophisticated” emphasis on the benefits of intervention mistakes theoretical possibility for empirical likelihood.
Additional regulation of mortgages isn’t going to help real human beings cope with complexity. Democracy already gave us a pile of inane “pro-consumer” regulation, and reform will probably just give us more of the same.
So what would I recommend? Abandon the vain effort to protect consumers from themselves, and switch to a message simple enough for real humans to understand:
1. You’re an adult; if you screw up it’s your problem.
2. If you’re baffled by the complexities of mortgage markets (or anything else), stick with the simple, standard options that you actually understand.
Glen Whitman weighed in with two scathing posts at Agoraphilia. In the first of the two posts, Whitman wrote:
[Thaler] continues to disregard the distinction between public and private action.
Some critics contend that behavioral economists have neglected the obvious fact that bureaucrats make errors, too. But this misses the point. After all, wouldn’t you prefer to have a qualified, albeit human, technician inspect your aircraft’s engines rather than do it yourself?
The owners of ski resorts hire experts who have previously skied the runs, under various conditions, to decide which trails should be designated for advanced skiers. These experts know more than a newcomer to the mountain. Bureaucrats are human, too, but they can also hire experts and conduct research.

Here we see two of Thaler’s favorite stratagems deployed at once. First, he relies on a deceptively innocuous, private, and non-coercive example to illustrate his brand of paternalism. Before it was cafeteria dessert placement; now it’s ski-slope markings. Second, he subtly equates private and public decision makers without even mentioning their different incentives. In this case, he uses “bureaucrats” to refer to all managers, regardless of whether they manage private or public enterprises.
The distinction matters. The case of ski-slope markings is the market principle at work. Skiers want to know the difficulty of slopes, and so the owners of ski resorts provide it. They have a profit incentive to do so. This is not at all coercive, and it is no more “paternalist” than a restaurant identifying the vegetarian dishes.
Public bureaucrats don’t have the same incentives at all. They don’t get punished by consumers for failing to provide information, or for providing the wrong information. They don’t suffer if they listen to the wrong experts. They face no competition from alternative providers of their service. They get to set their own standards for “success,” and if they fail, they can use that to justify a larger budget.
And Thaler knows this, because these are precisely the arguments made by the “critics” to whom he is responding. His response is just a dodge, enabled by his facile use of language and his continuing indifference – dare I say hostility? – to the distinction between public and private.
In the second of the two posts, Whitman said:
The advocates of libertarian paternalism have taken great pains to present their position as one that does not foreclose choice, and indeed even adds choice. But this is entirely a matter of presentation. They always begin with non-coercive and privately adopted measures, such as the ski-slope markings in Thaler’s NY Times article. And when challenged, they resolutely stick to these innocuous examples (see this debate between Thaler and Mario Rizzo, for example). But if you read Sunstein & Thaler’s actual publications carefully, you will find that they go far beyond non-coercive and private measures. They consciously construct a spectrum of “libertarian paternalist” policies, and at one end of this spectrum lies an absolute ban on certain activities, such as motorcycling without a helmet. I’m not making this up!…
[A]s Sunstein & Thaler’s published work clearly indicates, this kind of policy [requiring banks to offer “plain vanilla” mortgages] is the thin end of the wedge. The next step, as outlined in their articles, is to raise the cost of choosing other options. In this case, the government could impose more and more onerous requirements for opting out of the “plain vanilla” mortgage: you must fill out extra paperwork, you must get an outside accountant, you must have a lawyer present, you must endure a waiting period, etc., etc. Again, this is not my paranoid imagination at work. S&T have said explicitly that restrictions like these would count as “libertarian paternalism” by their definition….
The problem is that S&T’s “libertarian paternalism” is used almost exclusively to advocate greater intervention, not less. I have never, for instance, seen S&T push for privatization of Social Security or vouchers in education. I have never seen them advocate repealing a blanket smoking ban and replacing it with a special licensing system for restaurants that want to allow their customers to smoke. If they have, I would love to see it.
In their articles, S&T pay lip service to the idea that libertarian paternalism lies between hard paternalism and laissez faire, and thus that it could in principle be used to expand choice. But look at the actual list of policies they’ve advocated on libertarian paternalist grounds, and see where their real priorities lie.
S&T are typical “intellectuals,” in that they presume to know how others should lead their lives — a distinctly non-libertarian attitude. It is, in fact, a hallmark of modern “liberalism” (i.e., authoritarian leftism). Elsewhere, I had this to say about the founders of modern “liberalism” — John Stuart Mill, Thomas Hill Green, and Leonard Trelawney Hobhouse:
[W]e are met with (presumably) intelligent persons who believe that their intelligence enables them to peer into the souls of others, and to raise them up through the blunt instrument that is the state.
And that is precisely the mistake that lies at the heart of what we now call “liberalism” or “progressivism.” It is the three-fold habit of setting oneself up as an omniscient arbiter of economic and social outcomes, then castigating the motives and accomplishments of the financially successful and socially “well placed,” and finally penalizing financial and social success through taxation and other regulatory mechanisms (e.g., affirmative action, admission quotas, speech codes, “hate crime” legislation). It is a habit that has harmed the intended beneficiaries of government intervention, not just economically but in other ways, as well….
The other ways, of course, include the diminution of social liberty, which is indivisible from economic liberty.
As I have said, Thaler’s idea of rational behavior seems to be behavior that maximizes one’s wealth. The surest route to wealth-maximization — for the Thalers of this world — is to evaluate alternative courses of action by discounting projected streams of revenues (income) or costs (expenses). Consider the following passage from an old paper of Thaler’s:
A discount rate is simply a shorthand way of defining a firm’s, organization’s, or person’s time value of money. This rate is always determined by opportunity costs. Opportunity costs, in turn, depend on circumstances. Consider the following example: An organization must choose between two projects which yield equal effectiveness (or profits in the case of a firm). Project A will cost $200 this year and nothing thereafter. Project B will cost $205 next year and nothing before or after. Notice that if project B is selected the organization will have an extra $200 to use for a year. Whether project B is preferred simply depends on whether it is worth $5 to the organization to have those $200 to use for a year. That, in turn, depends on what the organization would do with the money. If the money would just sit around for the year, its time value is zero and project A should be chosen. However, if the money were put in a 5 percent savings account, it would earn $10 in the year and thus the organization would gain $5 by selecting project B. (Center for Naval Analyses, “Discounting and Fiscal Constraints: Why Discounting is Always Right,” Professional Paper 257, August 1979, pp. 1-2)
More generally, the preferred alternative — among alternatives conferring equal benefits (effectiveness, output, utility, satisfaction) — is the one whose cost stream has the lowest present value:
the value on a given date of a future payment or series of future payments, discounted to reflect the time value of money and other factors such as investment risk.
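To make the present-value comparison concrete, here is a minimal sketch in Python. It is my illustration, not Thaler’s code; it simply discounts the cost streams of his projects A and B at two rates, and it reproduces his conclusion that a zero time value of money favors A while a 5 percent rate favors B.

```python
# A minimal sketch (an illustration, not taken from Thaler's paper) of the
# present-value comparison: project A costs $200 in year 1, project B costs
# $205 in year 2, and the preferred project is the one with the lower PV.

def present_value(cost_stream, rate):
    """Present value of a stream of costs; cost_stream[0] is year 1 (undiscounted)."""
    return sum(cost / (1 + rate) ** year for year, cost in enumerate(cost_stream))

project_a = [200, 0]    # $200 in year 1, nothing thereafter
project_b = [0, 205]    # $205 in year 2, nothing before or after

for rate in (0.00, 0.05):
    pv_a = present_value(project_a, rate)
    pv_b = present_value(project_b, rate)
    choice = "A" if pv_a < pv_b else "B"
    print(f"rate={rate:.0%}: PV(A)=${pv_a:.2f}, PV(B)=${pv_b:.2f} -> choose {choice}")
# rate=0%: PV(A)=$200.00, PV(B)=$205.00 -> choose A
# rate=5%: PV(A)=$200.00, PV(B)=$195.24 -> choose B
```

Note that the preferred project flips with the assumed rate, which is why, as I argue below, the choice of a discount rate is anything but a trivial matter.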
It is my view that economists seize on discounting as a way of evaluating options because it is a trivial exercise to compute the present value of a stream of outlays (or receipts). I should say that discounting seems like a trivial exercise because the difficult tasks — choosing a time horizon, choosing a discount rate, and translating outlays into future benefits — are assumed away.
Consider the choices facing a government decision-maker. In Thaler’s simplified version of reality, a government decision-maker (manager) faces a choice between two projects that (ostensibly) would deliver equal benefits (effectiveness, output), even though their costs would be incurred at different times. Specifically, the manager must choose between project A, at a cost of $200 in year 1, and equally-effective project B, at a cost of $205 in year 2. Thaler claims that the manager can choose between the two projects by discounting their costs:
A [government] manager . . . cannot earn bank interest on funds withheld for a year. . . . However, there will generally exist other ways for the manager to “invest” funds which are available. Examples include cost-saving expenditures, conservation measures, and preventive maintenance. These kinds of expenditures, if they have positive rates of return, permit a manager to invest money just as if he were putting the money in a savings account.
. . . Suppose a thorough analysis of cost-saving alternatives reveals that [in year 2] a maintenance project will be required at a cost of $215. Call this project D. Alternatively the project can be done [in year 1] (at the same level of effectiveness) for only $200. Call this project C. All of the options are displayed in table 1.
(op. cit., pp. 3-4)
Thaler believes that his example clinches the argument for discounting because the choice of project B (an expenditure of $205 in year 2) enables the manager to undertake project C in year 1, and thereby to “save” $10 in year 2. But Thaler’s “proof” is deeply flawed:
If a maintenance project is undertaken in year 1, it will pay off sooner than if it is undertaken in year 2 but, by the same token, its benefits will diminish sooner than if it is undertaken in year 2.
More generally, different projects cannot, by definition, be equally effective. Projects A and B may be about equally effective by a particular measure of effectiveness, but because they are different things they will differ in other respects, and those differences could be crucial in choosing between A and B.
Specifically, projects A and B might be equally effective when compared quantitatively in the context of an abstract scenario, but A might be more effective in an unquantifiable but crucial respect. For example, the earlier expenditure on A might be viewed by a potential enemy as a more compelling deterrent than the later expenditure on B because it would demonstrate more clearly the government’s willingness and ability to mount a strong defense against the potential enemy.
The “correct” discount rate depends on the options available to a particular manager of a particular government activity. Yet Thaler insists on the application of a uniform discount rate by all government managers (op. cit., p. 6). By Thaler’s own example, such a practice could lead a manager to choose the wrong option.
For a decision to rest on the use of a particular discount rate, there must be great certainty about the future costs and benefits of alternative courses of action. But there seldom is. The practice of discounting therefore promotes an illusion of certainty — a potentially dangerous illusion, in the case of national defense.
The fundamental problem is that Thaler presumes to place himself in the position of the decision-maker. But every decision-maker — from a senior government executive to a young person starting his first job — has a unique set of objectives, options, uncertainties, and risk preferences. Because Thaler cannot locate himself in a decision-maker’s unique situation, he can exercise his penchant for arrogance only by insisting that each and every decision-maker adhere to a simplistic rule of thumb — one that obtains results favored by Thaler.
In the context of personal decision-making — which is the focal point of libertarian paternalism — the act of discounting serves wealth-maximization (a favored paternalistic objective). But, as I have said,
[t]here is simply a lot more to maximizing satisfaction than maximizing wealth. That’s why some people choose to have a lot of children, when doing so obviously reduces the amount they can save. That’s why some choose to retire early rather than stay in stressful jobs. Rationality and wealth maximization are two very different things, but a lot of laypersons and too many economists are guilty of equating them.
Thaler popped up in the April 2010 edition of Cato Unbound, “Slippery Slopes and the New Paternalism”, which was about “libertarian paternalism” and whether it deserved to be called libertarian. Thaler was a key contributor to the colloquy and a fierce defender of his ideas, which have had their fullest exposition in Nudge: Improving Decisions About Health, Wealth, and Happiness.
Thaler’s method of defending his position was to insist, repeatedly, that it is “libertarian”, even as he oozed paternalism. Consider one of his entries (“The Argument Clinic”) in the colloquium, where he wrote the following:
[S]ince the word paternalism is what seems to give [the colloquium’s lead essayist Glen] Whitman fits, let’s re-label our policy “Best Guess”. “Best Guess” is the policy of choosing the choice architecture that is your best guess of what the participants would choose for themselves if they had the time and expertise to make an informed choice.
If that isn’t pure, presumptive arrogance, I don’t know what is. It conveys a presumption of omniscience on the part of the “best guesser”, along with a presumption that the “best guesser” ought to be making decisions for others.
Here’s another passage from Thaler’s post:
In many domains we [paternalists] can drastically improve on what is customary. Consider organ donations. In most states in the United States, to make your donation available you have to take some action such as sign the back of your driver’s license and get two witnesses to sign it. In some countries such as Spain they have switched to an “opt out” system called presumed consent. In Nudge we endorse a third approach, in this domain called “mandated choice.” It also happens to be used in my home state of Illinois.
Under this plan, when you go in to get your drivers license picture retaken every few years, you are asked whether you want to be a donor or not. You must say yes or no to get a license. About two thirds of drivers are saying yes, and lots of lives will be prolonged as a result. This is a great example of libertarian Best Guess in action. Although a large majority of people say in polls that they would want their organs harvested, many never get around to opting in, and a vocal minority in the United States object strenuously to the idea of presumed consent. So it is worthwhile to find a policy that gets many of the benefits of presumed consent while honoring the preferences of those who object to having to opt out. Mandated choice has some other advantages in this context, namely that families are less likely to overrule the choices of the donor if that choice has been made actively rather than passively.
Let me count the assumptions: (1) Organ donation is the government’s business. (2) The government should deny a driver’s license to a person who does not wish to say whether or not he wishes to be an organ donor. (3) This oppression of an individual is justified by the supposed fact that “a large majority of people . . . say that they would want their organs harvested.” Why give the government yet another excuse to intrude into private matters? The obvious answer to that question is that Thaler can’t resist the urge to lead others toward the decisions that he wants them to make. If, when you renew your driver’s license, you’re asked if you want to be an organ donor, your likely (politically correct) response is to say “yes,” even if you don’t really want to be an organ donor. This is not freedom of choice; it is subtle coercion.
Thaler stretches hard to discredit Whitman’s objections to libertarian paternalism; for example:
One of the examples we discuss in Nudge is an innovation by the city of Chicago on a dangerous curve on Lake Shore Drive. The city painted horizontal lines across the road that get closer and closer together as the driver approaches the apex of the curve. As we recently posted on our blog, this innovation has reduced accidents by 36%. Does Whitman think this is bad because it was implemented by the government? Should only private toll roads be allowed to think creatively? And notice that the “customary” signage in this location, which included a reduction in the speed limit to 20 mph, was less effective than the nudge.
What does this have to do with the subject at hand? The government of Chicago is already in place as the paternalistic provider of Chicago’s streets — having usurped voluntary private decisions about the placement, construction, upkeep, and regulation of those streets. Given that the government is the provider of Chicago’s streets, it has assumed the duty of making those streets “safe” for their users, without the benefit of market feedback about users’ preferences as to the tradeoff between safety and other attributes (e.g., speed). The government merely adopted an innovation (the horizontal lines), which replaced (or supplemented) another innovation (a speed-limit sign). Horizontal lines are no more or less paternalistic than speed-limit signs, merely different in their effectiveness along one dimension of street-users’ preferences.
In the examples that I have given, Thaler simply assumes that government is “the answer”. Instead of arguing that decisions are best made by private, voluntary actors, he too readily accepts the role of government and, instead, seeks ways to embed it more deeply in citizens’ lives by making it seem more effective. That is one path down the slippery slope toward serfdom — a slope that Thaler denies, even as he pours intellectual lubricant on it.
Thaler’s invocation of the Lake Shore Drive innovation is especially revealing. Only a hardened paternalist would stretch so far (and fail) to find something non-paternalistic about one of America’s most paternalistic — and fallible — institutions: the government of Chicago.
Thaler stepped into it again in this NYT article, where he wrote this:
Want to give affluent households a present worth $700 billion over the next decade? In a period of high unemployment and fiscal austerity, this idea may seem laughable. Amazingly, though, it is getting traction in Washington.
I am referring, of course, to the current debate about whether to extend all, or just some, of the tax cuts of President George W. Bush, cuts that are due to expire at year-end. They’re expiring because the only way they could be enacted initially was by pretending that they were temporary….
There is another possible argument for including the rich in these tax cuts, one based on “fairness.” By this reasoning, the wealthy are entitled to low tax rates because they have temporarily had them, and it would now be unfair to take them back.
But by that same argument, unemployment insurance should never expire, and every day should be your birthday. “Temporary” has no meaning if it bestows a permanent right.
By Thaler’s convoluted logic, the money one earns is a gift from government, and those who pay taxes have no greater claim on their own money than those to whom the government hands it. How is this “libertarian,” by any reasonable interpretation of that word?
Then there is Thaler’s defense of the individual mandate that was at the heart of Obamacare. Thaler attacked the “slippery slope” argument against the mandate. Ammon Simon nailed Thaler:
Richard Thaler’s NYT piece from a few days ago, Slippery-Slope Logic, Applied to Health Care, takes conservatives to task for relying on a “slippery slope” fallacy to argue that Obamacare’s individual mandate should be invalidated. Thaler believes that the hypothetical broccoli mandate — used by opponents of Obamacare to show that upholding the mandate would require the Court to acknowledge congressional authority to do all sorts of other things — would never be adopted by Congress or upheld by a federal court. This simplistic view of the Obamacare litigation obscures legitimate concerns over the amount of power that the Obama administration is claiming for the federal government. It also ignores the way creative judges can use previous cases as building blocks to justify outcomes that were perhaps unimaginable when those building blocks were initially formed….
[N]ot all slippery-slope claims are fallacious. The Supreme Court’s decisions are often informed by precedent, and, as every law student learned when studying the Court’s privacy cases, a decision today could be used by a judge ten years from now to justify outcomes no one had in mind.
In 1965, the Supreme Court in Griswold v. Connecticut, referencing penumbras and emanations, recognized a right to privacy in marriage that mandated striking down an anti-contraception law.
Seven years later, in Eisenstadt v. Baird, this right expanded to individual privacy, because after all, a marriage is made of individuals, and “[i]f the right of privacy means anything, it is the right of the individual . . . to be free from unwarranted governmental intrusion into matters so fundamentally affecting a person as the decision whether to bear or beget a child.”
By 1973 in Roe v. Wade, this precedent, which had started out as a right recognized in marriage, had mutated into a right to abortion that no one could really trace to any specific textual provision in the Constitution. Slippery slope anyone?
This also happened in Lawrence v. Texas in 2003, where the Supreme Court struck down an anti-sodomy law. The Court explained that the case did not involve gay marriage, and Justice O’Connor’s concurrence went further, distinguishing gay marriage from the case at hand. Despite those pronouncements, later decisions enshrining gay marriage as a constitutionally protected right have relied upon Lawrence. For instance, Goodridge v. Department of Public Health (Mass. 2003) cited Lawrence 9 times, Varnum v. Brien (Iowa 2009) cited Lawrence 4 times, and Perry v. Brown (N.D. Cal, 2010) cited Lawrence 9 times.
However the Court ultimately rules, there is no question that this case will serve as a major inflection point in our nation’s debate about the size and scope of the federal government. I hope it serves to clarify the limits on congressional power, and not as another stepping stone on the path away from limited, constitutional government. (“The Supreme Court’s Slippery Slope,” National Review Online, May 17, 2012)
Simon could have mentioned Wickard v. Filburn (1942), in which the Supreme Court brought purely private, intrastate activity within the reach of Congress’s power to regulate interstate commerce. The downward slope from Wickard v. Filburn to today’s intrusive regulatory regime has been not merely slippery but precipitous. And Chief Justice John Roberts did a great disservice to liberty by upholding the individual mandate. Perhaps he was operating under the influence of Thaler.
Next up is Thaler’s book, Misbehaving: The Making of Behavioral Economics, from which he drew “Unless You Are Spock, Irrelevant Things Matter in Economic Behavior” (The New York Times, May 8, 2015). The article displays three of Thaler’s pet tricks:
He misrepresents classical microeconomics.
He assumes (implicitly) that everyone should make economic decisions from an omniscient, end-of-life perspective.
He substitutes his economic desiderata for the free choices of millions of persons.
Regarding Thaler’s misrepresentation of classical microeconomics, consider these passages from his article:
Economists [who adhere to traditional microeconomic theory] discount any factors that would not influence the thinking of a rational person. These things are supposedly irrelevant. But unfortunately for the theory, many supposedly irrelevant factors do matter.
Economists create this problem with their insistence on studying mythical creatures often known as Homo economicus. I prefer to call them “Econs”— highly intelligent beings that are capable of making the most complex of calculations but are totally lacking in emotions. Think of Mr. Spock in “Star Trek.” In a world of Econs, many things would in fact be irrelevant.
No Econ would buy a larger portion of whatever will be served for dinner on Tuesday because he happens to be hungry when shopping on Sunday. Your hunger on Sunday should be irrelevant in choosing the size of your meal for Tuesday. An Econ would not finish that huge meal on Tuesday, even though he is no longer hungry, just because he had paid for it. To an Econ, the price paid for an item in the past is not relevant in making the decision about how much of it to eat now.
An Econ would not expect a gift on the day of the year in which she happened to get married, or be born. What difference do these arbitrary dates make?…
Of course, most economists know that the people with whom they interact do not resemble Econs. In fact, in private moments, economists are often happy to admit that most of the people they know are clueless about economic matters. But for decades, this realization did not affect the way most economists did their work. They had a justification: markets. To defenders of economics orthodoxy, markets are thought to have magic powers.
This reads more like the confession of an Econ than an accurate description of the principles of microeconomics. Even in those benighted days when I learned the principles of “micro” — just a few years ahead of Thaler — it was understood that the assumption of rationality was an approximation of the tendency of individuals to try to make themselves better off by making choices that would do so, given their tastes and preferences and the information that they possess at the time or could obtain at a cost commensurate with the value of the decision at hand.
Yes, there are Econs, but they’re usually economists who also know full well that the mass of people don’t behave like Econs (as Thaler admits), and for whom the postulate of utter rationality is, as I’ve suggested, shorthand for an imprecise tendency. The fact that most human beings aren’t Econs doesn’t vitiate the essential truth of the traditional theory of choice. What seems to bother Thaler is that most people aren’t Econs; their tastes and preferences seem irrational to him, and it’s his (self-appointed) role in life to force them to make “correct” decisions (i.e., the decisions he would make).
I’ll say more about that. But I can’t let Thaler’s views about markets pass without comment. He continued with this:
There is a version of this magic market argument that I call the invisible hand wave…. Words and phrases such as high stakes, learning and arbitrage are thrown around to suggest some of the ways that markets can do their magic, but it is my claim that no one has ever finished making the argument with both hands remaining still.
Hand waving is required because there is nothing in the workings of markets that turns otherwise normal human beings into Econs. For example, if you choose the wrong career, select the wrong mortgage or fail to save for retirement, markets do not correct those failings. In fact, quite the opposite often happens. It is much easier to make money by catering to consumers’ biases than by trying to correct them.
This is a perverted description of the role of markets. And it betrays the peculiar vantage point from which Thaler views economic decision-making. Markets provide information, much of which reflects decisions already made by others. Markets, in other words, enable persons who are contemplating decisions to learn from the decisions of others — whether those others view their decisions as bad, good, or indifferent. But it’s up to persons who are contemplating decisions to take advantage of the information provided by markets.
Moreover, markets don’t merely “cater to consumers’ biases”. Markets enable businesses to shape consumers’ tastes and preferences by presenting them with information about the availability and advantages of their products and services. Markets transmit information in two directions, not just from consumers to producers.
What about people who make “bad” choices, such as choosing the “wrong” career, selecting the “wrong” mortgage, or failing to save for retirement? That’s Thaler the Nudge talking. He wants to save people from such fates. While he’s at it, perhaps he can also save them from choosing the wrong spouse or the wrong number of children.
I say that because when Thaler writes about “wrong” choices in such matters, he writes as if people can and should make their minute-by-minute, hour-by-hour, day-by-day, week-by-week, and year-by-year decisions by reckoning (like an Econ) how those decisions will affect their “score” when they reach the finish line of life, or some other arbitrary point in time. What about all those points in between? Don’t they count, too? And who knows when the finish line will arrive? Given such quandaries and uncertainties, how are the irrational masses supposed to cope? Well, they don’t — or so Thaler would like to believe. So it follows that Thaler must cope for them, but only when it comes to his pet projects (e.g., automatic enrollment in 401(k) plans). He’s silent about the myriad other decisions that real people face.
Why should Thaler care if X chooses the “wrong” career, takes a mortgage he can’t afford, doesn’t save “enough” for retirement, chooses the “wrong” spouse, or has “too many” children? It’s paternalistic thinking like Thaler’s that leads politicians to concoct programs that transfer the cost of bad choices from those who make them to those who are just trying to live their lives without making them. I expect that Thaler would respond by saying that government is already in the business of making such transfers, so the best thing is to reduce the need for them. No, the best thing is to make individuals responsible for the consequences of their choices, and let them — and others — learn from the consequences. The best thing is to dismantle the dependency-creating, handout-giving functions of government. And a behavioral economist like Thaler is just the kind of person who could mount a strong economic case against those functions — if he were of a mind to do so.
Thaler doesn’t seem to be of a mind to do so because what he really wants is for people to make the “right” decisions, by his lights. Why? Because he knows what’s best for all of us. Returning to a favorite topic, he wrote:
Consider defined-contribution retirement plans like 401(k)’s. Econs would have no trouble figuring out how much to save for retirement and how to invest the money, but mere humans can find it quite tough. So knowledgeable employers have incorporated three [features] in their plan design: they automatically enroll employees (who can opt out), they automatically increase the saving rate every year, and they offer a sensible default investment choice like a target date fund. These features significantly improve the outcomes of plan participants…. [TEA: This assumes that everyone should care more about retirement income than about anything else, at the margin.]
These retirement plans also have a supposedly relevant factor: Contributions and capital appreciation are tax-sheltered until retirement. This tax break was created to induce people to save more….
[The authors of a recent study] conclude: “…Automatic enrollment or default policies that nudge individuals to save more could have larger impacts on national saving at lower social cost.”
Get it? One of the objectives of nudging people to participate in 401(k) plans is to raise the national saving rate. Saving should be a voluntary thing, and the national saving rate should emerge from voluntary decisions. It shouldn’t be dictated by those, like Thaler, who view a higher national saving rate as a holy grail, to be advanced by policies that effectively dictate the “choices” that people make. But that’s Thaler for you: Imposing his economic desiderata on others.
I was therefore irked when I learned of Thaler’s selection as the 2017 Nobel laureate in economics. (It’s actually the Swedish National Bank’s Prize in Economic Sciences in Memory of Alfred Nobel, not one of the original prizes designated in Alfred Nobel’s will.) The award led James R. Rogers to write about Thaler and behavioral economics:
[M]edia treatments of Thaler’s work, and of behavioral economics more generally, suggest that it provides a much-deserved comeuppance to conventional microeconomics. Well . . . Not quite….
… Economists, and rational choice theorists more generally, have a blind spot, [Thaler] argues, for just how often their assumptions about human behavior are inconsistent with real human behavior. That’s an important point.
Yet here’s where spin matters: Does Thaler provide a correction to previous economics, underscoring something everyone always knew but just ignored as a practical matter, or is Thaler’s work revolutionary, inviting a broad and necessary reconceptualization of standard microeconomics?…
… No. He has built a career by correcting a blind spot in modern academic economics. But his insight provides us with a “well, duh” moment rather than a “we need totally to rewrite modern economics” moment that some of his journalistic (and academic) supporters suggest it provides….
Thaler’s work underscores that the economist’s rationality postulates cannot account for all human behavior. That’s an important point. But I don’t know that many, or even any, economists very much believed the opposite in any serious way. [“Did Richard Thaler Really Shift the Paradigm in Economics?“, Library of Law and Liberty, October 11, 2017]
That’s what I said.
A non-economist, law professor Ilya Somin, joined the chorus:
Thaler and many other behavioral economics scholars argue that government should intervene to protect people against their cognitive biases, by various forms of paternalistic policies. In the best-case scenario, government regulators can “nudge” us into correcting our cognitive errors, thereby enhancing our welfare without significantly curtailing freedom.
But can we trust government to be less prone to cognitive error than the private-sector consumers whose mistakes we want to correct? If not, paternalistic policies might just replace one form of cognitive bias with another, perhaps even worse one. Unfortunately, a recent study suggests that politicians are prone to severe cognitive biases too – especially when they consider ideologically charged issues….
Even when presented additional evidence to help them correct their mistakes, Dahlmann and Petersen found that the politicians tended to double down on their errors rather than admit they might have been wrong….
Politicians aren’t just biased in their evaluation of political issues. Many of them are ignorant, as well. For example, famed political journalist Robert Kaiser found that most members of Congress know little about policy and “both know and care more about politics than about substance.”….
But perhaps voters can incentivize politicians to evaluate evidence more carefully. They can screen out candidates who are biased and ill-informed, and elect knowledgeable and objective decision-makers. Sadly, that is unlikely to happen, because the voters themselves also suffer from massive political ignorance, often being unaware of even very basic facts about public policy.
Of course, the Framers of the Constitution understood all of this in 1787. And they wisely acted on it by placing definite limits on the power of the central government. The removal of those limits, especially during and since the New Deal, is a constitutional tragedy.
Deirdre McCloskey, an economist, takes a similar view in “The Applied Theory of Bossing”:
Thaler is distinguished but not brilliant, which is par for the course. He works on “behavioral finance,” the study of mistakes people make when they talk to their stock broker. He can be counted as the second winner for “behavioral economics,” after the psychologist Daniel Kahneman. His prize was for the study of mistakes people make when they buy milk….
Once Thaler has established that you are in myriad ways irrational it’s much easier to argue, as he has, vigorously—in his academic research, in popular books, and now in a column for The New York Times—that you are too stupid to be treated as a free adult. You need, in the coinage of Thaler’s book, co-authored with the law professor and Obama adviser Cass Sunstein, to be “nudged.” Thaler and Sunstein call it “libertarian paternalism.”*…
Wikipedia lists fully 257 cognitive biases. In the category of decision-making biases alone there are anchoring, the availability heuristic, the bandwagon effect, the baseline fallacy, choice-supportive bias, confirmation bias, belief-revision conservatism, courtesy bias, and on and on. According to the psychologists, it’s a miracle you can get across the street.
For Thaler, every one of the biases is a reason not to trust people to make their own choices about money. It’s an old routine in economics. Since 1848, one expert after another has set up shop finding “imperfections” in the market economy that Smith and Mill and Bastiat had come to understand as a pretty good system for supporting human flourishing….
How to convince people to stand still for being bossed around like children? Answer: Persuade them that they are idiots compared with the great and good in charge. That was the conservative yet socialist program of Kahneman, who won the 2002 Nobel as part of a duo that included an actual economist named Vernon Smith…. It is Thaler’s program, too.
Like with the psychologist’s list of biases, though, nowhere has anyone shown that the imperfections in the market amount to much in damaging the economy overall. People do get across the street. Income per head since 1848 has increased by a factor of 20 or 30….
The amiable Joe Stiglitz says that whenever there is a “spillover” — my ugly dress offending your delicate eyes, say — the government should step in. A Federal Bureau of Dresses, rather like the one Saudi Arabia has. In common with Thaler and Krugman and most other economists since 1848, Stiglitz does not know how much his imagined spillovers reduce national income overall, or whether the government is good at preventing the spill. I reckon it’s about as good as the Army Corps of Engineers was in Katrina.
Thaler, in short, melds the list of psychological biases with the list of economic imperfections. It is his worthy scientific accomplishment. His conclusion, unsupported by evidence?
It’s bad for us to be free.
Exactly.
Consider the biography of the nudger-in-chief at The Library of Economics and Liberty. In it, the reader is treated to such “wisdom” as this:
Economists generally assume that more choices are better than fewer choices. But if that were so, argues Thaler, people would be upset, not happy, when the host at a dinner party removes the pre-dinner bowl of cashews. Yet many of us are happy that it’s gone. Purposely taking away our choice to eat more cashews, he argues, makes up for our lack of self-control.
Notice the sleight of hand by which the preferences of a few (including Thaler, presumably) are pushed front and center: “many of us are happy”. Who is “us”? And what about the preferences of everyone else, who may well comprise a majority? Thaler is happy because the host has taken an action of which he (Thaler) approves, and because he (Thaler) wants to tell the rest of us what makes us happy.
There’s more:
Thaler … noticed another anomaly in people’s thinking that is inconsistent with the idea that people are rational. He called it the “endowment effect.” People must be paid much more to give something up (their “endowment”) than they are willing to pay to acquire it. So, to take one of his examples from a survey, people, when asked how much they are willing to accept to take on an added mortality risk of one in one thousand, would give, as a typical response, the number $10,000. But a typical response by people, when asked how much they would pay to reduce an existing risk of death by one in one thousand, was $200.
Surveys are meaningless. Talk is cheap (see #5 here).
Even if the survey results are somewhat accurate, in that there is a significant gap between the two values, there is a rational explanation for such a gap. In the first instance, a person is (hypothetically) accepting an added risk, one that he isn’t already facing. In the second instance, the existing risk may be one that the person being asked considers to be very low, as applied to himself. The situations clearly aren’t symmetrical, so it’s unsurprising that the price of accepting a new risk is higher than the payment for reducing a possible risk.
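One way to see the size of the gap that Thaler reports (my framing, not his or the survey’s) is to convert each answer into an implied value per statistical life. A minimal Python sketch follows, using only the two numbers quoted above.

```python
# A minimal sketch (my framing, not from the post or Thaler's survey) that
# converts the two survey answers into implied values per statistical life,
# to make the size of the gap concrete.

def implied_value_per_life(dollars: float, risk_change: float) -> float:
    """Dollar amount divided by the change in mortality risk it prices."""
    return dollars / risk_change

wta = implied_value_per_life(10_000, 1 / 1_000)  # willing to accept for +1/1000 risk
wtp = implied_value_per_life(200, 1 / 1_000)     # willing to pay for -1/1000 risk

print(f"Implied value, willingness to accept: ${wta:,.0f}")  # $10,000,000
print(f"Implied value, willingness to pay:    ${wtp:,.0f}")  # $200,000
print(f"Ratio (WTA / WTP): {wta / wtp:.0f}x")                # 50x
```

Even a fifty-fold ratio, however, proves nothing by itself; as noted above, the two questions are not symmetrical, so the gap is not evidence of irrationality.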
That’s enough of Thaler. More than enough.
Democrats don’t really “believe all women”, at least insofar as the women in question are claiming that they have been sexually assaulted by Democrat politicians. First, there was Bill Clinton. Then, there was Joe Biden. It follows that women are to be believed only when they accuse Republican office-seekers, or persons nominated to office by Republicans.
The foregoing is obvious and has been noted many times by conservative writers. So I won’t dwell on it here.
What I want to know is why women should be believed automatically in the first place. Is there something about women that causes them to utter the truth unfailingly? Are women in fact less prone to lying than men? The evidence is mixed — if you can call psychological studies “evidence”. And we know what such studies are worth, which is to say not much.
There are some reasons to believe a person unreservedly; for example:
The person isn’t trying to sell you something, where the something might be a used car, a house, or a story that will advance that person’s interest (including revenge against a particular person or class of persons).
You have known that person for a very long time and have never known the person to attempt deception, other than to tell a “white lie” to spare another person’s feelings (e.g., you’re not fat) or to get a child to do the right thing (e.g., Santa Claus is watching you).
You are engaged in a business relationship with the person and it is a sure thing that he will suffer financially if he is being less than honest about his side of the deal.
Accusations of sexual assault don’t fit the bill, unless you have known the accuser for a long time and trust her (or him) because of her (or his) record of veracity. But accusations should be taken seriously and investigated.
Take Christine Blasey Ford (please). Her story was incredible from the beginning because of its vagueness, lack of corroboration, her known animus toward conservatives, and Brett Kavanaugh’s track record with respect to women.
Alternatively, there’s Tara Reade, Biden’s long-forgotten accuser. (Perhaps she’ll show up in Hunter’s laptop.) Her story isn’t incredible because of its specificity, partial corroboration, Reade’s long-standing political views (a rather left-wing Democrat), and Joe Biden’s track record with respect to women.
Nevertheless, I continue to withhold judgement about Reade’s story — unlike most Democrats (who refuse to credit it) and too many Republicans (who are eager to believe it). (Contrarily, the evidence of Joe’s graft mounts daily.)
Ronald Reagan used to say (quoting Lenin and Stalin) “trust, but verify”. I say “verify, then trust”.
Capitalism, when it isn’t being used as a “dirty word” by “socialist democrats” (the correct rendering, and an oxymoron at that), simply entails three connected things:
There is private ownership of the means of production — capital — which consists of the hardware, software, and processes used to produce goods and services.
There are private markets in which capital, goods, and services are bought by users, which are (a) firms engaged in the production and sale of capital, goods, and services and (b) consumers of the finished products.
The owners of capital, like the owners of labor that is applied to capital (i.e., “workers” ranging from CEOs and high-powered scientists to store clerks and ditch-diggers), are compensated according to the market valuation of the worth of their contributions to the production of goods and services. The market valuation depends ultimately on the valuation of the finished products by the final consumers of those products.
For simplicity, I omitted the messy details of the so-called mixed economy — like that of the U.S. — in which governments are involved in producing some goods and services that could be produced privately, regulating what may be offered in private markets, regulating the specifications of the goods and services that are offered in private markets, regulating the compensation of market participants, and otherwise distorting private markets through myriad taxes and social-welfare schemes — including many that don’t directly involve government spending, except to enforce them (e.g., anti-discrimination laws and environmental regulations).
None of what I have just said is the tragic aspect of capitalism to which the title of this post refers. Yes, government interventions in markets are extremely costly, and some of them have tragic consequences (e.g., the mismatch effect of affirmative action, which causes many blacks to fail in college and in the workplace; the withholding of beneficial drugs by the FDA; and the vast waste of resources in the name of environmentalism and climate change). But all of that belongs under the heading of tragic government.
One tragedy of capitalism, which I have touched on before, is that it leads to alienation:
This much of Marx’s theory of alienation bears a resemblance to the truth:
The design of the product and how it is produced are determined, not by the producers who make it (the workers)….
[T]he generation of products (goods and services) is accomplished with an endless sequence of discrete, repetitive, motions that offer the worker little psychological satisfaction for “a job well done.”
These statements are true not only of assembly-line manufacturing. They’re also true of much “white collar” work — certainly routine office work and even a lot of research work that requires advanced degrees in scientific and semi-scientific disciplines (e.g., economics). They are certainly true of “blue collar” work that is rote, and in which the worker has no ownership stake….
The life of the hunter-gatherer, however fraught, is less rationalized than the kind of life that’s represented by intensive agriculture, let alone modern manufacturing, transportation, wholesaling, retailing, and office work.
The hunter-gatherer isn’t a cog in a machine, he is the machine: the shareholder, the co-manager, the co-worker, and the consumer, all in one. His work with others is truly cooperative. It is like the execution of a game-winning touchdown by a football team, and unlike the passing of a product from stage to stage in an assembly line, or the passing of a virtual piece of paper from computer to computer.
The hunter-gatherer’s social milieu was truly societal [and hunter-gatherer bands had an upper limit of 150 persons]….
Nor is the limit of 150 unique to hunter-gatherer bands. [It is also found in communal societies like Hutterite colonies, which spin off new colonies when the limit of 150 is reached.]
What all of this means, of course, is that for the vast majority of people there’s no going back. How many among us are willing — really willing — to trade our creature comforts for the “simple life”? Few would be willing when faced with the reality of what the “simple life” means; for example, catching or growing your own food, dawn-to-post-dusk drudgery, nothing resembling culture as we know it (high or low), and lives that are far closer to nasty, brutish, and short than today’s norms.
There is also an innate tension between capitalism and morality, as I say here:
Conservatives rightly defend free markets because they exemplify the learning from trial and error that underlies the wisdom of voluntarily evolved social norms — norms that bind a people in mutual trust, respect, and forbearance.
Conservatives also rightly condemn free markets — or some of the produce of free markets — because that produce is often destructive of social norms.
Thanks to a pointer from my son, I have since read Edward Feser’s “Hayek’s Tragic Capitalism” (Claremont Review of Books, April 30, 2019), which takes up the tension between capitalism and conservatism:
Precisely because they arise out of an impersonal process, market outcomes are amoral. Hayek thought it unwise to defend capitalism by emphasizing the just rewards of hard work, because there simply is no necessary connection between virtue of any kind, on the one hand, and market success on the other. Moreover, the functioning of the market economy depends on adherence to rules of behavior that abstract from the personal qualities of individuals. In particular, it depends on treating most of one’s fellow citizens not as members of the same tribe, religion, or the like, but as abstract economic actors—property owners, potential customers or clients, employers or employees, etc. It requires allowing these actors to pursue whatever ends they happen to have, rather than imposing some one overarching collective end, after the fashion of the central planner.
Hayek did not deny that all of this entailed an alienating individualism. On the contrary, he emphasized it, and warned that it was the deepest challenge to the stability of capitalism, against which defenders of the market must always be on guard. This brings us to his account of the moral defects inherent in human nature. To take seriously the thesis that human beings are the product of biological evolution is, for Hayek, to recognize that our natural state is to live in small tribal bands of the sort in which our ancestors were shaped by natural selection. Human psychology still reflects this primitive environment. We long for solidarity with a group that shares a common purpose and provides for its members based on their personal needs and merits. The impersonal, amoral, and self-interested nature of capitalist society repels us. We are, according to Hayek, naturally socialist.
The trouble is that socialism is, again, simply impossible in modern societies, with their vast populations and unimaginably complex economic circumstances. Socialism is practical only at the level of the small tribal bands in which our psychology was molded. Moreover, whereas in that primitive sort of context, everyone shares the same tribal identity and moral and religious outlook, in modern society there is no one tribe, religion, or moral code to which all of its members adhere. Socialism in the context of a modern society would therefore also be tyrannical as well as unworkable, since it would require imposing an overall social vision with which at most only some of its members agree. A socialist society cannot be a diverse society, and a diverse society cannot be socialist.
Socialism in large societies requires direction from on high, direction that cannot fail to be inefficient and oppressive.
Returning to Feser:
… Hayek — who had, decades before, penned a famous essay titled “Why I Am Not a Conservative” — went in a strongly Burkean conservative direction [in his last books]. Just as market prices encapsulate economic information that is not available to any single mind, so too, the later Hayek argued, do traditional moral rules that have survived the winnowing process of cultural evolution encapsulate more information about human well-being than the individual can fathom. Those who would overthrow traditional morality wholesale and replace it with some purportedly more rational alternative exhibit the same hubris as the socialist planner who foolishly thinks he can do better than the market.
Unsurprisingly, he took the institution of private property to be a chief example of the benefits of traditional morality. But he also came to emphasize the importance of the family as a stabilizing institution in otherwise coldly individualist market societies, and—despite his personal agnosticism—of religion as a bulwark of the morality of property and the family. He lamented the trend toward “permissive education” and “freeing ourselves from repressions and conventional morals,” condemned the ’60s counter-culture as “non-domesticated savages,” and placed Sigmund Freud alongside Karl Marx as one of the great destroyers of modern civilization.
Hayek was committed, then, to a kind of fusionism—the project of marrying free market economics to social conservatism. Unlike the fusionism associated with modern American conservatism, though, Hayek’s brand had a skeptical and tragic cast to it. He thought religion merely useful rather than true, and defended bourgeois morality as a painful but necessary corrective to human nature rather than an expression of it. In his view, human psychology has been cobbled together by a contingent combination of biological and cultural evolutionary processes. The resulting aggregate of cognitive and affective tendencies does not entirely cohere, and never will.
Feser then summarizes three critiques of Hayek’s fusionism, one by Irving Kristol, one by Roger Scruton, and one by Andrew Gamble, in Hayek: The Iron Cage of Liberty (1996). Gamble’s critique, according to Feser, is that Hayek
never adequately faced up to the dangers posed by corporate power. Most people cannot be entrepreneurs, and even those who can cannot match the tremendous advantages afforded by the deep pockets, legal resources, and other assets of a corporation. Vast numbers of citizens in actually existing capitalist societies simply must work for a corporation if they are going to work at all. But that entails an economic dependency of individuals on centralized authority, of a kind that is in some ways analogous to what Hayek warned of in his critique of central planning. As with socialism, conformity to the values of centralized authority becomes, in effect, a precondition of the very possibility of feeding oneself. By way of example, we may note that the political correctness Hayek would have despised is today more effectively and directly imposed on society by corporate Human Resources departments than by government.
Feser concludes with this:
None of this implies a condemnation of capitalism per se. The problem is one of fetishizing capitalism, of making market imperatives the governing principles to which all other aspects of social order are subordinate. The irony is that this is a variation on the same basic error of which socialism is guilty—what Pope John Paul II called “economism,” the reduction of human life to its economic aspect. Even F.A. Hayek, a far more subtle thinker than other defenders of the free economy, ultimately succumbed to this tendency. Too many modern conservatives have followed his lead. They have been so fixated on socialism and its economic irrationality that they have lost sight of other, ultimately more insidious, threats to Western civilization—including economism itself. To paraphrase G.K. Chesterton, a madman is not someone who has lost his economic reason, but someone who has lost everything but his economic reason.
Alan Jacobs offers an orthogonal view in his essay, “After Technopoly” (The New Atlantis, Spring 2019):
The apparent captain of technopoly [the universal and virtually inescapable rule of our everyday lives by those who make and deploy technology] is what [Michael] Oakeshott calls a “rationalist”…. [T]hat captain can achieve his political ends most readily by creating people who are not rationalists. The rationalists of Silicon Valley don’t care whom you’re calling out or why, as long as you’re calling out someone and doing it on Twitter….
Oakeshott wrote “The Tower of Babel” at roughly the same time as his most famous essay, “Rationalism in Politics” (1947), with which it shares certain themes. At that moment rationalism seemed, and indeed was, ascendant. Rejecting the value of habit and tradition — and of all authority except “reason” — the rationalist is concerned solely with the present as a problem to be solved by technique; politics simply is social engineering….
Oakeshott foresaw the coming of a world — to him a sadly depleted world — in which everyone, or almost everyone, would be a rationalist.
But that isn’t what happened. What happened was the elevation of a technocratic elite into a genuine technopoly, in which transnational powers in command of digital technologies sustain their nearly complete control by using the instruments of rationalism to ensure that the great majority of people acquire their moral life by habituation. This habituation, of course, is not the kind Oakeshott hoped for but a grossly impoverished version of it, one in which we do not adopt our affections and conduct from families, friends, and neighbors, but rather from the celebrity strangers who populate our digital devices.
In sum, capitalism is an amoral means to material ends. It is not the servant of society, properly understood. Nor is it the servant of conservative principles, which include (inter alia) the preservation of traditional morality, both as an end and as a binding and civilizing force.
One aspect of capitalism is that it enables the accumulation of great wealth and power. The “robber barons” of the late 19th century and early 20th century accumulated great wealth by making possible the production of things (e.g., oil and steel) that made life materially better for Americans rich and poor.
Though the “robber barons” undoubtedly wielded political power, they did so in an age when mass media consisted of printed periodicals (newspapers and magazines). But newspapers and magazines never dominated the attention of the public in the way that radio, movies, television, and electronically transmitted “social media” do today. Moreover, there were far more printed periodicals then than now, and they offered competing political views (unlike today’s periodicals, which are mainly left of center, when not merely frivolous).
Which is to say that the “robber barons” may have “bought and sold” politicians, but they weren’t in the business of — or very effective at — shaping public opinion. (If they had been, they wouldn’t have been targets of incessant attacks by populist politicians, and anti-trust legislation wouldn’t have been enacted to great huzzahs from the public.)
Today’s “robber barons”, by contrast, have accumulated their wealth by providing products and services that enable them to shape public opinion. Joel Kotkin puts it this way:
In the past, the oligarchy tended to be associated with either Wall Street or industrial corporate executives. But today the predominant and most influential group consists of those atop a handful of mega-technology firms. Six firms—Amazon, Apple, Facebook, Google, Microsoft, and Netflix—have achieved a combined net worth equal to one-quarter of the NASDAQ, more than the next 282 firms combined and equal to the GDP of France. Seven of the world’s ten most valuable companies come from this sector. Tech giants have produced eight of the twenty wealthiest people on the planet. Among the nation’s billionaires, all those under forty live in the state of California, with twelve in San Francisco alone. In 2017, the tech industry produced eleven new billionaires, mostly in California….
Initially many Americans, even on the left, saw the rise of the tech oligarchy as both transformative and positive. Observing the rise of the technology industry, the futurist Alvin Toffler prophesied “the dawn of a new civilization,” with vast opportunities for societal and human growth. But today we confront a reality more reminiscent of the feudal past—with ever greater concentrations of wealth, along with less social mobility and material progress.
Rather than Toffler’s tech paradise, we increasingly confront what the Japanese futurist Taichi Sakaiya, writing three decades ago, saw as the dawn of “a high-tech middle ages.” Rather than epitomizing American ingenuity and competition, the tech oligarchy increasingly resembles the feudal lords of the Middle Ages. With the alacrity of the barbarian warriors who took control of territory after the fall of the Roman Empire, they have seized the strategic digital territory, and they ruthlessly defend their stake.
Such concentrations of wealth naturally seek to concentrate power. In the Middle Ages, this involved the control of land and the instruments of violence. In our time, the ascendant tech oligarchy has exploited the “natural monopolies” of web-based business. Their “super-platforms” depress competition, squeeze suppliers, and reduce opportunities for potential rivals, much as the monopolists of the late nineteenth century did. Firms like Google, Facebook, and Microsoft control 80 to 90 percent of their key markets and have served to further widen class divides not only in the United States but around the world.
Once exemplars of entrepreneurial risk-taking, today’s tech elites are now entrenched monopolists. Increasingly, these firms reflect the worst of American capitalism—squashing competitors, using indentured servants from abroad for upwards of 40 percent of their Silicon Valley workforce, fixing wages, and avoiding taxes—while creating ever more social anomie and alienation.
The tech oligarchs are forging a post-democratic future, where opportunity is restricted only to themselves and their chosen few. As technology investor Peter Thiel has suggested, democracy—based on the fundamental principles of individual responsibility and agency—does not fit comfortably with a technocratic mindset that believes superior software can address and modulate every problem. [“America’s Drift Toward Feudalism“, American Affairs Journal, Winter 2019]
I can’t deny that the rise of the tech oligarchs and their willingness and ability to move public opinion leftward probably influenced my view of capitalism. Not that there’s anything wrong with that. It is evidence that, contra Keynes, I am not the slave of some defunct economist.
Will public opinion shift enough to cause the containment of today’s “robber barons”? I doubt it. Most Republican politicians are trapped by their pro-capitalist rhetoric. Most Democrat politicians are trapped by their ideological alignment with the “barons” and the affluent classes that are dependent on and allied with them.
There is an unfortunate tendency in some circles to dismiss the idea that race is a social construct. Well, it is one. But as I will point out, it is also a useful one.
Science, generally, is a social construct. Everything that human beings do and “know” is a social construct, in that human behavior and “knowledge” are products of acculturation and the irrepressible urge to name and classify things.
Whence that urge? You might say that it’s genetically based. But our genetic inheritance is inextricably twined with social constructs — preferences for, say, muscular men and curvaceous women, and so on. What we are depends not only on our genes but also on the learned preferences that shape the gene pool. There’s no way to sort them out, despite claims (from the left) that human beings are blank slates and claims (from loony libertarians) that genes count for everything.
All of that, however true it may be (and I believe it to be true), is a recipe for solipsism, nay, for Humean chaos. The only way out of this morass, as I see it, is to admit that human beings (or most of them) possess a life-urge that requires them to make distinctions: friend vs. enemy, workable vs. non-workable ways of building things, etc.
Race is among those useful distinctions, for reasons that will be obvious to anyone who has actually observed the behaviors of groups that can be sorted along racial lines, rather than condescending to “tolerate” or “celebrate” differences (a luxury that is easily indulged in the safety of ivory towers and upscale enclaves). Those lines may be somewhat arbitrary, for, as many have noted, there are more genetic differences within a racial classification than between racial classifications. Which is a fatuous observation, in that there are more genetic differences among, say, the apes than there are between what are called apes and what are called human beings.
In other words, the usual “scientific” objection to the concept of race is based on a false premise, namely, that all genetic differences are equal. If one believes that, one should be just as willing to live among apes as among human beings. But human beings do not choose to live among apes (though a few human beings do choose to observe them at close quarters). Similarly, human beings — for the most part — do not choose to live among people from whom they are racially distinct, and therefore (usually) socially distinct.
Why? Because under the skin we are not all alike. Under the skin there are social (cultural) differences that are causally correlated with genetic differences.
Race may be a social construct, but — like engineering — it is a useful one.
For several decades I preferred feature films to TV fare. My preference has flipped in the past several years, as the quality of feature films has declined and TV fare has become more watchable with the advent of Amazon Prime Video and the access it affords to excellent offerings from Britain and Scandinavia. The following piece is an assessment of the more than 2,400 feature films that I had seen as of four years ago. The subsequent addition of a few dozen feature films to my inventory hasn’t changed my assessment.
According to the lists of movies that I keep at the Internet Movie Database (IMDb), I watched 2,444 feature films released from 1920 to 2018. That number does not include such forgettable fare as the grade-B westerns, war movies, and Bowery Boys comedies that I saw on Saturdays, at two-for-a-nickel, during my pre-teen years.
I have assigned ratings (on IMDb’s 10-point scale*) to 2,141 of the 2,444 films. (By the time I joined IMDb in 2001 and got around to assigning ratings, I didn’t remember 303 of the films well enough to rate them.) I have given 691 (32 percent) of the 2,141 films a rating of 8, 9, or 10. The proportion of high ratings does not indicate low standards on my part; rather, it indicates the care with which I have tried to choose films for viewing. (More about that, below.)
I call the 691 highly rated films my favorites. I won’t list all of them here, but I will mention some of them — and their stars — as I assess the ups-and-downs (mostly downs) in the art of film-making.
I must first admit two biases that have shaped my selection of favorite movies. First, my list of films and favorites is dominated by American films starring American actors. But that dominance is merely numerical. For artistic merit and great acting, I turn to foreign films as often as possible.
A second bias is my general aversion to silent features and early talkies. Most of the directors and actors of the silent era relied on “stagy” acting to compensate for the lack of sound — a style that persisted into the early 1930s. There were exceptions, of course. Consider Charlie Chaplin, whose genius as a director and comic actor made a virtue of silence; my list of favorites from the 1920s and early 1930s includes three of Chaplin’s silent features: The Gold Rush (1925), The Circus (1928), and City Lights (1931). Perhaps a greater comic actor (and certainly a more physical one) than Chaplin was Buster Keaton, with six films on my list of favorites of the same era: Our Hospitality (1923), The Navigator (1924), Sherlock Jr. (1924), The General (1926), The Cameraman (1928), and Steamboat Bill Jr. (1928). Harold Lloyd, in my view, ranks with Keaton for sheer laugh-out-loud physical humor. My seven Lloyd favorites from his pre-talkie oeuvre are Grandma’s Boy (1922), Dr. Jack (1922), Safety Last! (1923), Girl Shy (1923), Hot Water (1924), For Heaven’s Sake (1926), and Speedy (1928). My list of favorites includes only nine other films from the years 1920-1931, among them F.W. Murnau’s Nosferatu the Vampire (1922) and Fritz Lang’s Metropolis (1927) — the themes of which (supernatural and futuristic, respectively) enabled them to transcend the limitations of silence — and such early talkies as Whoopee! (1930), and Dracula (1931).
In summary, I can recall having seen only 51 feature films that were released in 1920-1931. Of the 51, I have rated 50, and 25 of them (50 percent) rank among my favorites. But given the relatively small number of films from 1920-1931 in my personal catalog, I will say no more here about that era. I will focus, instead, on movies released from 1932 to the present — which I consider the “modern” era of film-making.
My inventory of modern films comprises 2,393 titles, 2,091 of which I have rated, and 666 of those (32 percent) at 8, 9, or 10 on the IMDb scale. But those numbers mask vast differences in the quality of modern films, which were produced in three markedly different eras (a simple tallying sketch follows the list):
Golden Age (1932-1942) — 238 films seen, 208 rated, 117 favorites (56 percent)
Abysmal Years (1943-1965) — 370 films seen, 289 rated, 110 favorites (38 percent)
Vile Epoch (1966-present) — 1,785 films seen, 1,594 rated, 439 favorites (28 percent)
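For anyone who keeps a similar list, the era breakdown above is nothing more than a tally. Here is a minimal sketch in Python of one way to produce it. It assumes a hypothetical export file, films.csv, with a release year and an optional personal rating per title; the file name and the Year and Rating column names are my assumptions, not anything IMDb prescribes.

import csv
from collections import defaultdict

ERAS = [
    ("Golden Age", 1932, 1942),
    ("Abysmal Years", 1943, 1965),
    ("Vile Epoch", 1966, 2018),
]

def era_of(year):
    for name, first, last in ERAS:
        if first <= year <= last:
            return name
    return None

seen = defaultdict(int)       # films watched, rated or not
rated = defaultdict(int)      # films given a rating
favorites = defaultdict(int)  # films rated 8, 9, or 10

with open("films.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # assumed columns: Title, Year, Rating
        era = era_of(int(row["Year"]))
        if era is None:
            continue
        seen[era] += 1
        if row["Rating"].strip():
            rated[era] += 1
            if int(row["Rating"]) >= 8:
                favorites[era] += 1

for name, _, _ in ERAS:
    share = 100 * favorites[name] / rated[name] if rated[name] else 0
    print(f"{name}: {seen[name]} seen, {rated[name]} rated, "
          f"{favorites[name]} favorites ({share:.0f} percent)")

Run against a list like mine, the script prints one line per era in the same format as the breakdown above.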
There is a so-called Golden Age of Hollywood, but it is defined by the structure of the industry, not the quality of output. What made my Golden Age golden, and why did films go from golden to abysmal to vile? Read on.
To understand what made the Golden Age golden, let’s consider what makes a great movie: a novel or engaging plot; dialogue that is fresh (and witty, if the film calls for it); strong performances (acting, singing, and/or dancing); a “mood” that draws the viewer in; excellent production values (locations, cinematography, sets, costumes, etc.); and historical or topical interest. (A great animated feature may be somewhat weaker on plot and dialogue if the animations and sound track are first-rate.) The Golden Age was golden largely because the advent of sound fostered creativity — plots could be advanced through dialogue, actors could deliver real dialogue, and singers and orchestras could deliver real music. It took a few years to fully realize the potential of sound, but movies hit their stride just as the country was seeking respite from the cares of a deep and lingering economic depression.
Studios vied with each other to entice movie-goers with new plots (or plots that seemed new when embellished with sound), fresh and often wickedly witty dialogue, and — perhaps most important of all — captivating performers. The generation of superstars that came of age in the 1930s consisted mainly of handsome men and beautiful women, blessed with distinctive personalities, and equipped by their experience on the stage to deliver their lines vibrantly and with impeccable locution.
What were the great movies of the Golden Age, and who starred in them? Here’s a sample of the titles: 1932 — Grand Hotel; 1933 — Dinner at Eight, Flying Down to Rio, Morning Glory; 1934 — It Happened One Night, The Thin Man, Twentieth Century; 1935 — Mutiny on the Bounty, A Night at the Opera, David Copperfield; 1936 — Libeled Lady, Mr. Deeds Goes to Town, Show Boat; 1937 — The Awful Truth, Captains Courageous, Lost Horizon; 1938 — The Adventures of Robin Hood, Bringing up Baby, Pygmalion; 1939 — Destry Rides Again, Gunga Din, The Hunchback of Notre Dame, The Wizard of Oz, The Women; 1940 — The Grapes of Wrath, His Girl Friday, The Philadelphia Story; 1941 — Ball of Fire, The Maltese Falcon, Suspicion; 1942 — Casablanca, The Man Who Came to Dinner, Woman of the Year.
And who starred in the greatest movies of the Golden Age? Here’s a goodly sample of the era’s superstars, a few of whom came on the scene toward the end: Jean Arthur, Fred Astaire, John Barrymore, Lionel Barrymore, Ingrid Bergman, Humphrey Bogart, James Cagney, Claudette Colbert, Ronald Colman, Gary Cooper, Joan Crawford, Bette Davis, Irene Dunne, Nelson Eddy, Errol Flynn, Joan Fontaine, Henry Fonda, Clark Gable, Cary Grant, Jean Harlow, Olivia de Havilland, Katharine Hepburn, William Holden, Leslie Howard, Allan Jones, Charles Laughton, Carole Lombard, Myrna Loy, Jeanette MacDonald, Joel McCrea, Merle Oberon, Laurence Olivier, William Powell, Ginger Rogers, Rosalind Russell, Norma Shearer, Barbara Stanwyck, James Stewart, and Spencer Tracy. There were other major stars, and many popular supporting players, but it seems that a rather small constellation of superstars commanded a disproportionate share of the leading roles in the best movies of the Golden Age.
Why did movies go into decline after 1942’s releases? World War II certainly provided an impetus for the end of the Golden Age. The war diverted resources from the production of major theatrical films; grade-A features gave way to low-budget fare. And some of the superstars of the Golden Age went off to war. (Two who remained civilians — Leslie Howard and Carole Lombard — were killed during the war.) With the resumption of full production in 1946, the surviving superstars who hadn’t retired were fading, though their presence still propelled many films of the Abysmal Years.
Stars come and go, however, as they have done since Shakespeare’s day. The decline into the Abysmal Years and Vile Epoch has deeper causes than the dimming of old stars:
The Golden Age had deployed all of the themes that could be used without explicit sex, graphic violence, and crude profanity — none of which became an option for American movie-makers until the mid-1960s.
Prejudice got significantly more play after World War II, but it’s a theme that can’t be used very often without becoming trite. And trite it has become, now that movies have become vehicles for decrying prejudice against every real or imagined “victim” group under the sun.
Other attempts at realism (including film noir) resulted mainly in a lot of turgid trash laden with unrealistic dialogue and shrill emoting — keynotes of the Abysmal Years.
Hollywood productions often sank to the level of TV, apparently in a misguided effort to compete with that medium. The use of garish technicolor — a hallmark of the 1950s — highlighted the unnatural neatness and cleanliness of settings that should have been rustic if not squalid. Sound tracks became lavishly melodramatic and deafeningly intrusive.
The transition from abysmal to vile coincided with the cultural “liberation” of the mid-1960s, which saw the advent of the “f” word in mainstream films. Yes, the Vile Epoch brought more realistic plots and better acting (thanks mainly to the Brits). But none of that compensates for the anti-social rot that set in around 1966: drug-taking, drinking and smoking are glamorous; profanity proliferates to the point of annoyance; sex is all about lust and little about love; violence is gratuitous and beyond the point of nausea; corporations and white, male Americans with money are evil; the U.S. government (when Republican-controlled) is in thrall to that evil; etc., etc., etc.
To be sure, there have been outbreaks of greatness since the Golden Age. During the Abysmal Years, for example, aging superstars appeared in such greats as Life With Father (Dunne and Powell, 1947), Key Largo (Bogart and Lionel Barrymore, 1948), Edward, My Son (Tracy, 1949), The African Queen (Bogart and Hepburn, 1951), High Noon (Cooper, 1952), Mr. Roberts (Cagney, Fonda, and Powell, 1955), The Old Man and the Sea (Tracy, 1958), Anatomy of a Murder (Stewart, 1959), North by Northwest (Grant, 1959), Inherit the Wind (Tracy, 1960), Long Day’s Journey into Night (Hepburn, 1962), Advise and Consent (Fonda and Laughton, 1962), The Best Man (Fonda, 1964), and Othello (Olivier, 1965). A new generation of stars appeared in such greats as The Lavender Hill Mob (Alec Guinness, 1951), Singin’ in the Rain (Gene Kelly, 1952), The Bridge on the River Kwai (Guinness, 1957), The Hustler (Paul Newman, 1961), Lawrence of Arabia (Peter O’Toole, 1962), and Dr. Zhivago (Julie Christie, 1965).
Similarly, the Vile Epoch — in spite of its seaminess — has yielded many excellent films and new stars. Some of the best films (and their stars) are A Man for All Seasons (Paul Scofield, 1966), Midnight Cowboy (Dustin Hoffman, 1969), MASH (Donald Sutherland, Elliott Gould, 1970), The Godfather (Marlon Brando, Al Pacino, 1972), Papillon (Hoffman, Steve McQueen, 1973), One Flew over the Cuckoo’s Nest (Jack Nicholson, 1975), Star Wars and its sequels (Harrison Ford, 1977, 1980, 1983), The Great Santini (Robert Duvall, 1979), The Postman Always Rings Twice (Nicholson, Jessica Lange, 1981), The Year of Living Dangerously (Sigourney Weaver, Mel Gibson, 1982), Tender Mercies (Duvall, 1983), A Room with a View (Helena Bonham Carter, Daniel Day Lewis, 1985), Mona Lisa (Bob Hoskins, 1986), Fatal Attraction (Glenn Close, 1987), 84 Charing Cross Road (Anne Bancroft, Anthony Hopkins, Judi Dench, 1987), Dangerous Liaisons (John Malkovich, Michelle Pfeiffer, 1988), Henry V (Kenneth Branagh, 1989), Reversal of Fortune (Close and Jeremy Irons, 1990), Dead Again (Branagh, Emma Thompson, 1991), The Crying Game (1992), Much Ado about Nothing (Branagh, Thompson, Keanu Reeves, Denzel Washington, 1993), Trois Couleurs: Bleu (Juliette Binoche, 1993), Richard III (Ian McKellen, Annette Bening, 1995), Beautiful Girls (Natalie Portman, 1996), Comedian Harmonists (1997), Tango (1998), Girl, Interrupted (Winona Ryder, 1999), Iris (Dench, 2000), High Fidelity (John Cusack, 2000), Chicago (Renee Zellweger, Catherine Zeta-Jones, Richard Gere, 2002), Master and Commander: The Far Side of the World (Russell Crowe, 2003), Finding Neverland (Johnny Depp, Kate Winslet, 2004), Capote (Philip Seymour Hoffman, 2005), The Chronicles of Narnia: The Lion, the Witch, and the Wardrobe (2005), The Painted Veil (Edward Norton, Naomi Watts, 2006), Breach (Chris Cooper, 2007), The Curious Case of Benjamin Button (Brad Pitt, 2008), The King’s Speech (Colin Firth, 2010), Saving Mr. Banks (Thompson, Tom Hanks, 2013), and Brooklyn (Saoirse Ronan, 2015).
But every excellent film produced during the Abysmal Years and Vile Epoch has been surrounded by outpourings of dreck, schlock, and bile. The generally tepid effusions of the Abysmal Years were succeeded by the excesses of the Vile Epoch: films that feature noise, violence, sex, and drugs for the sake of noise, violence, sex, and drugs; movies whose only “virtue” is their appeal to such undiscerning groups as teeny-boppers, wannabe hoodlums, resentful minorities, and reflexive leftists; movies filled with “bathroom” and other varieties of “humor” so low as to make the Keystone Cops seem paragons of sophisticated wit.
In sum, movies have become progressively worse since the end of the Golden Age — and I have the numbers to prove it.
First, I should establish that I am picky about the films that I choose to watch:

Note: These averages are for films designated by IMDb as English-language (which includes foreign-language films with English subtitles or dubbing): about 84,000 in all as of August 24, 2019.
The next graph illustrates three points:
I watched at least as many films of the 1930s as of the 1940s, so my higher ratings for films of the 1930s aren’t due to greater selectivity in choosing films from that decade. Further, there is a steady downward trend in my ratings, which began long before the “bulge” in my viewing of movies released from the mid-1980s to about 2010. The downward trend continued despite the relative paucity of titles released after 2010. (It is plausible, however, that the late uptick is due to heightened selectivity in choosing recent releases.)
IMDb users, on the whole, have overrated the films of the early 1940s to mid-1980s and mid-1990s to the present. The ratings for films released since the mid-1990s — when IMDb came on the scene — undoubtedly reflect the dominance of younger viewers who “grew up” with IMDb, who prefer novelty to quality, and who have little familiarity with earlier films. (I have rated almost 1,600 films that were released in 1996-2018, and also 1,200 films from 1932-1995.)
My ratings, based on long experience and exacting standards, indicate that movies not only are not better than ever, but are generally getting worse as the years roll on. The recent uptick in my ratings can be attributed to selectivity — I have seen (and rated) only 23 feature films that were released in 2015-2018.

Another indication that movies are generally getting worse is the increasing frequency of what I call unwatchable films. These are films that I watched just long enough to evaluate as trash, which earns them my rating of 1 (the lowest allowed by IMDb). The trend is obvious:

The graph represents 61 films, most of which earned good ratings from IMDb users:

You have been warned.
Will the Vile Epoch End? I’d bet against it, but I’ll keep watching (occasionally) nonetheless. There’s an occasional nugget of gold in the sea of mud.
There are those who mistake mud for gold nuggets. Two examples are Birdman, which won the Oscar for Best Picture of 2014, and a more recent Best Picture winner, The Shape of Water. I tried to watch Birdman, but it failed to rise above trendy quirkiness, foul language, and stilted (though improvised) dialogue. I turned it off. It’s the only Best Picture winner, of those that I’ve watched, that I couldn’t sit through. And I will not waste even a minute on The Shape of Water.
My general point is that there are many, many films that are better than the Best Picture. Business Insider offers a ranking of Best-Picture winners in “All 91 Oscar Best-Picture Winners, Ranked from Worst to Best by Movie Critics“, which covers releases through 2018. Business Insider bases its ranking on critics’ reviews, as summarized at Rotten Tomatoes. But the Business Insider piece doesn’t help the viewer who’s in search of a better film than those that have been voted Best Picture.
I am here to help, with the aid of ratings given by users at Internet Movie Database (IMDb). IMDb user ratings aren’t a sure guide to artistic merit — as the latter is judged by members of the Academy of Motion Picture Arts and Sciences (AMPAS), or by movie critics. But members of AMPAS and movie critics are notoriously wrong-headed about artistic merit. The aforementioned The Shape of Water exemplifies their wrong-headedness:
This Guillermo del Toro film has gotten rave reviews from critics, with a Rotten Tomatoes rating of 93%, and lots of awards-season buzz. And while some elements of the film are praiseworthy, … the film turns out to be little more than a collection of manipulative and ludicrous set-ups for social-justice lectures lacking any nuance or wit. The Shape of Water assumes its audience to be idiots, which makes this the kind of painful and unoriginal exercise that is all but certain to win awards throughout this winter in Hollywood….
… The Shape of Water never allows the audience to get the message of tolerance from the central allegory of the love between Elisa and the creature. Instead, del Toro and the writers fill up every square inch with contrivances and lectures.
And those lectures come with all of the subtlety of a jackhammer. Giles lost his job in the advertising business for unexplained reasons, but which seem to be connected to his sexual orientation. He tries to reach out to a waiter at his favorite diner, who rejects him just as the waiter also gets a chance to demonstrate his racism by refusing service to a black couple, both of which are completely gratuitous to the film or to Amphibian Man’s fate. Shannon’s Strickland spouts religious nonsense to justify cruelty, and sexually oppresses his wife in another gratuitous scene, sticking his gangrenous fingers over her mouth to keep her from expressing pleasure…. The bad guys are the US space program (!) and the military, while the most sympathetic character apart from the four main protagonists is a Soviet spy. Strickland dismisses Elisa and Zelda as suspects, angrily lamenting his decision to “question the help,” just in case the class-warfare argument escaped the audience to that point. Oh, he’s also a major-league sexual harasser in the workplace. And so on. [Ed Morrissey, “The Shape of Water: Subtle As a Jackhammer and Almost As Intelligent“, Hot Air, March 5, 2018]
In any event, IMDb user ratings are a good guide to audience appeal, which certainly doesn’t preclude artistic merit. (I would argue that audience appeal is a better gauge of artistic merit than critical consensus.) For example, I have seen 10 of the 14 top-rated Oscar winners listed in the Business Insider article, but only 5 of the winners that I have seen are among my 14 top-rated Oscar winners.
The first table below lists all of the Best Picture winners among films released through 2018, ranked according to the average rating given each film by IMDb users. The second table lists the 100 highest-rated features released through 2018. (The list includes films that have been rated by at least 4,000 users, which is the approximate number for Cavalcade, the least-viewed of Oscar-winning pictures.) Only 16 of the 92 Oscar-winning films (highlighted in red) are among the top 100. (Lawrence of Arabia would be among the top 100, but IMDb categorizes it as a UK film.)
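For readers who want to build something like the second table for themselves, here is a minimal sketch, not the method used for the tables above. It assumes IMDb’s publicly downloadable dataset files (title.basics.tsv and title.ratings.tsv, from datasets.imdbws.com) and their commonly described column names; treat the file and column names as assumptions to be checked against the current files.

import pandas as pd

# Rank feature films released through 2018, with at least 4,000 user votes,
# by average IMDb user rating. Assumes the two public IMDb dataset files
# have been downloaded and unzipped into the working directory.
basics = pd.read_csv(
    "title.basics.tsv", sep="\t", na_values="\\N", low_memory=False,
    usecols=["tconst", "titleType", "primaryTitle", "startYear"],
)
ratings = pd.read_csv("title.ratings.tsv", sep="\t")

films = basics[(basics["titleType"] == "movie") & (basics["startYear"] <= 2018)]
top100 = (
    films.merge(ratings, on="tconst")
         .query("numVotes >= 4000")
         .sort_values("averageRating", ascending=False)
         .head(100)
)
print(top100[["primaryTitle", "startYear", "averageRating", "numVotes"]]
      .to_string(index=False))

The 4,000-vote floor is the same cutoff described above (the approximate vote count for Cavalcade); raising or lowering it changes how many obscure titles crowd into the top 100.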
In short, there are many better-than-Best Pictures to choose from. (Keep reading to see a list of my very-favorite films.)


Below is a list of my very-favorite films, the ones that I’ve rated 9 or 10 out of 10.

* This is my interpretation of IMDb’s 10-point scale:
1 = So bad that I quit watching after a few minutes.
2 = I watched the whole thing, but wish that I hadn’t.
3 = Barely bearable; perhaps one small, redeeming feature (e.g., a cast member).
4 = Just a shade better than a 3 — a “gut feel” grade.
5 = A so-so effort; on a par with typical made-for-TV fare.
6 = Good, but not worth recommending to anyone else; perhaps because of a weak cast, too-predictable plot, cop-out ending, etc.
7 = Enjoyable and without serious flaws, but once was enough.
8 = Superior on at least three of the following dimensions: mood, plot, dialogue, music (if applicable), dancing (if applicable), quality of performances, production values, and historical or topical interest; worth seeing twice but not a slam-dunk great film.
9 = Superior on several of the above dimensions and close to perfection; worth seeing at least twice.
10 = An exemplar of its type; can be enjoyed many times.
I observed, in November 2020, that there is no connection between CO2 emissions and the amount of CO2 in the atmosphere. This suggests that emissions have little or no effect on the concentration of CO2. A recent post at Watts Up With That? notes that emissions hit a record high in 2021. What the post doesn’t address is the relationship between emissions and the concentration of CO2 in the atmosphere.
See for yourself. Here’s the WUWT graph of emissions from energy combustion and industrial processes:

Here’s the record of atmospheric CO2:

It’s obvious that CO2 has been rising monotonically, with regular seasonal variations, while emissions have been rising irregularly — even declining and holding steady at times. This relationship (or lack thereof) supports the thesis that the rise in atmospheric CO2 is the result of warming, not its cause.
For example, Dr. Roy Spencer, in a post at his eponymous blog, writes:
[T]he greatest correlations are found with global (or tropical) surface temperature changes and estimated yearly anthropogenic emissions. Curiously, reversing the direction of causation between surface temperature and CO2 (yearly changes in SST [dSST/dt] being caused by increasing CO2) yields a very low correlation.
That is to say, temperature changes seem to drive CO2 levels, not the other way around (which is the conventional view).
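A rough way to check this for yourself is to ask how well yearly emissions track the yearly growth of atmospheric CO2. The following sketch does that under stated assumptions: NOAA’s annual global-mean CO2 file (available from the pages listed below) and a hand-built emissions.csv transcribed from the WUWT chart’s source data. The file names and column labels (co2_annmean_gl.csv with year and mean columns; emissions.csv with year and emissions columns) are my assumptions, not a guaranteed layout from either source.

import pandas as pd

# Compare yearly emissions with the yearly growth of atmospheric CO2.
# Assumed inputs: NOAA's annual global mean CO2 file (comment lines begin
# with '#', data columns include 'year' and 'mean'), and a hand-built
# emissions.csv with 'year' and 'emissions' columns.
co2 = pd.read_csv("co2_annmean_gl.csv", comment="#")
emissions = pd.read_csv("emissions.csv")

df = co2.merge(emissions, on="year").sort_values("year")
df["co2_growth"] = df["mean"].diff()             # ppm added each year
df["emissions_change"] = df["emissions"].diff()  # change in yearly emissions

# How well do emissions (and changes in emissions) track the CO2 growth rate?
print(df[["co2_growth", "emissions", "emissions_change"]].corr().round(2))

If emissions were the dominant driver, the correlation between emissions and the CO2 growth rate should be strong; a weak correlation is consistent with the point made above and with Spencer’s analysis.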
Sources for CO2 levels:
https://gml.noaa.gov/ccgg/trends/gl_data.html
https://gml.noaa.gov/ccgg/trends/data.html
Related reading: Clyde Spencer, “Anthropogenic CO2 and the Expected Results from Eliminating It” [zero, zilch, zip, nada], Watts Up With That?, March 22, 2022