Not-So-Random Thoughts (XXVI)

“Not-So-Random Thoughts” is an occasional series in which I highlight writings by other commentators on varied subjects that I have addressed in the past. Other entries in the series can be found at these links: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI, XXII, XXIII, XXIV, and XXV. For more in the same style, see “The Tenor of the Times” and “Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration”.

CONTENTS

Free Trade Rethought

The Death Penalty

State Actors in Action

Red vs. Blue

Serfdom in Our Future?

Something Cheerful

A Footnote to “Peak Civilization”

Peak Civilization?


FREE TRADE RETHOUGHT

My position on so-called free trade:

  • Estimate the amount by which the price of a foreign product or service is reduced by the actions of foreign governments or their proxies.
  • Add that amount to the price as a tariff.
  • Regularly review and adjust the schedule of tariffs.

All other trade would be unencumbered, excepting:

  • the importation of products and services otherwise restricted by U.S. law (e.g., tanks, artillery pieces)
  • the exportation of products and services that are used directly in the development, manufacture, and operation of sensitive military systems (e.g., fighter aircraft, anti-missile defenses).

Selective tariffs, based on actual costs of production, would encourage the efficient use of resources and protect American workers who would otherwise be victimized by unfair trade. But that’s it. Sweeping tariffs on imports — just to “protect” American workers — do more than protect them. They also penalize American consumers, most of whom are also workers.
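The tariff-setting steps above amount to simple arithmetic: estimate the subsidy-induced price reduction and add it back as a tariff, so the import lands at roughly its unsubsidized price. A minimal sketch, with illustrative figures of my own (not actual trade data):

```python
# Minimal sketch of the countervailing-tariff arithmetic described above.
# All prices here are hypothetical, purely for illustration.

def countervailing_tariff(unsubsidized_price: float, subsidized_price: float) -> float:
    """Tariff equals the estimated price reduction caused by foreign
    government support: the gap between the unsubsidized price and the
    subsidized import price (never negative)."""
    return max(0.0, unsubsidized_price - subsidized_price)

# A good that would sell for $100 absent foreign-government support
# is offered to U.S. buyers at $85.
tariff = countervailing_tariff(100.0, 85.0)
landed_price = 85.0 + tariff  # price faced by American consumers

print(tariff)        # 15.0
print(landed_price)  # 100.0
```

Under the scheme, the schedule would be revisited periodically, so the estimated gap (and hence the tariff) would rise or fall with the foreign government's actual support; unsubsidized goods would face no tariff at all.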

William Upton, writing in light of current events (“Make America Autarkic Again”, The American Mind, March 13, 2020), would go a lot further:

In our over-globalized world, a policy of total autarky is infeasible. But a degree of autarky should be recognized as self-evidently in America’s national interest.

Autarky, for those unfamiliar, was an economic and industrial policy of self-reliance wherein a nation need not rely on international trade for its economic survival. This is not to say that said nation rejected international trade or isolated itself from the global economic order, rather that it merely could survive on its own if necessary….

[Oren] Cass notes that sound industrial policy has allowed nations like Germany and Japan to retain strong manufacturing sectors. Cass also emphasizes the pivotal importance of manufacturing, not just for the economy, but for American communities:

[M]anufacturing is unique for the complexity of its supply chains and the interaction between innovation and production. One of the most infuriating face-palms of modern economics is the two-step that goes like this: First, wave away concern as other countries with aggressive industrial policies … attract our critical supply chains overseas, explaining that it doesn’t matter where things get made. Second, wait for people to ask “why can’t we make this or that here,” and explain that of course we can’t because all of the supply chains and expertise are entrenched elsewhere. It’s enough to make one slam one’s head into the podium.

There may be something to it.


THE DEATH PENALTY

I was surprised to read the assessment by Theodore Dalrymple, a former prison doctor, of the death penalty (“The Death Penalty’s Demise and the Withering of Authority”, Law & Liberty, February 11, 2020). On the one hand:

If I had been a prison doctor while the death penalty was still imposed in Britain, I should have had the somewhat awkward task of certifying murderers fit for execution….  It was not permitted to execute madmen even if they had been sane at the time of their crime; but with the ever-widening and loosening of psychiatric diagnosis, I should no doubt have been tempted always to find a medical reason to postpone the execution sine die. I would have found it hard to sign what would have amounted to a medical death warrant, all the more so with the man before my very eyes. Nor would I have much relished attending the execution itself, to certify that the execution had worked….

But while I should not have wanted to participate in an execution, I was nevertheless viscerally in favour of the death penalty because it seemed to me that there were crimes (though by no means all of them murder) so heinous, so despicable, that no other penalty was adequate to express society’s outrage at, or repudiation of, them. Moreover — though quite late in my career — I discovered evidence that suggested that the death penalty did in fact act as a deterrent to murder, something which has long been contested or outright denied by abolitionists.

But on the other hand:

Does its deterrent effect, then, establish the case for the death penalty, at least in Britain? No, for two reasons. First, effectiveness of a punishment is not a sufficient justification for it. For example, it might well be that the death penalty would deter people from parking in the wrong place, but we would not therefore advocate it. Second, the fact is that in all jurisdictions, no matter how scrupulously fair they try to be, errors are sometimes made, and innocent people have been put to death. This seems to me the strongest, and perhaps decisive, argument against the death penalty.

And on the third hand:

Although, on balance, I am against the death penalty, I do not assume that those who are in favour of it are necessarily moral primitives, which abolitionists often give the impression of believing. For most of our history, the rightness of the death penalty has been taken for granted, and it cannot be that we are the first decent, reflective people ever to have existed. The self-righteousness of the Europeans in this respect disgusts me when they set themselves up to judge others. France, for example, abolished the death penalty only in 1981 – AD 1981, that is, not 1981 BC. When the death penalty in Britain was abolished in 1965 after many decades of campaigning by abolitionists, more than 90 per cent of the population was still in favour of it. Almost certainly it believed, if not necessarily in a fully coherent way, that to abolish the death penalty was to weaken the authority of society and to lessen the majesty of the law. It was also to weaken the prohibition against killing and, though involving the taking of a life (the murderer’s), also lessened the sanctity of life….

In Britain, one of the effects of the abolition of the death penalty, the downward pressure on all prison sentences, has been little remarked. Punishment has to be roughly proportional to the gravity of the crime (exact proportionality cannot be achieved), but if murder attracts only 15 years’ imprisonment de facto, what sentences can be meted out to those who commit lesser, but still serious, crimes? Moreover, the charge of murder is often reduced to the lesser crime of manslaughter, in which sentences – as a consequence – are often derisory….

It is scarcely any wonder that in the years since the abolition of the death sentence, Britain has gone from being a well-ordered, non-violent, law-abiding society to being a society with the highest rate of violent crime in Western Europe. Of course, the abolition of the death penalty was not the only cause, for crime was rising in any case, but it brought its contribution to the festival of disorder that followed.

It seems to me that Dalrymple ends up arguing in favor of the death penalty. He is correct about its deterrent effect (same post). He is wrong to give heavy weight to the possibility of error. And he overlooks a conclusive argument in its favor: there is one less criminal who might be let loose to commit more crimes. All of those points and more are covered in these posts:

Does Capital Punishment Deter Homicide?
Libertarian Twaddle about the Death Penalty
A Crime Is a Crime
Crime and Punishment
Saving the Innocent?
Saving the Innocent?: Part II
More Punishment Means Less Crime
More About Crime and Punishment
More Punishment Means Less Crime: A Footnote
Clear Thinking about the Death Penalty
Let the Punishment Fit the Crime
Another Argument for the Death Penalty
Less Punishment Means More Crime
Crime, Explained
Why Stop at the Death Penalty?
Crime Revisited


STATE ACTORS IN ACTION

Once upon a time I made a case for rescuing the First Amendment from its enemies in

the telecommunications, news, entertainment, and education industries [which] have exerted their power to suppress speech because of its content….  The collective actions of these entities — many of them government-licensed and government-funded — effectively constitute a governmental violation of the Constitution’s guarantee of freedom of speech. (See Smith v. Allwright, 321 U.S. 649 (1944), and Marsh v. Alabama, 326 U.S. 501 (1946).)

Leo Goldstein (“Google and YouTube Are State Actors”, American Thinker, March 9, 2020) finds a smoking gun

in the FCC Obamanet orders of 2010 and 2015. The 2015 Obamanet Order, officially called Open Internet Order, has explicitly obligated all internet users to pay a tax to Google and YouTube in their ISP and wireless data fees. The Order even mentions Google and YouTube by name. The tax incurs tens of billions of dollars per year. More specifically, the Order said that by paying ISP fees (including mobile wireless), each user also pays for the services that ISP gives to platforms and content providers like YouTube, even if the user doesn’t use them….

Platforms and content providers are misleadingly called “edge providers” here. Thus, every ISP customer in the US is obligated to pay for the traffic generated by Google, Netflix, Facebook, and Twitter, even if he used none of them!

Off with their heads.


RED VS. BLUE

The prolific Joel Kotkin weighs in on the Red States’ economic and electoral advantages:

Even in a state as deeply blue as [California], Democrats’ disdain for the basic values and interests of their own base could unravel their now seemingly unbridgeable majority. At some point, parents, artists, minorities, small businesspeople and even sex workers will not be mollified sufficiently by a fulsome expression of good intentions. If more voters begin to realize that many of the policies being adopted are injurious, they may even begin to look again at the Republicans, particularly once the toxic President Trump is no longer on the ballot scaring the masses to toe the line. [“Democrats Risk Blowback with Leftward Turn”, newgeography, March 1, 2020]

* * *

The political and cultural war between red and blue America may not be settled in our lifetimes, but it’s clear which side is gaining ground in economic and demographic terms. In everything from new jobs—including new technology employment—fertility rates, population growth, and migration, it’s the red states that increasingly hold the advantage.

Perhaps the most surprising development is on the economic front. Over the past decade, the national media, and much of academia, have embraced the notion that the future belonged to the high-tax, high-regulation economies clustered on the East and West Coasts. The red states have been widely dismissed, in the words of the New York Times, as the land of the “left behind.”

Yet the left-behind are catching up, as economic momentum shifts away from coastal redoubts toward traditionally GOP-leaning states. Just a few years ago, states like California, Massachusetts, and New York held their own, and then some, in measurements of income growth from the Bureau of Economic Analysis. Now the fastest growth is concentrated in the Sunbelt and Great Plains. Texans’ income in the latest 2019 BEA estimates was up 4.2 percent, well above California’s 3.6 percent and twice New York’s 2.1 percent. The largest jumps—and this may matter a lot in 2020—took place in the Dakotas, Nebraska, and Iowa. [“Red v. Blue”, City Journal, February 7, 2020]

But:

[S]ocialism is gaining adherents even in the upper middle-class and among the oligarchy. One critical component lies in detestation of all things Trump even among CEOs, most of whom, according to a recent Chief Executive survey, want him impeached. Corporate America is increasingly embracing the notion of a guaranteed income and is adopting politically correct positions on such things as immigration, particularly in tech and on Wall Street.

But the most important driver for socialism comes from the burgeoning green movement. Long dominated by the elite classes, environmentalists are openly showing themselves as watermelons — green on the outside, red on the inside. For example, the so called “Green New Deal” — embraced by Sanders, Warren and numerous oligarchs — represents, its author Saikat Chakrabarti suggests, not so much a climate as “a how-do-you-change-the entire-economy thing”. Increasingly greens look at powerful government not to grow the economy, but to slow it down, eliminating highly paid blue-collar jobs in fields like manufacturing and energy. The call to provide subsidies and make work jobs appeals to greens worried about blowback from displaced workers and communities.

Combined with the confused and vacillating nature of our business elites, and the economic stagnation felt by many Americans, socialism in the West is on the rise. An ideology that history would seem to have consigned to Leon Trotsky’s “dustbin of history”, could turn the land that once embraced Adam Smith closer to the vision of Karl Marx. [“The West Turns Red?”, newgeography, February 25, 2020]

I have shown the economic superiority of the Red State model. But that isn’t enough to rescue the country from the perpetual allure of socialism. As I say here,

… States and municipalities governed by Democrats will ever more boldly pursue policies that undermine traditional American culture (e.g., unabated encouragement of illegal immigration, accelerated favoritism toward “identity groups”) and which are broadly destructive of the economic and social fabric; for example: persisting in costly, money-losing recycling and composting programs that do nothing for the environment (taking into account the environmental effects of the vehicles and equipment involved); the replacement of fossil-fuel sources of electricity by unreliable and expensive “renewable” sources; encouragement of homelessness by subsidizing it and making it socially acceptable; discouragement of family formation and stability through the continuation and expansion of long-discredited vote-buying welfare programs; openly persecuting conservatives and conservative institutions.

All of that will intensify the divisions between Red and Blue States, and the divisions between Red State governments and the Blue cities within them. But that is a first-order effect.

The second-order effect is to make living in Blue States and cities more onerous for middle-to-low-income earners (and even some among the affluent), who will seek greener (Redder) pastures outside Blue cities and Blue States. But many (most?) of those refugees will not flee because they have come to believe that big government is the cause of their problems. Rather, they (especially the younger, more mobile, and more “socialistic” ones) will flee because they don’t want to suffer the consequences of big government (high taxes, high housing costs, etc.). But, being addicted to the idea that big government is good, and ignorant of the connection between big government and their woes, they will continue to vote for big-government politicians and policies. Thus will Red States and Red cities gradually turn Purple and, in many cases, Blue.

You read it here.


SERFDOM IN OUR FUTURE?

I recently mused about Walter Scheidel’s book, The Great Leveler. Kotkin addresses the thesis of that book in “Who Will Prosper After the Plague?” (Tablet, April 13, 2020):

[T]he wreckage [caused by the Black Plague of the 14th century] created new opportunities for those left standing. Abandoned tracts of land could be consolidated by rich nobles, or, in some cases, enterprising peasants, who took advantage of sudden opportunities to buy property or use chronic labor shortages to demand higher wages. “In an age where social conditions were considered fixed,” historian Barbara Tuchman has suggested, the new adjustments seemed “revolutionary.”

What might such “revolutionary” changes look like in our post-plague society? In the immediate future the monied classes in America will take a big hit, as their stock portfolios shrink, both acquisitions and new IPOs get sidetracked and the value of their properties drop. But vast opportunities for tremendous profit [will be] available to those with the financial wherewithal to absorb the initial shocks and capitalize on the disruption they cause….

Over time, the crisis is likely to further bolster the global oligarchal class. The wealthiest 1% already own as much as 50% of the world’s assets, and according to a recent British parliamentary study, by 2030, will expand their share to two-thirds of the world’s wealth with the biggest gains overwhelmingly concentrated at the top 0.01%….

The biggest long-term winner of the stay-at-home trend may well be Amazon, which is hiring 100,000 new workers. But other digital industries will profit as well, including food delivery services, streaming entertainment services, telemedicine, biomedicine, cloud computing, and online education. The shift to remote work has created an enormous market for applications, which facilitate video conferencing and digital collaboration like Slack—the fastest growing business application on record—as well as Google Hangouts, Zoom, and Microsoft Teams. Other tech firms, such as Facebook, game makers like Activision Blizzard and online retailers like Chewy, suggests Morgan Stanley, also can expect to see their stock prices soar as the pandemic fades and public acceptance of online commerce and at-home entertainment grows with enforced familiarity.

Growing corporate concentration in the technology sector, both in the United States and Europe, will enhance the power of these companies to dominate commerce and information flows….

The modern-day clerisy consisting of academics, media, scientists, nonprofit activists, and other members of the country’s credentialed bureaucracy also stand to benefit from the pandemic. The clerisy operate as what the great German sociologist Max Weber called “the new legitimizers,” bestowing an air of moral and technocratic authority on the enterprises of their choosing….

Members of the clerisy are likely to be part of the one-quarter of workers in the United States who can largely work at home. Barely 3% of low-wage workers can telecommute but nearly 50% of those in the upper middle class can. While workers at most restaurants and retail outlets face hard times, professors and teachers will continue their work online, as will senior bureaucrats….

The biggest winners in the fallout from the coronavirus are likely to be large corporations, Wall Street, Silicon Valley, and government institutions with strong lobbies. The experience from recent recessions indicates that big banks, whose prosperity is largely asset-based, will do well along with major corporations, while Main Street businesses and ordinary homeowners will fare poorly….

In the Middle Ages, many former citizens, facing a series of disasters from plagues to barbarian invasions, willingly became serfs. Today, the class of permanently propertyless citizens seems likely to grow as the traditional middle class shrinks, and the role of labor is further diminished relative to that of technology and capital.

In contrast to the old unionized workers, many people today, whether their employment is full-time or part-time, have descended into the precariat, a group of laborers with limited control over how long they can work, who often live on barely subsistence wages. Nearly half of gig workers in California live under the poverty line.

Now comes the payoff:

Historically, pandemics have tended to spark class conflict. The plague-ravaged landscape of medieval Europe opened the door to numerous “peasant rebellions.” This in turn led the aristocracy and the church to restrict the movements of peasants to limit their ability to use the new depopulated countryside to their own advantage. Attempts to constrain the ambitions of the commoners often led to open revolts—including against the church and the aristocracy.

… As steady and well-paying jobs disappear, the demands for an ever more extensive welfare state, funded by the upper classes, will multiply.

Like their counterparts in the late 19th century, the lower-class workforce will demand changes. We already see this in the protests by workers at Instacart delivery service, and in Amazon warehouse workers concerned about limited health insurance, low wages, and exposure to the virus.

As the virus threatens to concentrate wealth and power even more, there’s likely to be some sort of reckoning, including from the increasingly hard-pressed yeomanry.

In the years before the great working-class rebellions of the mid-19th century, Alexis de Tocqueville warned that the ruling orders were “sleeping on a volcano.” The same might be seen now as well, with contagion pushing the lava into the streets, and causing new disruptions on a scale of which we can’t predict.

Something like socialism (for non-elites) may emerge from the rubble. It will be the 21st-century equivalent of bread and circuses: share just enough of the wealth to keep the proletariat in line.

SOMETHING CHEERFUL

I have trained a Pandora station to play popular songs from the 1920s and early 1930s. I wrote about the music of that era in an earlier post, “It’s Time to Revive 1920s’ Jazz”. Here’s the post in its entirety (with some updated links), plus an addendum about a long-forgotten singer who will brighten your day.

I often wonder why the popular jazz of the 1920s, which faded in the mid-1930s, isn’t still widely popular. It’s rhythmically inventive, driving, and upbeat — as opposed to the monotonous and often dreary, dissonant, and unmelodic droning of what later became known as jazz. (I’m not writing here about the New Orleans style of jazz, which is a genre of its own, and has never died out. If you’re unsure of the distinction, click on the links at the end of this post.)

The jazz of the ’20s (and early-to-mid-’30s) evolved into the swing of the ’30s and ’40s. Swing evolved into the ponderous big-band sound of the ’40s and ’50s.

Rhythmically inventive, driving, and upbeat popular music returned in the mid-’50s, with the birth of rock and roll. The Beatles and their ilk put a twist on rock and roll, and the genre evolved into what is known as classic rock — the sound that dominated the mid-’60s to early ’70s. Its variants — some of them close to the classic sound — survive and thrive to this day.

But nothing — with the possible exception of early swing — has yet rivaled the musical sophistication of ’20s jazz. Bands led by the likes of Red Allen, Bix Beiderbecke, Johnny Dodds, the early Duke Ellington, Jean Goldkette, Fletcher Henderson, Isham Jones, Vincent Lopez, Jelly Roll Morton, Red Nichols, King Oliver, and Paul Whiteman (to name only a small representation) recorded thousands of foot-stomping tunes (plus innumerable blues, ballads, novelty tunes, and other non-jazzy material).

It is de rigueur in some musical circles to deride the offerings of the larger ensembles, such as those led by several of the band leaders mentioned above. But their tight orchestrations delivered as much toe-tapping vitality as anything offered up by smaller groups.

For a feast of ’20s jazz — and much more — go to The Red Hot Jazz Archive, tap your toes, and lighten your spirit. (RealPlayer required.)

One of my favorites, which number in the hundreds, is “Dinah”. Not a jazzy song, you say? Well, dig these variations on a theme:

Cliff Edwards (1925)

Jean Goldkette (1926)

Joe Venuti (1928)

Red Nichols (1929)

Louis Armstrong (1930)

Bing Crosby with the Mills Brothers (1932) (After a ballad-y start, Bing rips into it. Bing as you’ve probably never heard him.)

The Boswell Sisters (1934) (The Bozzies followed Bing’s lead.)

Quintette of the Hot Club of France (1934)

Fats Waller (1935)

And feast your ears on this long anthology of Bix Beiderbecke’s recordings. Beiderbecke crammed a long lifetime of music into his brief 28 years.

Now for the long-forgotten singer: Annette Hanshaw. Until I set up my Pandora station, which I call Upbeat, I hadn’t heard of her. The only female singers of that day whose works I was familiar with were Ruth Etting, Helen Morgan, Bessie Smith, and Ma Rainey. Smith and Rainey were blues singers. Etting and Morgan were identified (in retrospect, at least) with torch songs: Etting with “Body and Soul”, Morgan with “Bill”. Hanshaw’s renderings of “Body and Soul” and “Bill” are vocally superior to those of Etting and Morgan, though less torchy.

Hanshaw’s sprightly soprano is simply too cheerful to be subdued by songs of longing and regret. It’s not a perfect voice (the lowest notes are weak). But it’s such a happy voice that I must share some links to a few of the dozens of her songs that I’ve heard on my Upbeat station:

“Am I Blue?” (a happy blues song with Hanshaw at the mic)

“Button Up Your Overcoat” (in her Betty Boop/Helen Kane voice)

“Get Out and Get Under the Moon”

“Happy Days Are Here Again”

“I Can’t Give You Anything But Love”

“I Get the Blues When It Rains” (more happy blues)

“I Love a Ukelele”

“I’ve Got a Feeling I’m Falling”

“Let’s Fall in Love” (as close to a plaintive sound as she gets)

For many, many more songs by Annette Hanshaw, go here and here.

P.S. After I had added the paragraphs about Annette Hanshaw, I found John Wilson’s article, “The Influence of Jazz on Modern Singing” (The New York Times, August 20, 1990). It’s a review of a book about jazz singing. This is spot-on (except for the part about Hanshaw’s age, as discussed below):

There is a fascinatingly abbreviated saga of Annette Hanshaw, “the Louise Brooks of jazz,” as Mr. Friedwald calls her. She was born wealthy: “no two-bit manager ever cracked the whip and tried to Svengali her into sounding like everybody else.” She became a star through her records, starting in 1926, by using the musical idiom of the 20’s in what Mr. Friedwald terms “a creative modern way.” Although she had virtually no ambition, despised being a celebrity and was terrified of audiences, in her eight years of recording she made “the period’s most consistently excellent series of female vocal records outside the blues idiom.” She retired permanently at the age of 28 and lived happily for 50 more years without singing.

Actually, Hanshaw retired permanently after a 1937 radio appearance, when she was 36. She recorded her final disc, her only recording of that year, in 1934 (“Let’s Fall in Love” is on the A side). The mistake about her age stems from an error made somewhere along the line, and apparently never corrected by Hanshaw: that she was born in 1910, though her real birth year was 1901. The error persisted in the Times’s 1985 obituary for Hanshaw, and wasn’t rectified until many years after her death. In any event, she did live for 50 years after her final recording, but she was almost 84 when she died, not 74, as the Times’s obituary says, or 78, as Wilson says.

A FOOTNOTE TO “PEAK CIVILIZATION”

I ended that post with this:

Every line of human endeavor reaches a peak, from which decline is sure to follow if the things that caused it to peak are mindlessly rejected for the sake of novelty (i.e., rejection of old norms just because they are old). This is nowhere more obvious than in the arts.

It should be equally obvious to anyone who takes an objective look at the present state of American society and is capable of comparing it with American society of the 1940s and 1950s. For all of its faults it was a golden age. Unfortunately, most Americans now living (Noah Smith definitely included) are too young and too fixated on material things to understand what has been lost — irretrievably, I fear.

My point is underscored by Annabelle Timsit, writing at Quartz:

The endless stretch of a lazy summer afternoon. Visits to a grandparent’s house in the country. Riding your bicycle through the neighborhood after dark. These were just a few of the revealing answers from more than 400 Twitter users in response to a question: “What was a part of your childhood that you now recognize was a privilege to have or experience?”

That question, courtesy of writer Morgan Jerkins, revealed a poignant truth about the changing nature of childhood in the US: The childhood experiences most valued by people who grew up in the 1970s and 1980s are things that the current generation of kids are far less likely to know.

That’s not a reference to cassette tapes, bell bottoms, Blockbuster movies, and other items popular on BuzzFeed listicles. Rather, people are primarily nostalgic for a youthful sense of independence, connectedness, and creativity that seems less common in the 21st century. The childhood privileges that respondents seemed to appreciate most in retrospect fall into four broad categories:

“Riding my bike at all hours of the day into the evening throughout many neighborhoods without being stopped or asked what I was doing there,” was one Twitter user’s answer to Jerkins’ question. Another commenter was grateful for “summer days & nights spent riding bikes anywhere & everywhere with friends, only needing to come home when the streetlights came on,” while yet another recalled “having a peaceful, free-range childhood.” Countless others cited the freedom to explore—with few restrictions—as a major privilege of their childhood.


For many of today’s children, that privilege is disappearing. American children have less independence and autonomy today than they did a few generations ago. As parents have become increasingly concerned with safety, fewer children are permitted to go exploring beyond the confines of their own backyard. Some parents have even been prosecuted or charged with neglect for letting their children walk or play unsupervised. Meanwhile, child psychologists say that too many children are being ushered from one structured activity to the next, always under adult supervision—leaving them with little time to play, experiment, and make mistakes.

That’s a big problem. Kids who have autonomy and independence are less likely to be anxious, and more likely to grow into capable, self-sufficient adults. In a recent video for The Atlantic, Julie Lythcott-Haims, author of How to Raise an Adult, argues that so-called helicopter parents “deprive kids the chance to show up in their own lives, take responsibility for things and be accountable for outcomes.”

That message seems to be gaining traction. The state of Utah, for example, recently passed a “free-range” parenting law meant to give parents the freedom to send kids out to play on their own.

“Bravo!” to the government of Utah.

Transport yourself back three decades from the 1970s and 1980s to the 1940s and 1950s, when I was a child and adolescent, and the contrast between then and now is even more stark than the contrast noted by Timsit.

And it has a lot to do with the social ruin that has been visited upon America by the spoiled (cosseted) children of capitalism.


Other related posts:

Ghosts of Thanksgiving Past
The Passing of Red Brick Schoolhouses and a Way of Life
An Ideal World
‘Tis the Season for Nostalgia
Another Look into the Vanished Past
Whither (Wither) Classical Liberalism and America?

PEAK CIVILIZATION?

Here is an oft-quoted observation, spuriously attributed to Socrates, about youth:

The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize their teachers.

Even though Socrates didn’t say it, the sentiment has been stated and restated since 1907, when the observation was concocted, and similar sentiments had probably circulated for decades, even centuries, before that. I use a form of it when I discuss the spoiled children of capitalism (e.g., here).

Is there something to it? No and yes.

No, because rebelliousness and disrespect for elders and old ways seem to be part of the natural processes of physical and mental maturation.

Not all adolescents and young adults are rebellious and disrespectful. But many rebellious and disrespectful adolescents and young adults carry their attitudes with them through life, even if less obviously than in youth, as they climb the ladders of various callings. The callings that seem to be most attractive to the rebellious are the arts (especially the written, visual, thespian, terpsichorean, musical, and cinematic ones), the professoriate, the punditocracy, journalism, and politics.

Which brings me to the yes answer, and to the spoiled children of capitalism. Rebelliousness, though in some persons never entirely outgrown or suppressed by maturity, is more often outgrown or suppressed in economically tenuous conditions, the challenges of which almost fully occupy body and mind. (Opinionizers and sophists were accordingly much thinner on the ground in the parlous days of yore.)

However, as economic growth and concomitant technological advances have yielded abundance far beyond the necessities of life for most inhabitants of the Western world, the beneficiaries of that abundance have acquired yet another luxury: the luxury of learning about and believing in systems that, in the abstract, seem to offer vast improvements on current conditions. It is the old adage “Idle hands are the devil’s tools” brought up to date, with “minds” joining “hands” in the devilishness.

Among many bad things that result from such foolishness (e.g., the ascendancy of ideologies that crush liberty and, ironically, economic growth) is the loss of social cohesion. I was reminded of this by Noah Smith’s fatuous article, “The 1950s Are Greatly Overrated“.

Smith is an economist who blogs and writes an opinion column for Bloomberg News. My impression of him is that he is a younger version of Paul Krugman, the former economist who has become a left-wing whiner. The difference between them is that Krugman remembers the 1950s fondly, whereas Smith does not.

I once said this about Krugman’s nostalgia for the 1950s, a decade during which he was a mere child:

[The nostalgia] is probably rooted in golden memories of his childhood in a prosperous community, though he retrospectively supplies an economic justification. The 1950s were (according to him) an age of middle-class dominance before the return of the Robber Barons who had been vanquished by the New Deal. This is zero-sum economics and class warfare on steroids — standard Krugman fare.

Smith, a mere toddler relative to Krugman and a babe in arms relative to me, takes a dim view of the 1950s:

For all the rose-tinted sentimentality, standards of living were markedly lower in the ’50s than they are today, and the system was riddled with vast injustice and inequality.

Women and minorities are less likely to have a wistful view of the ’50s, and with good reason. Segregation was enshrined in law in much of the U.S., and de facto segregation was in force even in Northern cities. Black Americans, crowded into ghettos, were excluded from economic opportunity by pervasive racism, and suffered horrendously. Even at the end of the decade, more than half of black Americans lived below the poverty line.

Women, meanwhile, were forced into a narrow set of occupations, and few had the option of pursuing fulfilling careers. This did not mean, however, that a single male breadwinner was always able to provide for an entire family. About a third of women worked in the ’50s, showing that many families needed a second income even if it defied the gender roles of the day.

For women who didn’t work, keeping house was no picnic. Dishwashers were almost unheard of in the 1950s, few families had a clothes dryer, and fewer than half had a washing machine.

But even beyond the pervasive racism and sexism, the 1950s wasn’t a time of ease and plenty compared to the present day. For example, by the end of the decade, even after all of that robust 1950s growth, the white poverty rate was still 18.1%, more than double that of the mid-1970s.

Nor did those above the poverty line enjoy the material plenty of later decades. Much of the nation’s housing stock in the era was small and cramped. The average floor area of a new single-family home in 1950 was only 983 square feet, just a bit bigger than the average one-bedroom apartment today.

To make matters worse, households were considerably larger in the ’50s, meaning that big families often had to squeeze into those tight living spaces. Those houses also lacked many of the things that make modern homes comfortable and convenient — not just dishwashers and clothes dryers, but air conditioning, color TVs and in many cases washing machines.

And those who did work had to work significantly more hours per year. Those jobs were often difficult and dangerous. The Occupational Safety and Health Administration wasn’t created until 1971. As recently as 1970, the rate of workplace injury was several times higher than now, and that number was undoubtedly even higher in the ’50s. Pining for those good old factory jobs is common among those who have never had to stand next to a blast furnace or work on an unautomated assembly line for eight hours a day.

Outside of work, the environment was in much worse shape than today. There was no Environmental Protection Agency, no Clean Air Act or Clean Water Act, and pollution of both air and water was horrible. The smog in Pittsburgh in the 1950s blotted out the sun. In 1952 the Cuyahoga River in Cleveland caught fire. Life expectancy at the end of the ’50s was only 70 years, compared to more than 78 today.

So life in the 1950s, though much better than what came before, wasn’t comparable to what Americans enjoyed even two decades later. In that space of time, much changed because of regulations and policies that reduced or outlawed racial and gender discrimination, while a host of government programs lowered poverty rates and cleaned up the environment.

But on top of these policy changes, the nation benefited from rapid economic growth both in the 1950s and in the decades after. Improved production techniques and the invention of new consumer products meant that there was much more wealth to go around by the 1970s than in the 1950s. Strong unions and government programs helped spread that wealth, but growth is what created it.

So the 1950s don’t deserve much of the nostalgia they receive. Though the decade has some lessons for how to make the U.S. economy more equal today with stronger unions and better financial regulation, it wasn’t an era of great equality overall. And though it was a time of huge progress and hope, the point of progress and hope is that things get better later. And by most objective measures they are much better now than they were then.

See? A junior Krugman who sees the same decade as a glass half-empty instead of half-full.

In the end, Smith admits the irrelevance of his irreverence for the 1950s when he says that “the point of progress and hope is that things get better later.” In other words, if there is progress the past will always look inferior to the present. (And, by the same token, the present will always look inferior to the future when it becomes the present.)

I could quibble with some of Smith’s particulars (e.g., racism may be less overt than it was in the 1950s, but it still boils beneath the surface, and isn’t confined to white racism; stronger unions and stifling financial regulations hamper economic growth, which Smith prizes so dearly). But I will instead take issue with his assertion, which precedes the passages quoted above, that “few of those who long for a return to the 1950s would actually want to live in those times.”

It’s not that anyone yearns for a return to the 1950s as it was in all respects, but for a return to the 1950s as it was in some crucial ways:

There is … something to the idea that the years between the end of World War II and the early 1960s were something of a Golden Age…. But it was that way for reasons other than those offered by Krugman [and despite Smith’s demurrer].

Civil society still flourished through churches, clubs, civic associations, bowling leagues, softball teams and many other voluntary organizations that (a) bound people together and (b) promulgated and enforced social norms.

Those norms proscribed behavior considered harmful — not just criminal, but harmful to the social fabric (e.g., divorce, unwed motherhood, public cursing and sexuality, overt homosexuality). The norms also prescribed behavior that signaled allegiance to the institutions of civil society (e.g., church attendance, veterans’ organizations), thereby helping to preserve them and the values that they fostered.

Yes, it was an age of “conformity”, as sneering sophisticates like to say, even as they insist on conformity to reigning leftist dogmas that are destructive of the social fabric. But it was also an age of widespread mutual trust, respect, and forbearance.

Those traits, as I have said many times (e.g., here), are the foundations of liberty, which is a modus vivendi, not a mystical essence. The modus vivendi that arises from those foundations is peaceful, willing coexistence and its concomitant: beneficially cooperative behavior — liberty, in other words.

The decade and a half after the end of World War II wasn’t an ideal world of utopian imagining. But it approached a realizable ideal. That ideal — for the nation as a whole — has been put beyond reach by the vast, left-wing conspiracy that has subverted almost every aspect of life in America.

What happened was the 1960s — and its long aftermath — which saw the rise of capitalism’s spoiled children (of all ages), who have spat on and shredded the very social norms that in the 1940s and 1950s made the United States of America as united as it ever would be. Actual enemies of the nation — communists — were vilified and ostracized, and that’s as it should have been. And people weren’t banned and condemned by “friends”, “followers”, Facebook, Twitter, etc., for the views that they held. Not even on college campuses, on radio and TV shows, in the print media, or in Hollywood movies.

What do the spoiled children have to show for their rejection of social norms — other than economic progress that is actually far less robust than it would have been were it not for the interventions of their religion-substitute, the omnipotent central government? Omnipotent at home, that is, and impotent (or drastically weakened) abroad, thanks to rounds of defense cuts and perpetual hand-wringing about what the “world” might think, or what some militarily inferior opponents might do, if the U.S. government were to defend Americans and protect their interests abroad.

The list of the spoiled children’s “accomplishments” is impossibly long to recite here, so I will simply offer a very small sample of things that come readily to mind:

California wildfires caused by misguided environmentalism.

The excremental wasteland that is San Francisco. (And Blue cities, generally.)

Flight from California’s wildfires, high taxes, excremental streets, and anti-business environment.

The killing of small businesses, especially restaurants, by imbecilic Blue-State minimum wage laws.

The killing of businesses, period, by oppressive Blue-State regulations.

The killing of jobs for people who need them the most, by ditto and ditto.

Bloated pension schemes for Blue-State (and city) employees, which are bankrupting those States (and cities) and penalizing their citizens who aren’t government employees.

The hysteria (and even punishment) that follows from drawing a gun or admitting gun ownership.

The idea that men can become women and should be allowed to compete with women in athletic competitions because the men in question have endured some surgery and taken some drugs.

The idea that it doesn’t and shouldn’t matter to anyone that a self-identified “woman” uses women’s rest-rooms, where real women and girls become prey for prying eyes and worse.

Mass murder on a Hitlerian-Stalinist scale in the name of a “woman’s right to choose”, when she made that choice by (in almost every case) engaging in consensual sex.

Disrespect for the police and military personnel who keep them safe in their cosseted existences.

Applause for attacks on the same.

Applause for America’s enemies, which the delusional, spoiled children won’t recognize as their enemies until it’s too late.

Longing for impossible utopias (e.g., “true” socialism) because they promise what is actually impossible in the real world — and result in actual dystopias (e.g., the USSR, Cuba, Britain’s National Health Service).

Noah Smith is far too young to remember an America in which such things were almost unthinkable — rather than routine. People then didn’t have any idea how prosperous they would become, or how morally bankrupt and divided.

Every line of human endeavor reaches a peak, from which decline is sure to follow if the things that caused it to peak are mindlessly rejected for the sake of novelty (i.e., rejection of old norms just because they are old). This is nowhere more obvious than in the arts.

It should be equally obvious to anyone who takes an objective look at the present state of American society and is capable of comparing it with American society of the 1940s and 1950s. For all of its faults, it was a golden age. Unfortunately, most Americans now living (Noah Smith definitely included) are too young and too fixated on material things to understand what has been lost — irretrievably, I fear.


I was going to append a list of related posts, but the list would be so long that I can only refer you to “Favorite Posts” — especially those listed in the following sections:

I. The Academy, Intellectuals, and the Left
II. Affirmative Action, Race, and Immigration
IV. Conservatism and Other Political Philosophies
V. The Constitution and the Rule of Law
VI. Economics: Principles and Issues
VIII. Infamous Thinkers and Political Correctness
IX. Intelligence and Psychology
X. Justice
XI. Politics, Politicians, and the Consequences of Government
XII. Science, Religion, and Philosophy
XIII. Self-Ownership (abortion, euthanasia, marriage, and other aspects of the human condition)
XIV. War and Peace

Looking for the Perfect Movie?

You won’t find it at Wikipedia‘s list of films with a 100-percent rating on Rotten Tomatoes, but you might find some movies that are worth watching. The list comprises only films with a critics’ consensus (staff-written summary) or at least 20 reviews. I went down the list and added to it the average rating given each film by users at the Internet Movie Database (IMDb). I also added my ratings of the films that I have seen and rated. (There are several that I have seen but haven’t rated, indicated by *; they’re now too dim in my memory to rate retroactively.)

To be precise, I worked my way down the list, which is organized chronologically, until I got well into the 2000s. I then saw that beginning in the late 1960s, the list became less and less about entertainment and more and more about propagandistic, leftist “documentaries”. I venture to say that every entry after 1999 is of that character. So the table below ends with the most recent movie in the Rotten Tomatoes list that I have seen and rated.

The table lists 198 films, beginning with The Cabinet of Doctor Caligari (1920) and ending with Toy Story 2 (1999). How good are the 198? Well, 195 of them have an IMDb rating of 7.0 or higher; I rarely consider watching a film with a rating below 7.0. Moreover, 71 of the 198 have an IMDb rating of 8.0 or higher; I consider a rating of 8.0 or higher to be a badge of excellence. (You will note that my ratings are generally close to those given by IMDb users, with some notable and glaring exceptions.)

In addition to the following list, you may also wish to consult “Movies” (where, among many things, my rating system is explained) and “A Footnote to ‘Movies’“.

Rotten Tomatoes 100 percent

Fred Rogers: The Anti-Trump

Have you seen A Beautiful Day in the Neighborhood, the quasi-documentary about the life of Fred Rogers of Mister Rogers’ Neighborhood? My children watched the show when they were young, as did tens of millions of other children during its run from 1968 until 2001. Anyway, having now seen the movie, I can understand why it was so popular with young children.

I won’t reprise the film or Rogers’s life. (He died of stomach cancer in 2003, three weeks before his 75th birthday.) Just follow the links in the preceding paragraph if you are curious and know almost nothing about the man or the show. I had glimpsed the show in passing, but never watched it. It was literally “kid stuff” as far as I was concerned.

Having now seen the film, and read a bit about Rogers’s life, I applaud him and what he strove to do for children. What was that? It seems to me that it was to help them cope with the kinds of fears and worries that seem to trouble most children: the fear of dying, the fear of scary things, the fear of having one’s parents divorce, the feeling of being somehow responsible if they do divorce, and on and on.

Rogers’s efforts in that direction were laudable and probably helpful. He certainly wasn’t to be condemned for what some accused him of, which was to inculcate in a generation of children the sense that they were worthy of esteem no matter what they did. I don’t know what motivated such accusations. Perhaps it was part of the backlash against Dr. Spock’s “permissiveness”, with which Rogers could be associated. Perhaps it was Rogers’s rather prissy (public) demeanor, which some mistook for homosexuality. Perhaps it was his evident affection for persons as persons, regardless of their race or sexual orientation. Whatever it was — and it was probably those things and more — it was all misplaced aggression against a man who, in an earlier age, might have been proclaimed a saint.

Thus the belated film tribute, which IMDb summarizes thus:

Two-time Oscar®-winner Tom Hanks portrays Mister Rogers in A Beautiful Day in the Neighborhood, a timely story of kindness triumphing over cynicism, based on the true story of a real-life friendship between Fred Rogers and journalist Tom Junod. After a jaded magazine writer (Emmy winner Matthew Rhys) is assigned a profile of Fred Rogers, he overcomes his skepticism, learning about empathy, kindness, and decency from America’s most beloved neighbor.

Tom Hanks may not be Mr. Clean, but he has that image. Top it off by casting him as the personification of “empathy, kindness, and decency” — who was also a Republican and an ordained minister — and what do you have? The anti-Trump, of course. Or the anti-Trump as Trump is widely perceived, which is what matters.

The only mistake made by the Hollywood types who wrote, produced, directed, and acted in the film (almost certainly anti-Trumpers to the last he, she, and it) was to release the movie almost a year before the presidential election of 2020. Unless the Democrat Party puts up a scold like Bernie Sanders or Elizabeth Warren, a mid-2020 release would have underscored the contrast between Trump and the Democrat nominee. If Pete Buttigieg — the Mr. Nice Guy among the wannabe nominees — were to get the nod, the contrast would have been stark. (The mistaken perception of Rogers as homosexual wouldn’t have hurt, either.)

I am by no means being snide about Fred Rogers, who seems to have deserved all of the respect and adulation that came to him in his lifetime, and all that has followed him into death. But the aura of goodness that surrounds the memory of Rogers contrasts starkly with the bad things that are thought and said about Donald Trump because of his persona and rhetoric. (His persona and rhetoric detract, unfortunately, from the good things that he has done and is doing as president.)

Luckily, A Beautiful Day in the Neighborhood will be largely forgotten before votes are cast in next year’s presidential election. But it is still possible that the vast, squishy center of the electorate — people who would rather vote for “nice” than for their own interests — may reject Trump (and the GOP generally) and enable America’s version of the Thousand-Year Reich.

As Bette Davis‘s character famously said in another movie, “Fasten your seatbelts….”

Not-So-Random Thoughts (XXV)

“Not-So-Random Thoughts” is an occasional series in which I highlight writings by other commentators on varied subjects that I have addressed in the past. Other entries in the series can be found at these links: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII, XIII, XIV, XV, XVI, XVII, XVIII, XIX, XX, XXI, XXII, XXIII, and XXIV. For more in the same style, see “The Tenor of the Times” and “Roundup: Civil War, Solitude, Transgenderism, Academic Enemies, and Immigration“.

CONTENTS

The Real Unemployment Rate and Labor-Force Participation

Is Partition Possible?

Still More Evidence for Why I Don’t Believe in “Climate Change”

Transgenderism, Once More

Big, Bad Oligopoly?

Why I Am Bunkered in My Half-Acre of Austin

“Government Worker” Is (Usually) an Oxymoron


The Real Unemployment Rate and Labor-Force Participation

There was much celebration (on the right, at least) when it was announced that the official unemployment rate, as of November, is only 3.5 percent, and that 266,000 jobs were added to the employment rolls (see here, for example). The exultation is somewhat overdone. Yes, things would be much worse if Obama’s anti-business rhetoric and policies still prevailed, but Trump is pushing a big boulder of deregulation uphill.

In fact, the real unemployment rate is a lot higher than the official figure. I refer you to “Employment vs. Big Government and Disincentives to Work“. It begins with this:

The real unemployment rate is several percentage points above the nominal rate. Officially, the unemployment rate stood at 3.5 percent as of November 2019. Unofficially — but in reality — the unemployment rate was 9.4 percent.

The explanation is that the labor-force participation rate has declined drastically since peaking in January 2000. When the official unemployment rate is adjusted to account for that decline (and for a shift toward part-time employment), the result is a considerably higher real unemployment rate.
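The adjustment described in that excerpt can be sketched in a few lines. This is a minimal illustration of the mechanics only; the function name and every input figure below are placeholders of my own, not actual BLS data or the exact method used in the linked post.

```python
# Hypothetical sketch: compute an "adjusted" unemployment rate by holding
# labor-force participation at its January 2000 peak. All figures below
# are placeholders, not official statistics.

def adjusted_unemployment(employed, working_age_pop, peak_participation):
    """Unemployment rate if the labor force were as large as at peak participation."""
    potential_labor_force = working_age_pop * peak_participation
    return 1.0 - employed / potential_labor_force

# Placeholder values (millions of persons; participation as a fraction):
employed = 158.5
working_age_pop = 260.0
peak_participation = 0.672  # roughly the January 2000 peak

rate = adjusted_unemployment(employed, working_age_pop, peak_participation)
```

The point of the sketch is simply that a falling participation rate shrinks the official denominator, so the official rate understates joblessness relative to what it would be at the participation peak.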

Arnold Kling recently discussed the labor-force participation rate:

[The] decline in male labor force participation among those without a college degree is a significant issue. Note that even though the unemployment rate has come down for those workers, their rate of labor force participation is still way down.

Economists on the left tend to assume that this is due to a drop in demand for workers at the low end of the skill distribution. Binder’s claim is that instead one factor in declining participation is an increase in the ability of women to participate in the labor market, which in turn lowers the advantage of marrying a man. The reduced interest in marriage on the part of women attenuates the incentive for men to work.

Could be. I await further analysis.


Is Partition Possible?

Angelo Codevilla peers into his crystal ball:

Since 2016, the ruling class has left no doubt that it is not merely enacting chosen policies: It is expressing its identity, an identity that has grown and solidified over more than a half century, and that it is not capable of changing.

That really does mean that restoring anything like the Founders’ United States of America is out of the question. Constitutional conservatism on behalf of a country a large part of which is absorbed in revolutionary identity; that rejects the dictionary definition of words; that rejects common citizenship, is impossible. Not even winning a bloody civil war against the ruling class could accomplish such a thing.

The logical recourse is to conserve what can be conserved, and for it to be done by, of, and for those who wish to conserve it. However much force of what kind may be required to accomplish that, the objective has to be conservation of the people and ways that wish to be conserved.

That means some kind of separation.

As I argued in “The Cold Civil War,” the natural, least stressful course of events is for all sides to tolerate the others going their own ways. The ruling class has not been shy about using the powers of the state and local governments it controls to do things at variance with national policy, effectively nullifying national laws. And they get away with it.

For example, the Trump Administration has not sent federal troops to enforce national marijuana laws in Colorado and California, nor has it punished persons and governments who have defied national laws on immigration. There is no reason why the conservative states, counties, and localities should not enforce their own view of the good.

Not even President Alexandria Ocasio-Cortez would order troops to shoot to re-open abortion clinics were Missouri or North Dakota, or any city, to shut them down. As Francis Buckley argues in American Secession: The Looming Breakup of the United States, some kind of separation is inevitable, and the options regarding it are many.

I would like to believe Mr. Codevilla, but I cannot. My money is on a national campaign of suppression, which will begin the instant that the left controls the White House and Congress. Shooting won’t be necessary, given the massive displays of force that will be ordered from the White House, ostensibly to enforce various laws, including but far from limited to “a woman’s right to an abortion”. Leftists must control everything because they cannot tolerate dissent.

As I say in “Leftism“,

Violence is a good thing if your heart is in the “left” place. And violence is in the hearts of leftists, along with hatred and the irresistible urge to suppress that which is hated because it challenges leftist orthodoxy — from climate skepticism and the negative effect of gun ownership on crime to the negative effect of the minimum wage and the causal relationship between Islam and terrorism.

There’s more in “The Subtle Authoritarianism of the ‘Liberal Order’“; for example:

[Quoting Sumantra Maitra] Domestically, liberalism divides a nation into good and bad people, and leads to a clash of cultures.

The clash of cultures was started and sustained by so-called liberals, the smug people described above. It is they who — firmly believing themselves to be smarter, on the side of science, and on the side of history — have chosen to be the aggressors in the culture war.

Hillary Clinton’s remark about Trump’s “deplorables” ripped the mask from the “liberal” pretension to tolerance and reason. Clinton’s remark was tantamount to a declaration of war against the self-appointed champion of the “deplorables”: Donald Trump. And war it has been, much of it waged by deep-state “liberals” who cannot entertain the possibility that they are on the wrong side of history, and who will do anything — anything — to make history conform to their smug expectations of it.


Still More Evidence for Why I Don’t Believe in “Climate Change”

This is a sequel to an item in the previous edition of this series: “More Evidence for Why I Don’t Believe in Climate Change“.

Dave Middleton debunks the claim that 50-year-old climate models correctly predicted the subsequent (but not steady) rise in the globe’s temperature (whatever that is). He then quotes a talk by Dr. John Christy of the University of Alabama-Huntsville Climate Research Center:

We have a change in temperature from the deep atmosphere over 37.5 years, we know how much forcing there was upon the atmosphere, so we can relate these two with this little ratio, and multiply it by the ratio of the 2x CO2 forcing. So the transient climate response is to say, what will the temperature be like if you double CO2– if you increase at 1% per year, which is roughly what the whole greenhouse effect is, and which is achieved in about 70 years. Our result is that the transient climate response in the troposphere is 1.1 °C. Not a very alarming number at all for a doubling of CO2. When we performed the same calculation using the climate models, the number was 2.31°C. Clearly, and significantly different. The models’ response to the forcing – their ∆t here, was over 2 times greater than what has happened in the real world….

There is one model that’s not too bad, it’s the Russian model. You don’t go to the White House today and say, “the Russian model works best”. You don’t say that at all! But the fact is they have a very low sensitivity to their climate model. When you look at the Russian model integrated out to 2100, you don’t see anything to get worried about. When you look at 120 years out from 1980, we already have 1/3 of the period done – if you’re looking out to 2100. These models are already falsified [emphasis added], you can’t trust them out to 2100, no way in the world would a legitimate scientist do that. If an engineer built an aeroplane and said it could fly 600 miles and the thing ran out of fuel at 200 and crashed, he might say: “I was only off by a factor of three”. No, we don’t do that in engineering and real science! A factor of three is huge in the energy balance system. Yet that’s what we see in the climate models….

Theoretical climate modelling is deficient for describing past variations. Climate models fail for past variations, where we already know the answer. They’ve failed hypothesis tests and that means they’re highly questionable for giving us accurate information about how the relatively tiny forcing … will affect the climate of the future.
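The ratio calculation Christy describes can be expressed in a couple of lines. A hedged sketch: the inputs below are illustrative placeholders, not the figures from his analysis (3.7 W/m² is merely a commonly cited value for the forcing from doubled CO2).

```python
# Sketch of the transient-climate-response ratio described in the quoted talk:
# scale observed warming per unit of forcing up to the forcing from doubled CO2.
# All inputs are illustrative placeholders, not Christy's actual numbers.

def transient_climate_response(delta_t, forcing, forcing_2xco2):
    """TCR = (temperature change / applied forcing) * doubled-CO2 forcing."""
    return (delta_t / forcing) * forcing_2xco2

FORCING_2XCO2 = 3.7  # W/m^2, a commonly cited value for a CO2 doubling

tcr_observed = transient_climate_response(0.5, 1.7, FORCING_2XCO2)   # placeholder observations
tcr_models = transient_climate_response(1.05, 1.7, FORCING_2XCO2)    # placeholder model-average
```

With any such inputs, a modeled temperature change roughly twice the observed one yields a TCR roughly twice as large — which is the discrepancy the talk emphasizes.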

For a lot more in this vein, see my pages “Climate Change” and “Modeling and Science“.


Transgenderism, Once More

Theodore Dalrymple (Anthony Daniels, M.D.) is on the case:

The problem alluded to in [a paper in the Journal of Medical Ethics] is, of course, the consequence of a fiction, namely that a man who claims to have changed sex actually has changed sex, and is now what used to be called the opposite sex. But when a man who claims to have become a woman competes in women’s athletic competitions, he often retains an advantage derived from the sex of his birth. Women competitors complain that this is unfair, and it is difficult not to agree with them….

Man being both a problem-creating and solving creature, there is, of course, a very simple way to resolve this situation: namely that men who change to simulacra of women should compete, if they must, with others who have done the same. The demand that they should suffer no consequences that they neither like nor want from the choices they have made is an unreasonable one, as unreasonable as it would be for me to demand that people should listen to me playing the piano though I have no musical ability. Thomas Sowell has drawn attention to the intellectual absurdity and deleterious practical consequences of the modern search for what he calls “cosmic justice.”…

We increasingly think that we live in an existential supermarket in which we pick from the shelf of limitless possibilities whatever we want to be. We forget that limitation is not incompatible with infinity; for example, that our language has a grammar that excludes certain forms of words, without in any way limiting the infinite number of meanings that we can express. Indeed, such limitation is a precondition of our freedom, for otherwise nothing that we said would be comprehensible to anybody else.

That is a tour de force typical of the good doctor. In the span of three paragraphs, he addresses matters that I have treated at length in “The Transgender Fad and Its Consequences” (and later in the previous edition of this series), “Positive Rights and Cosmic Justice“, and “Writing: A Guide” (among other entries at this blog).


Big, Bad Oligopoly?

Big Tech is giving capitalism a bad name, as I discuss in “Why Is Capitalism Under Attack from the Right?“, but it’s still the best game in town. Even oligopoly and its big brother, monopoly, aren’t necessarily bad. See, for example, my posts, “Putting in Some Good Words for Monopoly” and “Monopoly: Private Is Better than Public“. Arnold Kling makes the essential point here:

Do indicators of consolidation show us that the economy is getting less competitive or more competitive? The answer depends on which explanation(s) you believe to be most important. For example, if network effects or weak resistance to mergers are the main factors, then the winners from consolidation are quasi-monopolists that may be overly insulated from competition. On the other hand, if the winners are firms that have figured out how to develop and deploy software more effectively than their rivals, then the growth of those firms at the expense of rivals just shows us that the force of competition is doing its work.


Why I Am Bunkered in My Half-Acre of Austin

Randal O’Toole takes aim at the planners of Austin, Texas, and hits the bullseye:

Austin is one of the fastest-growing cities in America, and the city of Austin and Austin’s transit agency, Capital Metro, have a plan for dealing with all of the traffic that will be generated by that growth: assume that a third of the people who now drive alone to work will switch to transit, bicycling, walking, or telecommuting by 2039. That’s right up there with planning for dinner by assuming that food will magically appear on the table the same way it does in Hogwarts….

[W]hile Austin planners are assuming they can reduce driving alone from 74 to 50 percent, it is actually moving in the other direction….

Planners also claim that 11 percent of Austin workers carpool to work, an amount they hope to maintain through 2039. They are going to have trouble doing that as carpooling, in fact, only accounted for 8.0 percent of Austin workers in 2018.

Planners hope to increase telecommuting from its current 8 percent (which is accurate) to 14 percent. That could be difficult as they have no policy tools that can influence telecommuting.

Planners also hope to increase walking and bicycling from their current 2 and 1 percent to 4 and 5 percent. Walking to work is almost always greater than cycling to work, so it’s difficult to see how they plan to magic cycling to be greater than walking. This is important because cycling trips are longer than walking trips and so have more of a potential impact on driving.

Finally, planners want to increase transit from 4 to 16 percent. In fact, transit carried just 3.24 percent of workers to their jobs in 2018, down from 3.62 percent in 2016. Changing from 4 to 16 percent is an almost impossible 300 percent increase; changing from 3.24 to 16 is an even more formidable 394 percent increase. Again, reality is moving in the opposite direction from planners’ goals….

Planners have developed two main approaches to transportation. One is to estimate how people will travel and then provide and maintain the infrastructure to allow them to do so as efficiently and safely as possible. The other is to imagine how you wish people would travel and then provide the infrastructure assuming that to happen. The latter method is likely to lead to misallocation of capital resources, increased congestion, and increased costs to travelers.

Austin’s plan is firmly based on this second approach. The city’s targets of reducing driving alone by a third, maintaining carpooling at an already too-high number, and increasing transit by 394 percent are completely unrealistic. No American city has achieved similar results in the past two decades and none are likely to come close in the next two decades.

Well, that’s the prevailing mentality of Austin’s political leaders and various bureaucracies: magical thinking. Failure is piled upon failure (e.g., more bike lanes crowding out traffic lanes, a hugely wasteful curbside composting plan) because to admit failure would be to admit that the emperor has no clothes.
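O’Toole’s percentage-increase arithmetic is easy to check. Here is a minimal sketch in Python; the share figures are taken directly from the quoted passage:

```python
def pct_increase(old_share: float, new_share: float) -> float:
    """Percentage increase implied by moving from old_share to new_share."""
    return (new_share - old_share) / old_share * 100.0

# Planners' stated goal: transit share rising from 4 to 16 percent.
print(round(pct_increase(4.0, 16.0)))    # 300

# Measured against the actual 2018 share of 3.24 percent.
print(round(pct_increase(3.24, 16.0)))   # 394
```

Both results match the figures in the passage, which underscores the point: the lower the actual starting share, the more fantastic the planners’ target becomes.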

You want to learn more about Austin? You’ve got it:

Driving and Politics (1)
Life in Austin (1)
Life in Austin (2)
Life in Austin (3)
Driving and Politics (2)
AGW in Austin?
Democracy in Austin
AGW in Austin? (II)
The Hypocrisy of “Local Control”
Amazon and Austin


“Government Worker” Is (Usually) an Oxymoron

In “Good News from the Federal Government” I sarcastically endorse the move to grant all federal workers 12 weeks of paid parental leave:

The good news is that there will be a lot fewer civilian federal workers on the job, which means that the federal bureaucracy will grind a bit more slowly when it does the things that it does to screw up the economy.

The next day, Audacious Epigone put some rhetorical and statistical meat on the bones of my informed prejudice in “Join the Crooks and Liars: Get a Government Job!“:

That [the title of the post] used to be a frequent refrain on Radio Derb. Though the gag has been made emeritus, the advice is even better today than it was when the Derb introduced it. As he explains:

The percentage breakdown is private-sector 76 percent, government 16 percent, self-employed 8 percent.

So one in six of us works for a government, federal, state, or local.

Which group does best on salary? Go on: see if you can guess. It’s government workers, of course. Median earnings 52½ thousand. That’s six percent higher than the self-employed and fourteen percent higher than the poor shlubs toiling away in the private sector.

If you break down government workers into two further categories, state and local workers in category one, federal workers in category two, which does better?

Again, which did you think? Federal workers are way out ahead, median earnings 66 thousand. Even state and local government workers are ahead of us private-sector and self-employed losers, though.

Moral of the story: Get a government job! — federal for strong preference.

….

Though it is well known that a government gig is a gravy train, opinions of the people with said gigs are embarrassingly low, as the results from several additional survey questions show.

First, how frequently can the government be trusted “to do what’s right”? [“Just about always” and “most of the time” badly trail “some of the time”.]

….

Why can’t the government be trusted to do what’s right? Because the people who populate it are crooks and liars. Asked whether “hardly any”, “not many” or “quite a few” people in the federal government are crooked, the following percentages answered with “quite a few” (“not sure” responses, constituting 12% of the total, are excluded). [Responses of “quite a few” range from 59 percent to 77 percent across an array of demographic categories.]

….

Accompanying a strong sense of corruption is the perception of widespread incompetence. Presented with a binary choice between “the people running the government are smart” and “quite a few of them don’t seem to know what they are doing”, a solid majority chose the latter (“not sure”, at 21% of all responses, is again excluded). [The “don’t know what they’re doing” responses ranged from 55 percent to 78 percent across the same demographic categories.]

Are the skeptics right? Well, most citizens have had dealings with government employees of one kind and another. The “wisdom of crowds” certainly applies in this case.

“Human Nature” by David Berlinski: A Review

I became a fan of David Berlinski, who calls himself a secular Jew, after reading The Devil’s Delusion: Atheism and Its Scientific Pretensions, described on Berlinski’s personal website as

a biting defense of faith against its critics in the New Atheist movement. “The attack on traditional religious thought,” writes Berlinski, “marks the consolidation in our time of science as the single system of belief in which rational men and women might place their faith, and if not their faith, then certainly their devotion.”

Here is most of what I say in “Atheistic Scientism Revisited” about The Devil’s Delusion:

Berlinski, who knows far more about science than I do, writes with flair and scathing logic. I can’t do justice to his book, but I will try to convey its gist.

Before I do that, I must tell you that I enjoyed Berlinski’s book not only because of the author’s acumen and biting wit, but also because he agrees with me. (I suppose I should say, in modesty, that I agree with him.) I have argued against atheistic scientism in many blog posts (see below).

Here is my version of the argument against atheism in its briefest form (June 15, 2011):

  1. In the material universe, cause precedes effect.
  2. Accordingly, the material universe cannot be self-made. It must have a “starting point,” but the “starting point” cannot be in or of the material universe.
  3. The existence of the universe therefore implies a separate, uncaused cause.

There is no reasonable basis — and certainly no empirical one — on which to prefer atheism to deism or theism. Strident atheists merely practice a “religion” of their own. They have neither logic nor science nor evidence on their side — and eons of belief against them.

As for scientism, I call upon Friedrich Hayek:

[W]e shall, wherever we are concerned … with slavish imitation of the method and language of Science, speak of “scientism” or the “scientistic” prejudice…. It should be noted that, in the sense in which we shall use these terms, they describe, of course, an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed. The scientistic as distinguished from the scientific view is not an unprejudiced but a very prejudiced approach which, before it has considered its subject, claims to know what is the most appropriate way of investigating it. [The Counter Revolution Of Science]

As Berlinski amply illustrates and forcibly argues, atheistic scientism is rampant in the so-called sciences. I have reproduced below some key passages from Berlinski’s book. They are representative, but far from exhaustive (though I did nearly exhaust the publisher’s copy limit on the Kindle edition). I have forgone the block-quotation style for ease of reading, and have inserted triple asterisks to indicate (sometimes subtle) changes of topic. [Go to my post for the excerpts.]

On the strength of The Devil’s Delusion, I eagerly purchased Berlinski’s latest book, Human Nature. I have just finished it, and cannot summon great enthusiasm for it. Perhaps that is so because I expected a deep and extended examination of the title’s subject. What I got, instead, was a collection of 23 disjointed essays, gathered (more or less loosely) into seven parts.

Only the first two parts, “Violence” and “Reason”, seem to address human nature, but often tangentially. “Violence” deals specifically with violence as manifested (mainly) in war and murder. The first essay, titled “The First World War”, is a tour de force — a dazzling (and somewhat dizzying) reconstruction of the complex and multi-tiered layering of the historical precedent, institutional arrangements, and personalities that led to the outbreak of World War I.

Aha, I thought to myself, Berlinski is warming to his task, and will flesh out the relevant themes at which he hints in the first essay. And in the second and third essays, “The Best of Times” and “The Cause of War”, Berlinski flays the thesis of Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. But my post, “The Fallacy of Human Progress“, does a better job of it, thanks to the several critics and related sources quoted therein.

Berlinski ends the third essay with this observation:

Men go to war when they think that they can get away with murder.

Which is tantamount to an admission that Berlinski has no idea why men go to war, or would rather not venture an opinion on the subject. There is much of that kind of diffident agnosticism throughout the book, which is captured in his reply to an interviewer’s question in the book’s final essay:

Q. Would you share with us your hunches and suspicions about spiritual reality, the trend in your thinking, if not your firm beliefs?

A. No. Either I cannot or I will not. I do not know whether I am unable or unwilling. The question elicits in me a stubborn refusal. Please understand. It is not an issue of privacy. I have, after all, blabbed my life away: Why should I call a halt here? I suppose that I am by nature a counter-puncher. What I am able to discern of the religious experience often comes about reactively. V. S. Naipaul remarked recently that he found the religious life unthinkable.

He does? I was prompted to wonder. Why does he?

His attitude gives rise to mine. That is the way in which I wrote The Devil’s Delusion: Atheism and Its Scientific Pretensions.

Is there anything authentic in my religious nature?

Beats me.

That is a legitimate reply, but — I suspect — an evasive one.

Returning to the book’s ostensible subject, the second part, “Reason”, addresses human nature mainly in a negative way, that is, by pointing out (in various ways) flaws in the theory of evolution. There is no effort to knit the strands into a coherent theme. The following parts stray even further from the subject of the book’s title, and are even more loosely connected.

This isn’t to say that the book fails to entertain, for it often does that. For example, buried in a chapter on language, “The Recovery of Case”, is this remark:

Sentences used in the ordinary give-and-take of things are, of course, limited in their length. Henry James could not have constructed a thousand-word sentence without writing it down or suffering a stroke. Nor is recursion needed to convey the shock of the new. Four plain-spoken words are quite enough: Please welcome President Trump.

(I assume, given Berlinski’s track record for offending “liberal” sensibilities, that the italicized words refer to the shock of Trump’s being elected, and are not meant to disparage Trump.)

But the book also irritates, not only by its failure to deliver what the title seems to promise, but also by Berlinski’s proclivity for using the abstruse symbology of mathematical logic where words would do quite nicely and more clearly. In the same vein — showing off — is the penultimate essay, “A Conversation with Le Figaro“, which reproduces (after an introduction by Berlinski) a transcript of the interview — in French, with not a word of translation. Readers of the book will no doubt be more schooled in French than the typical viewer of prime-time TV fare, but many of them will be in my boat. My former fluency in spoken and written French has withered with time, and although I could still manage, with effort, to decipher the meaning of the transcript, it proved not to be worth the effort, so I gave up on it.

There comes a time when once-brilliant persons can summon flashes of their old, brilliant selves but can no longer emit a sustained ray of brilliance. Perhaps that is true of Berlinski. I hope not, and will give him another try if he gives writing another try.

Society, Culture, and America’s Future

There is much lamentation (from the right, at least) about the disintegration of American society, the culture war being waged by the left, and the future of America. I have done more than my share of lamenting. The purpose of this post isn’t to increase this blog’s lamentation quotient (though it probably will do that), but to take a step back and consider the meanings of “society” and “culture” as they apply to America. After having done that, I will consider the implications for the future of America.

Society and culture are intertwined. Society is usually defined as

an enduring and cooperating social group whose members have developed organized patterns of relationships through interaction with one another.

Culture is the collection of customs, rituals, and norms (religious and secular) that give a society its identity, and the observance of which marks individual persons as members of that society; thus:

Culture is the protection and nurturing of an identity that marks out how a given group (national, racial, social or whatever) ritualizes and cultivates its identity, gives it form and significance, and defines individuals as members of that group. Culture is not about what we do but the manner in which we do it and how a group defines itself by embellishing the gifts of nature.

Changes in society lead to changes in culture, and conversely. A good example, but hardly the only one of its kind, is Hitler’s exploitation of aspects of traditional German culture to build unblinking allegiance to Germany and to its leader (führer). The trait of fastidiousness was exploited to support the removal of “unclean” elements: Communists, Jews, Gypsies, and persons with mental and physical defects.

Societies and cultures in America can be likened to its topography. There are mountains, hills, rolling countryside, and flat land. The difference between a huge mountain and a somewhat smaller one is imperceptible — they are both mountains. But at some arbitrary point, a hump on the surface of the earth is called a hill instead of a mountain. This regression continues until hills are replaced by rolling countryside, and rolling countryside is replaced by flat land. There are no definite lines of demarcation between these various features, but the viewer usually knows which of them he is looking at.

Thus a person can tell the difference between a society-cum-culture that consists of impoverished inner-city blacks and one that revolves around a posh, all-white enclave. There are gradations between the two, and myriad overlapping memberships among those gradations, but the two are as distinct as the Rocky Mountains and the flatness of Florida.

Between the extremes, there are, of course, some distinct societal-cultural groupings; for example: Orthodox Jewish sects, Amish and Mennonite settlements, intellectually and culturally insular academic archipelagos, the remnants of enclaves formed by immigrants from Europe in the late 19th and early 20th centuries, and communities of later immigrants from Asia and Central America. But — to sustain the metaphor — America for a long time had been mainly flat land, which spanned not only the earliest (non-Indian) settlers and their descendants but also most of the descendants of the European immigrants.

To change the metaphor, the societal and cultural landscape of America was for a very long time largely amorphous, which was a blessing and a curse. It was a blessing because the interchangeability of the units meant that the divisions between them weren’t as deep as those between, say, Israel and Palestine, Northern Ireland and Eire (before the Republic went secular), the Basques and their neighbors, or the Kurds and the Turks. (The Civil War and its long aftermath of regional antipathy wouldn’t have happened but for the rabble-rousing rhetoric of pro-slavery and anti-slavery elites.)

The curse was that the growth of mass media (movies, radio, TV) and the advent of social media enabled rapid cultural change — change that hadn’t been tested in the acid of use and adopted because it made life better. It was change for the sake of change, which is a luxury afforded the beneficiaries of capitalism.

Take “noise”, for example — and by “noise” I mean sound, light, and motion — usually in combination. There are pockets of serenity to be sure, but the amorphous majority wallows in noise: in homes with blaring TVs; in stores, bars, clubs, and restaurants with blaring music, TVs, and light displays; in movies (which seem to be dominated by explosive computer graphics), in sports arenas (from Olympic and major-league venues down to minor-league venues, universities, and schools); and on and on.

I remember well the days before incessant noise. It wasn’t just that the electro-mechanical sources of noise were far less prevalent in those days, it was also that people simply weren’t as noisy (or demonstrative).

The prevalence of noise is telling evidence of the role of mass media in cultural change. Where culture is “thin” (the vestiges of the past have worn away) it is susceptible of outside influence. And where culture is thin, the edges of society are indistinct — one flows seamlessly into another. Thus the ease with which huge swaths of the amorphous majority were seduced, not just by noise but by leftist propaganda. The seduction was aided greatly by the parallel, taxpayer-funded efforts of public-school “educators” and the professoriate.

Thus did the amorphous majority bifurcate. (I locate the beginning of the bifurcation in the 1960s.) Those who haven’t been seduced by leftist propaganda have instead become resistant to it. This resistance to nanny-statism — the real resistance in America — seems to be anchored by members of that rapidly dwindling lot: adherents and practitioners of religion, especially between the two Left Coasts.

That they are also adherents of traditional social norms (e.g., marriage can only be between a man and a woman), upholders of the Second Amendment, and (largely) “blue collar” makes them a target of sneering (e.g., Barack Obama called them “bitter clingers”; Hillary Clinton called them “deplorables”). That kind of sneering is a socially divisive form of superiority-signaling, a result of which was the election of Donald Trump in 2016.

As the faux-resistance against Trump continues, for reasons detailed here, the wedge between the two halves of the once-amorphous mass is driven deeper by the clamor. Continued sneering would add impetus, but vote-hungry Democrats have (for now) curtailed it (and even made populist noises) in the hope of luring some malleable voters to the dark side if the impeachment plot fails.

But the end of the faux-resistance — one way or another — will not reunite the once-amorphous mass. The sneering, which persists on the dark side, will continue. Legislative, executive, and judicial efforts to impose the left’s agenda on the whole of America will persist. Despite all of that, the real resistance might even grow stronger, despite the inevitable conversions to the dark side among the weak-willed. Or it might not, for a reason to which I will come.

The real resistance, it should be noted, pre-dates Trump’s emergence onto the political scene, and could be seen in the candidacies of Barry Goldwater and George Wallace. The real resistance finally made itself felt, electorally, by putting Ronald Reagan into the White House, though his efforts to roll back nanny-statism were hampered by a solid Democrat majority in the House. There was more success later, during the Tea Party era, which enabled congressional resistance to Obama’s leftist agenda. And then, just when the Tea Party movement seemed to have faded away, Trump revived it — in spirit if not in name.

The question is whether a new leader will emerge to ensure the continuation of the real resistance after Trump — whether he leaves the scene by impeachment and conviction, by failure of re-election, or at the end of a second term.

The answer is that as long as a sizeable portion of the populace remains attached to traditional norms — mainly including religion — there will be a movement in search of and in need of a leader. But the movement will lose potency if such a leader fails to emerge.

Were that to happen, something like the old, amorphous society might re-form, but along lines that the remnant of the old, amorphous society wouldn’t recognize. In a reprise of the Third Reich, the freedoms of association, speech, and religion would have been bulldozed with such force that only the hardiest of souls would resist going over to the dark side. And their resistance would have to be covert.

Paradoxically, 1984 may lie in the not-too-distant future, not 35 years in the past. When the nation is ruled by one party (guess which one), foot voting will no longer be possible and the nation will settle into a darker version of the Californian dystopia.

Gillette and Me

My first and last Gillette razor looked like this one:

It was a hand-me-down from my father, and I used it (and Gillette’s double-edged blades) for about a decade. I then — more than 50 years ago — switched to a Schick injector razor. I went through a few of those before I found an off-brand razor-mirror combination for shaving in the shower. I’ve been using it for more than 30 years.

The blades that came with the shower-shaving razor were a knock-off of Gillette’s Trac II. I’ve bought nothing but similar knock-offs since then. So, apart from a pittance in licensing fees (and maybe not even that), Gillette hasn’t made a dime from me in more than 50 years.

That makes me glad because of Gillette’s toxic wokeness, about which Harry Stein writes in the Autumn 2019 issue of City Journal:

If, as we’re often told, corporations aren’t people, Gillette recently did a good job of impersonating one — specifically, an over-the-top campus feminist — with an ad declaring its customers’ defining trait, masculinity, “toxic.” Featuring bullies, sexual harassers, and sociopaths without portfolio, the ad flipped Gillette’s usual tagline to ask: “Is this the best a man can get?” And soon, a Facebook ad followed showing — wait for it — a dad helping his transgender teen shave for the first time.

I missed that because I don’t watch commercial TV, other than 5 minutes a day to catch the local weather forecast (which is a habit but certainly not a necessity these days). It’s a good thing I missed it, or I might have ruined a good TV set by throwing a brick at it.

The good news, according to Stein, is that because of the strongly negative reaction of Gillette’s customers to the “woke” ad campaign, Gillette’s parent company, Procter & Gamble, had written Gillette down by $8 billion this past summer. I would have been among the many consumers who boycotted Gillette products and caused the write-down. But I presciently abandoned Gillette more than 50 years ago.

In Defense of the Oxford Comma

The Oxford comma, also known as the serial comma, is the comma that precedes the last item in a list of three or more items (e.g., the red, white, and blue). Newspapers (among other sinners) eschew the serial comma for reasons too arcane to pursue here. Thoughtful counselors advise its use. (See, for example, Wilson Follett’s Modern American Usage at pp. 422-423.) Why? Because the serial comma, like the hyphen in a compound adjective, averts ambiguity. It isn’t always necessary, but if it is used consistently, ambiguity can be avoided.

Here’s a great example, from the Wikipedia article linked to in the first sentence of this paragraph: “To my parents, Ayn Rand and God”. The writer means, of course, “To my parents, Ayn Rand, and God”.

Kylee Zempel has much more to say in her essay, “Using the Oxford Comma Is a Sign of Grace and Clarity“. It is, indeed.

(For much more about writing, see my page “Writing: A Guide“.)

I Hate to Hear Millennials Speak

My wife and I have a favorite Thai restaurant in Austin. It’s not the best Thai restaurant in our experience. We’ve dined at much better ones in Washington, D.C., and Yorktown, Virginia. The best one, in our book, is in Arlington, Virginia.

At any rate, our favorite Thai restaurant in Austin is very good and accordingly popular. And because Thai food is relatively inexpensive, it draws a lot of twenty- and thirty-somethings.

Thus the air was filled (as usual) with “like”, “like”, “like”, “like”, and more “like”, ad nauseam. It makes me want to stand up and shout “Shut up, I can’t take it anymore”.

The fellow at the next table not only used “like” in every sentence, but had a raspy, penetrating vocal fry, which is another irritating speech pattern of millennials. He was seated so that he was facing in my direction. As a result, I had to turn down my hearing aids to soften the creak that ended his every sentence.

His date (a female, which is noteworthy in Austin) merely giggled at everything he said. It must have been a getting-to-know-you date. The relationship is doomed if she’s at all fussy about “like”. Though it may be that he doesn’t like giggly gals.

Harumph!

That’s today’s gripe. For more gripes, see these posts:

Stuff White (Liberal Yuppie) People Like
Driving and Politics
I’ve Got a Little List
Driving and Politics (2)
Amazon and Austin
Driving Is an IQ Test
The Renaming Mania Hits a New Low
Let the Punishment Deter the Crime

“Will the Circle Be Unbroken?”

That’s the title of the sixth episode of Country Music, produced by Ken Burns et al. The episode ends with a segment about the production of Will the Circle Be Unbroken?, a three-LP album released in 1972, with Mother Maybelle Carter of the original Carter Family taking the lead. I have the album in my record collection. It sits proudly next to a two-LP album of recordings by Jimmie Rodgers, Jimmie Rodgers on Record: America’s Blue Yodeler.

The juxtaposition of the albums is fitting because, as Country Music‘s first episode makes clear, it was the 1927 recordings of Rodgers and the Carters that “made” country music. Country music had been recorded and broadcast live since 1922. But Rodgers and the Carters brought something new to the genre and it caught the fancy of a large segment of the populace.

In Rodgers’s case it was his original songs (mostly of heartbreak and rambling) and his unique delivery, which introduced yodeling to country music. In the Carters’ case it was the tight harmonies of Maybelle Addington Carter and her cousin and sister-in-law, Sara Dougherty Carter, applied to nostalgic ballads old and new (but old-sounding, even if new) compiled and composed mostly by Sara’s then-husband, A.P. Carter, who occasionally chimed in on the bass line. (“School House on the Hill” is a particular favorite of mine. The other songs at the link to “School House …” are great, too.)

Rodgers and the original Carters kept it simple. Rodgers accompanied himself on the guitar; Maybelle and Sara Carter accompanied themselves on guitar and autoharp. And that was it. No electrification or amplification, no backup players or singers, no aural tricks of any kind. What you hear is unadorned, and all the better for it. Only the Bluegrass sound introduced by Bill Monroe could equal it for a true “country” sound. Its fast pace and use of acoustic, stringed instruments harked back to the reels and jigs brought to this land (mainly from the British Isles) by the first “country” people — the settlers of Appalachia and the South.

As for the miniseries, I give it a B, or 7 out of 10. As at least one commentator has said, it’s a good crash course for those who are new to country music, but only a glib refresher course for those who know it well. At 16 hours in length, it is heavily padded with mostly (but not always) vapid commentary by interviewees who were and are, in some way, associated with country music; Burns’s typical and tedious social commentary about the treatment of blacks and women, as if no one knows about those things; and biographical information that really adds nothing to the music.

The biographical information suggests that to be a country singer you must be an orphan from a hardscrabble-poor, abusive home who survived the Great Depression or run-ins with the law. Well, you might think that until you reflect on the fact that little is said about the childhoods of the many country singers who weren’t of that ilk, especially the later ones whose lives were untouched by the Great Depression or World War II.

Based on what I’ve seen of the series thus far (six of eight episodes), what it takes to be a country singer — with the notable exception of the great Hank Snow (a native of Nova Scotia) — is (a) to have an accent that hails from the South, and (b) to sing in a way that emphasizes the accent. A nasal twang seems to be a sine qua non, even though many of the singers who are interviewees don’t speak like they sing. It’s mostly put on, in other words, and increasingly so as regional accents fade away.

The early greats, like Rodgers and the Carters, were authentic, but the genre is becoming increasingly phony. And the Nashville sound and its later variants are abominations.

So, the circle has been broken. And the only way to mend it is to listen to the sounds of yesteryear.

Reflections on the “Feel Good” War

Prompted by my current reading — another novel about World War II — and the viewing of yet another film about Winston Churchill’s leadership during that war.

World War II was by no means a “feel good” war at the time it was fought. But it became one, eventually, as memories of a generation’s blood, toil, tears, and sweat faded away, to be replaced by the consoling fact of total victory. (That FDR set the stage for the USSR’s long dominance of Eastern Europe and status as a rival world power is usually overlooked.)

World War II is a “feel good” war in that it has been and continues to be depicted in countless novels, TV series, and movies as a valiant, sometimes romantic, and ultimately successful effort to defeat evil enemies: Nazi Germany and Imperial Japan. Most of the treatments, in fact, are about the war in Europe against Nazi Germany, because Hitler lingers in the general view as a personification of evil. Also, to the extent that the treatments are about stirring speeches, heroism, espionage, sabotage, and resistance, they are more readily depicted (and more commonly imagined) as the efforts of white Americans, Britons, and citizens of the various European nations that had been conquered by Nazi Germany.

World War II is also a “feel good” war — for millions of Americans, at least — because it is a reminder that the United States, once upon a time, united to fight and decisively won a great war against evil enemies. Remembering it in that way is a kind of antidote to the memories of later wars that left bitterness, divisiveness, and a sense of futility (if not failure) in their wake: Korea, Vietnam, Afghanistan, and Iraq.

That World War II was nothing like a “feel good” war while it was being fought should never be forgotten. Americans got off “lightly” by comparison with the citizens of enemy and Allied nations. But “lightly” means more than 400,000 combat deaths, almost 700,000 combat injuries (too many of them disabling and disfiguring), millions of lives disrupted, the reduction of Americans’ standard of living to near-Depression levels so that vast quantities of labor and materiel could be poured into the war effort, and — not the least of it — the dread that hung over Americans for several years before it became clear that the war would end in the defeat of Nazi Germany and Imperial Japan.

The generations that fought and lived through World War II deserved to look back on it as a “feel good” war, if that was their wont. But my impression — as a grandson, son, and nephew of members of those generations — is that they looked back on it as a part of their lives that they wouldn’t want to relive. They never spoke of it in my presence, and I was “all ears”, as they say.

But there was no choice. World War II had to be fought, and it had to be won. I only hope that if such a war comes along someday, Americans will support it and fight it as fiercely and tenaciously as did their ancestors in World War II. If Americans do fight it fiercely and tenaciously, it will be won. But I am not confident. The character of Americans has changed a lot — mostly for the worse — in the nearly 75 years since the end of World War II.

(See also “A Grand Strategy for the United States“, “Rating America’s Wars“, “The War on Terror As It Should Have Been Fought“, “1963: The Year Zero“, and “World War II As an Aberration“.)

Preposition Proliferation

I have a habit of speech — acquired long ago and hard to shake — which is the unnecessary use of prepositions in phrases like “hurry up” and “stand up”. I don’t write that way, but I still (too often) speak that way.

I hadn’t been conscious of the habit, probably acquired from my parents, until about 30 years ago, when a young computer whiz corrected me when I said something like “open up”. She said that “open” would suffice, and my eyes were (figuratively) opened; that is, for the first time in my life I understood that I had long been guilty of preposition proliferation.

In addition to “hurry up”, “stand up”, and “open up”, there are “fill up”, “lift up”, and dozens of others. I leave it to you to list your favorites.

There are also phrases involving prepositions that aren’t quite wrong, but which are unnecessarily long. Consider, for example, one that is used often: “come in”. It’s really a shorthand way of saying “come into the room/office/house to which you are seeking entry”. So the “in” isn’t meaningless, but it is unnecessary.

“Come” will suffice, as will “enter”. Why aren’t those expressions used as commonly as “come in”? I suspect that it’s because “come in” sounds more cordial than the peremptory “come” and “enter”. That is to say, “come in” is “softer” and more welcoming.

Which brings me back to “hurry up”, “stand up”, and similar phrases. Perhaps the prepositions were added long ago to suggest that the speaker was making a request, not issuing a command. That is, they were added out of politeness.

Perhaps it is politeness that prevents me from giving up (that is, abandoning) the practice of preposition proliferation.

Thinking about the Unthinkable

Thinking about the Unthinkable is the title of a book by Herman Kahn, who, according to an obituary that ran in The Washington Post, believed that

nuclear war would be terrible indeed, but mankind would survive it. Since such wars are bound to take place, it behooves man to prepare for them.

He stated his case in two books that appeared in the early 1960s…. [The first book] argued that the policy of deterrence, known officially as “mutually assured destruction” (MAD), was unworkable. Thus, the techniques of survival must take a large place in policy planning.

The second book [Thinking about the Unthinkable] restated this premise and went on to criticize those who refused to face the possibility of war as acting like “ancient kings who punished messengers who brought them bad news.”

The unthinkable, in this post, isn’t how the United States might (in some fashion) survive a nuclear war, but how the traditional mores of the United States — which are rapidly being suppressed by enemies within — can be preserved and restored to primacy in the nation’s governmental and civil institutions. The possibility that traditional mores will be suppressed is unthinkable enough to most people — including most conservatives, I fear. Even more unthinkable is the “how” of preventing the suppression of traditional mores, because it requires acknowledging (a) that there are enemies within, (b) that they must be treated as enemies, and (c) that they might not be defeated by traditional (electoral) means.

If you are uncomfortable with the proposition that the left (or the organized part of it in politics, the media, academia, and Big Tech) is an enemy, consider the following (typical) report from the latest Democrat presidential debate:

Former Congressman Beto O’Rourke called racism not only “endemic” to America but “foundational.” He explained, “We can mark the creation of this country not at the Fourth of July, 1776, but August 20, 1619, when the first kidnapped African was brought to this country against his will and in bondage and as a slave built the greatness, and the success, and the wealth that neither he nor his descendants would ever be able to participate in or enjoy.”

The villains in the Democratic Party story of America do not remain hundreds of years beyond our reach. Cops, gun owners, factory farmers, employees of insurance and pharmaceutical companies, Wall Street speculators, the oil industry, Republicans, and so many others who, together, constitute the majority of the nation: our Houston Dems do not look to them as fellow countrymen but as impediments, evil impediments in some cases, to realizing their ideological vision. And if that message did not come across in English, several candidates speaking Spanish not comprehended by most viewers nevertheless did not get lost in translation.

That ideological vision includes a doubly unconstitutional confiscation of weapons through executive fiat endorsed by Senator Kamala Harris and O’Rourke (“Hell, yes, we’re going to take your AR-15, your AK-47”), abolition of private health insurance in a bill sponsored by Senators Sanders and Warren, former Vice President Joe Biden’s insistence that “nobody should be in jail for a nonviolent crime,” reparations for slavery supported by O’Rourke, a wealth tax proposed by Warren, Senator Cory Booker’s call to “create an office in the White House to deal with the problem of white supremacy and hate crimes,” Harris demanding that government “de-incarcerate women and children” (even ones who murder?), Andrew Yang wanting to “give every American 100 democracy dollars that you only give to candidates and causes you like,” and the entire stage endorsing open borders, if in muted terms during this debate, and amnesty for illegal immigrants.

That’s just the tip of the ideological iceberg. I urge you to read at least some of the following posts:

Intellectuals and Capitalism
“Intellectuals and Society”: A Review
The Left’s Agenda
The Left and Its Delusions
Politics, Sophistry, and the Academy
Subsidizing the Enemies of Liberty
The Barbarians Within and the State of the Union
The Culture War
Ruminations on the Left in America
The Left and Violence
The Internet-Media-Academic Complex vs. Real Life
The Vast Left-Wing Conspiracy
Leftism
Leftism As Crypto-Fascism: The Google Paradigm
What Is Going On? A Stealth Revolution
“Tribalists”, “Haters”, and Psychological Projection
Utopianism, Leftism, and Dictatorship
Whence Polarization?
Social Norms, the Left, and Social Disintegration
Can Left and Right Be Reconciled?
The Fourth Great Awakening
It’s Them or Us
Conservatism, Society, and the End of America

Many of the themes of those posts are captured in “Not With a Bang“, wherein I say something that I’ve said many times and have come to believe more firmly in recent months.

The advocates of the new dispensation haven’t quite finished the job of dismantling America. But that day isn’t far off. Complete victory for the enemies of America is only a few election cycles away. The squishy center of the electorate — as is its wont — will swing back toward the Democrat Party. With a Democrat in the White House, a Democrat-controlled Congress, and a few party switches in the Supreme Court (or the packing of it), the dogmas of the anti-American culture will become the law of the land; for example:

Billions and trillions of dollars will be wasted on various “green” projects, including but far from limited to the complete replacement of fossil fuels by “renewables”, with the resulting impoverishment of most Americans (except for the comfortable elites who press such policies).

It will be illegal to criticize, even by implication, such things as abortion, illegal immigration, same-sex marriage, transgenderism, anthropogenic global warming, or the confiscation of firearms. These cherished beliefs will be mandated for school and college curricula, and enforced by huge fines and draconian prison sentences (sometimes in the guise of “re-education”).

Any hint of Christianity and Judaism will be barred from public discourse, and similarly punished. Islam will be held up as a model of unity and tolerance.

Reverse discrimination in favor of females, blacks, Hispanics, gender-confused persons, and other “protected” groups will be required and enforced with a vengeance. But “protections” will not apply to members of such groups who are suspected of harboring libertarian or conservative impulses.

Sexual misconduct (as defined by the “victim”) will become a crime, and any male person may be found guilty of it on the uncorroborated testimony of any female who claims to have been the victim of an unwanted glance, touch (even if accidental), innuendo (as perceived by the victim), etc.

There will be parallel treatment of the “crimes” of racism, anti-Islamism, nativism, and genderism.

All health care in the United States will be subject to review by a national, single-payer agency of the central government. Private care will be forbidden, though ready access to doctors, treatments, and medications will be provided for high officials and other favored persons. The resulting health-care catastrophe that befalls most of the populace (like that of the UK) will be shrugged off as a residual effect of “capitalist” health care.

The regulatory regime will rebound with a vengeance, contaminating every corner of American life and regimenting all businesses except those daring to operate in an underground economy. The quality and variety of products and services will decline as their real prices rise as a fraction of incomes.

The dire economic effects of single-payer health care and regulation will be compounded by massive increases in other kinds of government spending (defense excepted). The real rate of economic growth will approach zero.

The United States will maintain token armed forces, mainly for the purpose of suppressing domestic uprisings. Given its economically destructive independence from foreign oil and its depressed economy, it will become a simulacrum of the USSR and Mao’s China — and not a rival to the new superpowers, Russia and China, which will largely ignore it as long as it doesn’t interfere in their pillaging of their respective spheres of influence. A policy of non-interference (i.e., tacit collusion) will be the order of the era in Washington.

Though it would hardly be necessary to rig elections in favor of Democrats, given the flood of illegal immigrants who will pour into the country and enjoy voting rights, a way will be found to do just that. The most likely method will be election laws requiring candidates to pass ideological purity tests by swearing fealty to the “law of the land” (i.e., abortion, unfettered immigration, same-sex marriage, freedom of gender choice for children, etc., etc., etc.). Those who fail such a test will be barred from holding any kind of public office, no matter how insignificant.

Are my fears exaggerated? I don’t think so, given what has happened in recent decades and the cultural revolutionaries’ tightening grip on the Democrat party. What I have sketched out can easily happen within a decade after Democrats seize total control of the central government.

All of it will be done in ways that Democrats will justify in the name of “equality”, “fairness”, “public safety”, and other such shibboleths. (See “An Addendum to (Asymmetrical) Ideological Warfare“.) Bill Vallicella offers an example of how it will be done in his post, “The Grave Danger to the Republic of ‘Red Flag’ Laws“:

Destructive Democrats now label the National Rifle Association a ‘domestic terror organization.’ Mild-mannered Mike of Mesa is a member and receives their publications. His mailman, though, is a flaming lefty. The mailman reports Mike to the government as a domestic terrorist on the ground that anyone who is a member of a terrorist organization is a terrorist. ATF agents break into Mike’s house in the wee hours and seize his one and only firearm, a semi-automatic pistol. A year later, Mike is able to get his gun back, but he must pay all court costs.

Not quite Nazi Germany, but getting there….

The Democrat Party is now a hard-Left party.

Kevin D. Williamson expands on that theme in “The Divine Right of the Democratic Party“:

Michelle Goldberg of the New York Times has a dream, a dream in which about half of the American people are deprived of an effective means of political representation, a dream of one-party government in which the Democrats are the only game in town — “Dare We Dream of the End of the GOP?” her column is headlined — which also is a dream of visiting vengeance upon those who dared to vote for their own interests as they understood them and thereby schemed “to stop the New America from governing.” That quotation is from a new book by Democratic pollster Stanley Greenberg bearing the title R.I.P. G.O.P. Greenberg himself has a new column in the Times on the same theme. “The 2020 election will be transformative like few in our history,” he writes. “It will end with the death of the Republican Party as we know it . . . [and] liberate the Democratic Party from the country’s suffocating polarization and allow it to use government to address the vast array of problems facing the nation.”

We might understand the Goldberg-Greenberg position as “the divine right of Democrats,” who apparently have an eternal moral mandate to rule for reasons that must remain mysterious to those outside the ranks of New York Times columnists….

Restrictions on immigration and abortion, conditions on welfare for the able-bodied, lower taxes and lower spending — these are not positions associated with the Democratic party. But millions of Americans, in some cases majorities and even large majorities, hold these views. They are entitled to political representation, irrespective of the future of the Republican party as an organization. And they will have that representation, whether it goes by the brand name Republican, Liberal, Whig, or Monster Raving Loony (RIP Screaming Lord Sutch). Eliminating the Republican party would not relieve the country of the “polarization” — meaning opposition — that annoys the Goldberg-Greenberg camp.

The only way to achieve that would be through the political suppression of those with dissenting political views.

Which, of course, is the Left’s current agenda, from deputizing Corporate America to act as its political enforcer by making employment contingent upon the acceptance of progressive political orthodoxies to attempting to gut the First Amendment in the name of “campaign finance” regulation — it is the Democratic party, not the moral scolds of the Christian Coalition, that proposes to lock up Americans for showing movies with unauthorized political content — to grievously abusing legislative and prosecutorial powers to harass and persecute those with nonconforming political views (“Arrest Climate-Change Deniers”) and declaring political rivals “domestic terrorists,” as California Democrats have with the National Rifle Association.

Which is to say: It is not only the Republican party as a political grouping they dream of eliminating: It is Republicans as such and those who hold roughly Republican ideas about everything from climate change to gun rights, groups that Democrats in agencies ranging from state prosecutors’ offices to the IRS already — right now, not at some point in some imaginary dystopian future — are targeting through both legal and extralegal means.

The Democrats who are doing this believe themselves to be acting morally, even patriotically, and sometimes heroically. Why? Because they believe that opposition is fundamentally illegitimate.

Eliminating the ability of those who currently align with the Republican party to meaningfully participate in national politics is not only wishful thinking in the pages of the New York Times. It is the progressive program, from Washington to Palo Alto and beyond.

William L. Gensert is more apocalyptic in “No Matter Who Wins in 2020, There Will Be Blood“:

Tone-deaf to [the] silent majority and emboldened by victory, the new [Democrat] president will borrow Barry’s “pen and phone” and start issuing executive orders throwing open our borders, banning fossil fuels, and of course, implementing “common sense” gun control.  Buoyed by media, the new president will start with universal background checks and a gun registry.

Eventually, the president will overreach, signing an order for gun confiscation, euphemistically called, “mandatory buybacks.”  Antifa and their ilk will flood the streets in support of seizing these “weapons of war.”  Media will declare, “It’s the will of the people.”

And for the right, that will be the last straw (plastic or paper).

[M]illions will refuse to give up their guns.  And, many gun owners in this country will not go “meekly into the night,” there will be “rage” against what they will see as a usurpation of their constitutional rights.

Confiscation will go well at first, with gun owners in the cities acquiescing to the knock on the door in the middle of the night and the intimidation of, “Papers please.”

But in flyover country, a different scenario will play out.  Most gun owners will hide their weapons and most local police departments will accept that, not wanting to jail their neighbors.  Resistance will be broad, perhaps encompassing hundreds of millions of Americans.  Barack Obama, for once in the dismal history of his efforts to kill the America we love, will be proven correct.  Americans do “cling to their guns.”

The media will call it “white supremacy,” but a still unregulated internet will be rife with videos of an out of control government battling its own citizens.

The president will call for mobilizing the National Guard.  Some governors will refuse, and army units now overseas will be sent home to deal with the growing unrest.  Mistakes will be made and there will be gunfire in the streets; people will die on both sides.  The president will desperately call for martial law.

Many Army, National Guard, and police will defect, or desert, or simply refuse orders.

What will happen after that is anybody’s guess.

I am less pessimistic about the possibility of widespread violence. But that is because I am realistic about the ability and willingness of a Democrat president to enforce gun confiscation (and more) throughout the nation, with the aid of acquiescent and cowed State governors, and the dozens of federal law-enforcement agencies under his command, including but far from limited to the FBI, the BATF, the DEA, and the U.S. Marshals Service. Only a relatively small number of (literal) die-hards will put up much of a fight, and they will go down fighting.

It can happen here.

Is there a way to prevent it? A year-and-a-half ago I offered a peaceful and constitutional option in “Preemptive (Cold) Civil War” and “Preemptive (Cold) Civil War, Without Delay“. Near the end of the latter post, I quoted a piece by Publius Decius Mus (Michael Anton), “It’s Clear That Conservatism Inc. Wants Trump to Lose“:

I believe the Left, as it increasingly feels its oats, will openly discard the pretense that it need face any opposition. It’s already started. This will rise to a crescendo during the 2020 election, which the Left will of course win, after which it will be open-season on remaining “conservative” dissent. Audits. Investigations. Prosecutions. Regulatory dictates. Media leaks. Denunciations from the bully pulpit. SJW witch-hunts. The whole panoply of persecution tools now at their disposal, plus some they’ve yet to deploy or invent.

Much of that passage covers ground previously covered in this post. The key phrase is “which the Left will of course win”, because Democrats are masters of asymmetrical ideological warfare. And they are expert in the art of “winning” close elections. States that narrowly went for Trump in 2016 can easily be flipped by means fair and foul — and it won’t take many such States to do the trick.

Further, as I noted in the same post,

[t]he squishy center [of the electorate], having been bombarded by anti-Trump propaganda for four years, is just as likely to turn against him as to re-elect him.

I ended with this:

There’s no time to lose. The preemptive (cold) civil war must start yesterday.

But it didn’t. And now the fate of America hinges on the election of 2020.

Unless thinking about the unthinkable includes thinking, quickly and cleverly, about how to defeat the enemy within. And I don’t necessarily mean at the ballot box.

Another Thought about “Darkest Hour”

I said recently about Darkest Hour that

despite Gary Oldman’s deservedly acclaimed, Oscar-winning impersonation of Winston Churchill, [it] earned a rating of 7 from me. It was an entertaining film, but a rather trite offering of Hollywoodized history.

There was a subtle aspect of the film which led me to believe that Churchill’s firm stance against a negotiated peace with Hitler had more support from the Labour Party than from Churchill’s Conservative colleagues. So I went to Wikipedia, which says this (among many things) in a discussion of the film’s historical accuracy:

In The New Yorker, Adam Gopnik wrote: “…in late May of 1940, when the Conservative grandee Lord Halifax challenged Churchill, insisting that it was still possible to negotiate a deal with Hitler, through the good offices of Mussolini, it was the steadfast anti-Nazism of Attlee and his Labour colleagues that saved the day – a vital truth badly underdramatized in the current Churchill-centric film, Darkest Hour“. This criticism was echoed by Adrian Smith, emeritus professor of modern history at the University of Southampton, who wrote in the New Statesman that the film was “yet again overlooking Labour’s key role at the most dangerous moment in this country’s history … in May 1940 its leaders gave Churchill the unequivocal support he needed when refusing to surrender. Ignoring Attlee’s vital role is just one more failing in a deeply flawed film”.

I thought that, if anything, the film did portray Labour as more steadfast than the Tories. First, the Conservatives (especially Halifax and Neville Chamberlain) were made to seem derisive of Churchill and all-too-willing to compromise with Hitler. Second — and here’s the subtlety — at the end of Churchill’s speech to the House of Commons on June 4, 1940, which is made the climactic scene in Darkest Hour, the Labour side of the House erupts in enthusiastic applause, while the Conservative side is subdued until it follows suit.

The final lines of Churchill’s speech are always worth repeating:

Even though large tracts of Europe and many old and famous States have fallen or may fall into the grip of the Gestapo and all the odious apparatus of Nazi rule, we shall not flag or fail. We shall go on to the end, we shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our Island, whatever the cost may be, we shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender, and even if, which I do not for a moment believe, this Island or a large part of it were subjugated and starving, then our Empire beyond the seas, armed and guarded by the British Fleet, would carry on the struggle, until, in God’s good time, the New World, with all its power and might, steps forth to the rescue and the liberation of the old.

If G.W. Bush could have been as adamant in his opposition to the enemy (instead of pandering to the “religion of peace”), and as eloquent in his speech to Congress after 9/11 and at subsequent points in the ill-executed “war on terror”, there might now be a Pax Americana in the Middle East.

(See also “September 20, 2001: Hillary Clinton Signals the End of ‘Unity’“, “The War on Terror As It Should Have Been Fought“, and “A Rearview Look at the Invasion of Iraq and the War on Terror“.)

A Footnote to “Movies”

I noted here that I’ve updated my “Movies” page. There’s a further update. I’ve added a list of my very favorite films — the 69 that I’ve rated a 10 or 9 (out of 10). The list is reproduced below, complete with links to IMDb pages so that you can look up a film with which you may be unfamiliar.

Many of the films on my list are slanted to the left (e.g., Inherit the Wind), but they’re on my list because of their merit as entertainment. Borrowing from the criteria posted at the bottom of “Movies”, a rating of 9 means that I found a film to be superior in several of the following dimensions: mood, plot, dialogue, music (if applicable), dancing (if applicable), quality of performances, production values, and historical or topical interest; worth seeing twice, but not a slam-dunk great film. A “10” is an exemplar of its type; it can be enjoyed many times.

My Very Favorite Films: Releases from 1920 through 2018
(listed roughly in descending order of my ratings)
Ratings
Title (year of release) IMDb Me
1. The Wizard of Oz (1939)  8 10
2. Alice in Wonderland (1951)  7.4 10
3. A Man for All Seasons (1966)  7.7 10
4. Amadeus (1984)  8.3 10
5. The Harmonists (1997)  7.1 10
6. Dr. Jack (1922)  7.1 9
7. The General (1926)  8.1 9
8. City Lights (1931)  8.5 9
9. March of the Wooden Soldiers (1934)  7.3 9
10. The Gay Divorcee (1934)  7.5 9
11. David Copperfield (1935)  7.4 9
12. Captains Courageous (1937)  8 9
13. The Adventures of Robin Hood (1938)  7.9 9
14. Alexander Nevsky (1938)  7.6 9
15. Bringing Up Baby (1938)  7.9 9
16. A Christmas Carol (1938)  7.5 9
17. Destry Rides Again (1939)  7.7 9
18. Gunga Din (1939)  7.4 9
19. The Hunchback of Notre Dame (1939)  7.8 9
20. Mr. Smith Goes to Washington (1939)  8.1 9
21. The Women (1939)  7.8 9
22. The Grapes of Wrath (1940)  8 9
23. The Philadelphia Story (1940)  7.9 9
24. Pride and Prejudice (1940)  7.4 9
25. Rebecca (1940)  8.1 9
26. Sullivan’s Travels (1941)  8 9
27. Woman of the Year (1942)  7.2 9
28. The African Queen (1951)  7.8 9
29. The Browning Version (1951)  8.2 9
30. The Bad Seed (1956)  7.5 9
31. The Bridge on the River Kwai (1957)  8.1 9
32. Inherit the Wind (1960)  8.1 9
33. Psycho (1960)  8.5 9
34. The Hustler (1961)  8 9
35. Billy Budd (1962)  7.8 9
36. Lawrence of Arabia (1962)  8.3 9
37. Zorba the Greek (1964)  7.7 9
38. Doctor Zhivago (1965)  8 9
39. The Graduate (1967)  8 9
40. The Lion in Winter (1968)  8 9
41. Butch Cassidy and the Sundance Kid (1969)  8 9
42. Five Easy Pieces (1970)  7.5 9
43. The Godfather (1972)  9.2 9
44. Papillon (1973)  8 9
45. Chinatown (1974)  8.2 9
46. The Godfather: Part II (1974)  9 9
47. One Flew Over the Cuckoo’s Nest (1975)  8.7 9
48. Star Wars: Episode IV – A New Hope (1977)  8.6 9
49. Breaker Morant (1980)  7.8 9
50. Star Wars: Episode V – The Empire Strikes Back (1980)  8.7 9
51. Das Boot (1981)  8.3 9
52. Chariots of Fire (1981)  7.2 9
53. Raiders of the Lost Ark (1981)  8.4 9
54. Blade Runner (1982)  8.1 9
55. Gandhi (1982)  8 9
56. The Last Emperor (1987)  7.7 9
57. Dangerous Liaisons (1988)  7.6 9
58. Henry V (1989)  7.5 9
59. Chaplin (1992)  7.6 9
60. Noises Off… (1992)  7.6 9
61. Three Colors: Blue (1993)  7.9 9
62. Pulp Fiction (1994)  8.9 9
63. Richard III (1995)  7.4 9
64. The English Patient (1996)  7.4 9
65. Fargo (1996)  8.1 9
66. Chicago (2002)  7.1 9
67. Master and Commander: The Far Side of the World (2003)  7.4 9
68. The Chronicles of Narnia: The Lion, the Witch and the Wardrobe (2005)  6.9 9
69. The Kite Runner (2007)  7.6 9

“Movies” Updated

I have updated my “Movies” page. I was prompted to do so by having recently (and unusually) viewed two feature-length films (on consecutive evenings, no less): Darkest Hour and Goodbye Christopher Robin.

The former, despite Gary Oldman’s deservedly acclaimed, Oscar-winning impersonation of Winston Churchill, earned a rating of 7 from me. It was an entertaining film, but a rather trite offering of Hollywoodized history. The latter film, on the other hand, earned a rating of 8 from me for the quality of its script, excellent performances, and non-saccharine treatment of Christopher Robin’s boyhood and his parents’ failings as parents.

In any event, go to “Movies”. Even if you’ve been there before, you will find new material in the updated version. You will find at the bottom of the page an explanation of my use of the 10-point rating scale.

Viewing Recommendations: TV Series and Mini-Series

My wife and I have watched many a series and mini-series. Some of them predate the era of VHS, DVD, and streaming, though much of the older fare is now available on DVD (and sometimes on streaming media). Our long list of favorites includes these (right-click a link to open it in a new tab):

Better Call Saul
Rumpole of the Bailey
Slings and Arrows
Pride and Prejudice
Cold Lazarus
Karaoke
Love in a Cold Climate
Oliver Twist
Bleak House
The Six Wives of Henry VIII
Danger UXB
Lonesome Dove
Sunset Song
Lillie
Vienna 1900
The Durrells in Corfu
The Wire
The Glittering Prizes
Bron/Broen
Wallander
Little Dorrit
Justified
Cracker
Pennies from Heaven
Mad Men
The Sopranos
Charters & Caldicott
Reckless
Our Mutual Friend
The First Churchills
The Unpleasantness at the Bellona Club
Murder Must Advertise
The Nine Tailors
Cakes and Ale
Madame Bovary
I, Claudius
Smiley’s People
Reilly: Ace of Spies
Prime Suspect
The Norman Conquests
Bramwell
Prime Suspect 2
Prime Suspect 3
Mystery!: Cadfael
Prime Suspect 5: Errors of Judgement
David Copperfield
Prime Suspect 6: The Last Witness
The Forsyte Saga
Elizabeth R
Jude the Obscure
Clouds of Witness
Country Matters
Notorious Woman
Five Red Herrings
Anna Karenina
Brideshead Revisited
To Serve Them All My Days

If you have more than a passing acquaintance with this genre, you will recognize that almost all of the fare is British. The Brits seem to have a near-lock on good acting and literate and clever writing.

Alas, of the series listed above, only Better Call Saul, Bron/Broen, and The Durrells in Corfu are still running. The Durrells comes to an end this fall for U.S. viewers (Brits have already seen the final season). The final season of Bron/Broen has also aired in Europe, but isn’t yet available in the U.S.

As for Better Call Saul, the fifth season of which will air in 2020, there are rumors of a sixth and final season to follow.

Enjoy!