Conservatism vs. Libertarianism

Returning to the subject of political ideologies, I take up a post that had languished in my drafts folder for these past 18 months. It begins by quoting an unintentionally prescient piece by Michael Warren Davis: “The Max Bootification of the American Right” (The American Conservative, April 13, 2018). It’s unintentionally prescient because Davis boots Boot out of conservatism at about the same time that Boot was declaring publicly that he was no longer a conservative.

By way of introduction, Davis takes issue with

an article from the Spring 2012 issue of the Intercollegiate Review called “The Pillars of Modern American Conservatism” by Alfred S. Regnery. Like the [Intercollegiate Studies Institute] itself, it was excellent on the main. But it suffers from the grave (albeit common) sin of believing there is such a thing as “modern” conservatism, which can be distinguished from historic conservatism….

The trouble with “modern” conservatism … is that historic conservatism didn’t fail. It has not been tried and found wanting, as Chesterton would say; it has been found difficult and not tried….

The genius of fusionists (what is generally meant by “modern” conservatives) like William F. Buckley and Frank S. Meyer was joining the intellectual sophistication of traditionalism with the political credibility of libertarianism. The greatest traditionalists and libertarians of that age—Russell Kirk and Friedrich Hayek, respectively—protested vehemently against this fusion, insisting that their two schools were different species and could not intermarry. It was inevitable that “modern” conservatism would prioritize the first principles of one movement over the other. That is to say, this new conservatism would be either fundamentally traditionalist or fundamentally libertarian. It could not be both.

Regnery’s article proves that the latter came to pass. “Modern” conservatism is in fact not conservatism at all: it is a kind of libertarianism, albeit with an anti-progressive instinct.

Consider the subheadings: “The first pillar of conservatism,” Regnery writes, “is liberty, or freedom… The second pillar of conservative philosophy is tradition and order.” This is an inversion of the hierarchy put forward in What Is Conservatism?, a collection of essays edited by Meyer and published by the ISI in 1964. According to Meyer’s table of contents, essays with an “emphasis on tradition and authority” (Kirk, Willmoore Kendall) rank higher than those with an “emphasis on freedom” (M. Stanton Evans, Wilhelm Röpke, Hayek).

The ordering is no coincidence. This question of priorities became one of the principal logjams between the Kirkians and Hayekians. As Kirk explained in “Libertarians: Chirping Sectaries,” published in the Fall 1981 issue of Modern Age:

In any society, order is the first need of all. Liberty and justice may be established only after order is tolerably secure. But the libertarians give primacy to an abstract liberty. Conservatives, knowing that “liberty inheres in some sensible object,” are aware that true freedom can be found only within the framework of a social order, such as the constitutional order of these United States. In exalting an absolute and indefinable “liberty” at the expense of order, the libertarians imperil the very freedoms they praise.

This seems rather straightforward in terms of domestic policy, but we should consider its implications for foreign policy, too. The triumph of the “emphasis on freedom” is responsible for the disastrous interventionist tendencies that have plagued all modern Republican administrations.

We again turn to Kirk in his essay for What is Conservatism? titled “Prescription, Authority, and Ordered Freedom.” Here he warned:

To impose the American constitution on all the world would not render all the world happy; to the contrary, our constitution would work in few lands and would make many men miserable in short order. States, like men, must find their own paths to order and justice and freedom; and usually those paths are ancient and winding ways, and their signposts are Authority, Tradition, Prescription.

That is why traditionalists oppose regime change in the Middle East. Freedom may follow tyranny only if (as in the Revolutions of 1989) the people themselves desire it and are capable of maintaining the machinery of a free society. If the public is not especially interested in self-government, they will succumb either to a new despot or a stronger neighboring country. We have seen both of these scenarios play out in post-Ba’athist Iraq, with the rise of ISIS and the expansion of Iranian hegemony.

It is also why traditionalist conservatives are tarred as pro-Putin by liberals and “modern” conservatives. If Putin is indeed a neo-Tsarist, we may hope to see Russia follow C.S. Lewis’s maxim: “A sum can be put right: but only by going back till you find the error and working it afresh from that point, never by simply going on.” Communism is the error, and while Putinism is by no means the solution, we may hope (though not blindly) that it represents a return to the pre-communist order. Those are, if not optimal conditions for true liberty to flourish, at least the best we can reasonably expect.

More important, however, is that we recognize the absurdity of “modern” conservatives’ hopes that Russia would have transitioned from the Soviet Union to a carbon copy of 1980s Britain. We do the Russian people a disservice by holding President Putin to the example of some mythical Tsarina Thatcherova. That is simply not the “ancient and winding way” Providence has laid out for them.

Such an unhealthy devotion to abstract liberty is embodied in Max Boot, the Washington Post’s new conservative [sic] columnist. Consider the opening lines of his essay “I Would Vote for a (Sane) Donald Trump,” published last year in Foreign Policy:

I am socially liberal: I am pro-LGBTQ rights, pro-abortion rights, pro-immigration. I am fiscally conservative: I think we need to reduce the deficit and get entitlement spending under control… I am pro-free trade: I think we should be concluding new trade treaties rather than pulling out of old ones. I am strong on defense: I think we need to beef up our military to cope with multiple enemies. And I am very much in favor of America acting as a world leader: I believe it is in our own self-interest to promote and defend freedom and free markets as we have been doing in one form or another since at least 1898.

Boot has no respect for Authority, Tradition, and Prescription—not in this country, and not in those manifold countries he would have us invade. His politics are purely propositional: freedom is the greatest (perhaps the sole) virtue, and can be achieved equally by all men in all ages. Neither God nor history nor the diverse and delicate fibers that comprise a nation’s social order have any bearing on his ideologically tainted worldview.

Boot, of course, was hired by the Post to rubber-stamp the progressive agenda with the seal of Principled Conservatism™. Even he can’t possibly labor under the delusion that Jeff Bezos hired him to threaten Washington’s liberal establishment. Yet his conclusions follow logically from the pillars of “modern” conservatism.

Two choices lie before us, then. One is to restore a conservatism of Authority, Tradition, and Prescription. The other is to stand by and watch the Bootification of the American Right. Pray that we choose correctly, before it’s too late to undo the damage that’s already been done.

Boot was to have been the Post’s answer to David Brooks, the nominal conservative at The New York Times, about whom I have often written. Boot, however, has declared himself a person of the left, whereas Brooks calls himself a “moderate”, which is another way of saying wishy-washy. Both of them give aid and comfort to the left. They are Tweedledum and Tweedle-dumber, as a wag once observed (inaccurately) of Richard Nixon and Hubert Humphrey (opponents in the 1968 presidential election).

Returning to the main point of this post, which is the difference between conservatism and libertarianism, I will offer a view that is consistent with Davis’s, but expressed somewhat differently. This is from “Political Ideologies”:

There is an essential difference between conservatism and libertarianism. Conservatives value voluntary social institutions not just because they embed accumulated wisdom. Conservatives value voluntary social institutions because they bind people in mutual trust and respect, which foster mutual forbearance and breed social comity in the face of provocations. Adherence to long-standing social norms helps to preserve the wisdom embedded in them while also signalling allegiance to the community that gave rise to the norms.

Libertarians, on the other hand, following the lead of their intellectual progenitor, John Stuart Mill, are anxious to throw off what they perceive as social “oppression”. The root of libertarianism is Mill’s “harm principle”, which I have exposed for the fraud that it is (e.g., here and here)….

There’s more. Libertarianism, as it is usually explained and presented, lacks an essential ingredient: morality. Yes, libertarians espouse a superficially plausible version of morality — the harm principle, quoted above by Scott Yenor. But the harm principle is empty rhetoric. Harm must be defined, and its definition must arise from social norms. The alternative, which libertarians — and “liberals” — obviously embrace, is that they are uniquely endowed with the knowledge of what is “right”, which therefore should be enforced by the state. Not the least of their sins against social comity is the legalization of abortion and same-sex “marriage” (detailed arguments at the links).

Liberty is not an abstraction. It is the scope of action that is allowed by long-standing, voluntarily evolved social norms. It is that restrained scope of action which enables people to coexist willingly, peacefully, and cooperatively for their mutual benefit. That is liberty, and it is served by conservatism, not by amoral, socially destructive libertarianism.

I rest my case.

Homelessness

It has long been my contention that homelessness is encouraged by programs to aid the homeless. It’s a fact of life: If you offer people a chance to get something for doing nothing, some of them will take your offer. (The subsidization of unemployment with welfare payments, food stamps, etc., is among the reasons that the real unemployment rate is markedly higher than the official rate.)

Recently, after I had mentioned my hypothesis to a correspondent, Francis Menton posted “The More Public Money Spent to Solve ‘Homelessness,’ the More Homelessness There Is” at his blog, Manhattan Contrarian. Menton observes that the budget for homeless services in San Francisco

has gone from about $155 million annually in the 2011-12 fiscal year, to $271 million annually in San Francisco’s most recent 2018-19 spending plan.

[T]he $271 million per year would place San Francisco right near the top of the heap in per capita spending by a municipality to solve the homelessness problem. With a population of about 900,000, $271 million would come to about $300 per capita per year. By comparison, champion spender New York City, with a population close to ten times that of San Francisco, is up to spending some $3.2 billion annually on the homeless, which would be about $375 per capita….

So surely, with all this spending, homelessness in San Francisco must have at least begun its inevitable rapid decline? No, I’m sorry. Once again, it is the opposite. According to a piece in the City Journal by Erica Sandberg on October 10, the official count of homeless in San Francisco is now 9,780. That represents an increase of at least 30% just since 2017.
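Menton’s per-capita arithmetic is easy to check. Here is a minimal sketch; the population figures are approximations (New York’s is my assumption, consistent with Menton’s “close to ten times” San Francisco’s):

```python
# Rough check of the per-capita spending comparison quoted above.
# Budgets are as quoted; populations are approximate (NYC's is an assumption).
sf_budget, sf_population = 271_000_000, 900_000        # San Francisco, 2018-19
nyc_budget, nyc_population = 3_200_000_000, 8_500_000  # New York City

print(round(sf_budget / sf_population))    # roughly $300 per resident per year
print(round(nyc_budget / nyc_population))  # roughly $375 per resident per year
```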

There’s more. It comes from The Economist, a magazine that was founded in the era of classical liberalism but which has gone over to the dark side: modern “liberalism”. In case you don’t know the difference, see “Political Ideologies”.

In “Homelessness Is Declining in America” (available with a limited-use free subscription), the real story is buried. The fake story is the nationwide decline of homelessness since 2009, which is unsurprising given that 2009 marked the nadir of the Great Recession.

The real story is that despite the nationwide decline of homelessness, its incidence has risen in major cities, where reigning Democrats are bent on solving the problem by throwing money at it, as a graph well down the page in The Economist’s article shows.

Further, The Economist acknowledges the phenomenon discussed by Menton:

Despite significant public efforts—such as a surcharge on sales tax directed entirely towards homeless services and a $1.2bn bond issue to pay for affordable housing—the problem of homelessness is worsening in Los Angeles. It has emerged as the greatest liability for Eric Garcetti, the mayor, and may have hindered his ambitions to run for president. After spending hundreds of millions, the city was surprised to learn in July that the number of homeless people had increased by 12% from the previous year (city officials point out that this was less than in many other parts of California). Though it can be found everywhere, homelessness, unlike other social pathologies, is not a growing national problem. Rather it is an acute and worsening condition in America’s biggest, most successful cities.

Every year in January, America’s Department of Housing and Urban Development mobilises thousands of volunteers to walk the streets and count the unsheltered homeless. Along with data provided by homeless shelters, these create an annual census of types of homeless residents. Advocates think that the methodology produces a significant undercount, but they are the best statistics available (and much higher quality than those of other developed countries). Since 2009 they show a 12% decline nationally, but increases of 18% in San Francisco, 35% in Seattle, 50% in Los Angeles and 59% in New York. [These figures seem to be drawn from HUD reports that can be found here and here.]

The Economist tries to minimize the scope of the problem by addressing “myths”:

The first is that the typical homeless person has lived on the street for years, while dealing with addiction, mental illness, or both. In fact, only 35% of the homeless have no shelter, and only one-third of those are classified as chronically homeless. The overwhelming majority of America’s homeless are in some sort of temporary shelter paid for by charities or government. This skews public perceptions of the problem. Most imagine the epicentre of the American homeless epidemic to be San Francisco—where there are 6,900 homeless people, of whom 4,400 live outdoors—instead of New York, where there are 79,000 homeless, of whom just 3,700 are unsheltered.

The “mythical” perception about the “typical homeless person” is a straw man, which seems designed to distract attention from the fact that homelessness is on the rise in big cities. Further, there is the attempt to distinguish between sheltered and unsheltered homeless persons. But sheltering is part of the problem, in that the availability of shelters makes it easier to be homeless. (More about that, below.)

The second myth is that rising homelessness in cities is the result of migration, either in search of better weather or benefits. Homelessness is a home-grown problem. About 70% of the homeless in San Francisco previously lived in the city; 75% of those living on the streets of Los Angeles, in places like Skid Row, come from the surrounding area. Though comparable data do not exist for Hawaii—which has one of the highest homelessness rates in the country—a majority of the homeless are ethnic Hawaiians and Pacific Islanders, suggesting that the problem is largely local.

The fact that homelessness is mainly a home-grown problem is consistent with the hypothesis that spending by big-city governments helps to promote it. The Economist doesn’t try to rebut that idea, but mentions in a sneering way a report by the Council of Economic Advisers “suggesting that spending on shelters would incentivise homelessness.” Well, I found the report (“The State of Homelessness in America”), and it cites evidence from actual research (as opposed to The Economist’s hand-waving) to support what should be obvious to anyone who thinks about it: Sheltering incentivizes homelessness.

The Economist isn’t through, however:

All this obscures the chief culprit, however, which is the cost of housing. Even among the poor—of which there are officially 38m in America—homelessness is relatively rare, affecting roughly one in 70 people. What pushes some poor people into homelessness, and not others, remains obscure. So too are the reasons for the sharp racial disparities in homelessness; roughly 40% of the homeless are black, compared with 13% of the population. But remarkably tight correlations exist with rent increases.

An analysis by Chris Glynn and Emily Fox, two statisticians, predicts that a 10% increase in rents in a high-cost city like New York would result in an 8% increase in the number of homeless residents. Wherever homelessness appears out of control in America—whether in Honolulu, Seattle or Washington, DC—high housing costs almost surely lurk. Fixing this means dealing with a lack of supply, created by over-burdensome zoning regulations and an unwillingness among Democratic leaders to overcome entrenched local interests.
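As an aside, the Glynn-Fox relationship quoted above amounts to an elasticity of roughly 0.8 (an 8 percent rise in homelessness for a 10 percent rise in rents). Here is a minimal sketch of what that implies, applied, purely for illustration, to the New York count cited earlier:

```python
# Illustrative only: applying the quoted 10%-rent / 8%-homelessness relationship
# (an elasticity of roughly 0.8) to New York's count as given above.
elasticity = 0.8
baseline_homeless = 79_000   # New York's homeless count, from the excerpt above
rent_increase = 0.10         # a hypothetical 10% rent increase

projected = baseline_homeless * (1 + elasticity * rent_increase)
print(round(projected))      # about 85,300 homeless residents
```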

Ah, yes, “affordable housing” is always the answer if you’re a leftist. But it isn’t. Housing costs are high and bound to get higher because population continues to grow and businesses continue to grow and hire. Most of the population and business growth occurs in big cities. And if not in city cores, then in the satellite cities and developed areas that revolve around the cores. What this means is that there is a limited amount of land on which housing, offices, and factories can be built, so that the value of the land rises as the demand for it rises. Even if the supply of construction materials and labor were to rise with demand, the price of housing would continue to rise.

The only real “solution” is for governments to dictate across-the-board restrictions on lot size, building-unit size, and the elaborateness of materials used. That isn’t an issue for “entrenched local interests”; it’s an issue for anyone who believes that government shouldn’t tell him that he must live in a middle-income home when he can afford (and enjoy) something more luxurious, or that he must squeeze his highly paid employees into barren lofts.

Thus “affordable housing” in practice means subsidization. If opposition to subsidization is an “entrenched local interest”, it’s of a piece with opposition to across-the-board restrictions. Subsidization requires people who earn money to give it to people who don’t earn money (or don’t earn very much), thus blunting everyone’s incentive to earn more. Nobody promised anybody a rose garden — at least not until the welfare state came along in the 1930s. And, despite that, my father and grandfathers held menial jobs during the Great Depression and paid for their own housing, such as it was. If people are different now, it’s because of the welfare state.

Finally, homelessness is also encouraged by “enlightened” policies that allow (or don’t discourage) loitering, camping, and panhandling. I happen to live in Austin, where the homeless have been encouraged to do all of those things, to the detriment of public health and safety. I hope that Governor Abbott follows through on his commitment to rid public spaces of homeless encampments.

More Unsettled Science

Now hear this:

We’re getting something wrong about the universe.

It might be something small: a measurement issue that makes certain stars look closer or farther away than they are, something astrophysicists could fix with a few tweaks to how they measure distances across space. It might be something big: an error — or series of errors — in cosmology, or our understanding of the universe’s origin and evolution. If that’s the case, our entire history of space and time may be messed up. But whatever the issue is, it’s making key observations of the universe disagree with each other: Measured one way, the universe appears to be expanding at a certain rate; measured another way, the universe appears to be expanding at a different rate. And, as a new paper shows, those discrepancies have gotten larger in recent years, even as the measurements have gotten more precise….

The two most famous measurements work very differently from one another. The first relies on the Cosmic Microwave Background (CMB): the microwave radiation leftover from the first moments after the Big Bang. Cosmologists have built theoretical models of the entire history of the universe on a CMB foundation — models they’re very confident in, and that would require an all-new physics to break. And taken together, Mack said, they produce a reasonably precise number for the Hubble constant, or H0, which governs how fast the universe is currently expanding.

The second measurement uses supernovas and flashing stars in nearby galaxies, known as Cepheids. By gauging how far those galaxies are from our own, and how fast they’re moving away from us, astronomers have gotten what they believe is a very precise measurement of the Hubble constant. And that method offers a different H0.

It’s possible that the CMB model is just wrong in some way, and that’s leading to some sort of systematic error in how physicists are understanding the universe….

It’s [also] possible … that the supernovas-Cepheid calculation is just wrong. Maybe physicists are measuring distances in our local universe wrong, and that’s leading to a miscalculation. It’s hard to imagine what that miscalculation would be, though…. Lots of astrophysicists have measured local distances from scratch and have come up with similar results. One possibility … is just that we live in a weird chunk of the universe where there are fewer galaxies and less gravity, so our neighborhood is expanding faster than the universe as a whole….

Coming measurements might clarify the contradiction — either explaining it away or heightening it, suggesting a new field of physics is necessary. The Large Synoptic Survey Telescope, scheduled to come online in 2020, should find hundreds of millions of supernovas, which should vastly improve the datasets astrophysicists are using to measure distances between galaxies. Eventually, … gravitational wave studies will get good enough to constrain the expansion of the universe as well, which should add another level of precision to cosmology. Down the road, … physicists might even develop instruments sensitive enough to watch objects expand away from one another in real time.

But for the moment cosmologists are still waiting and wondering why their measurements of the universe don’t make sense together.

Here’s a very rough analogy to the problem described above:

  • A car traveling at a steady speed on a highway passes two markers that are separated by a measured distance (expressed in miles). Dividing the distance between the markers by the time of travel between the markers (expressed in hours) gives the speed of the car in miles per hour.
  • The speed of the same car is estimated by a carefully calibrated radar gun, one that has been tested on many cars under conditions like those in which it is used on the car in question.
  • The two methods yield different results. They are so different that there is no overlap between the normal ranges of uncertainty for the two methods.

The problem is really much more complicated than that. In the everyday world of cars traveling on highways, relativistic effects are unimportant and can be ignored. In the universe, where objects are moving away from each other at vastly greater speeds — speeds that seem to increase constantly — relativistic effects are crucial. By relativistic effects I mean the interdependence of distance, time, and speed — none of which is an absolute, and all of which depend on each other (maybe).
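To put rough numbers on the underlying disagreement, here is a minimal sketch. The values are approximate figures of the kind published in recent years (roughly 67 km/s/Mpc from CMB-based models, roughly 74 km/s/Mpc from the supernova-Cepheid distance ladder); they are used here only for illustration and are not taken from the article quoted above.

```python
# Illustrative only: approximate recent estimates of the Hubble constant (H0).
cmb_h0, cmb_err = 67.4, 0.5        # km/s/Mpc, from CMB-based models
ladder_h0, ladder_err = 74.0, 1.4  # km/s/Mpc, from the supernova-Cepheid ladder

# Hubble's law (recession speed = H0 * distance): a galaxy 100 megaparsecs away
# recedes at noticeably different speeds under the two values.
distance_mpc = 100
print(round(cmb_h0 * distance_mpc), round(ladder_h0 * distance_mpc))  # 6740 vs. 7400 km/s

# The crux of the problem: the two uncertainty ranges no longer overlap.
overlap = (cmb_h0 + cmb_err) >= (ladder_h0 - ladder_err)
print("ranges overlap" if overlap else "ranges do not overlap")
```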

If the relativistic effects involved in measuring cosmological phenomena are well understood, they shouldn’t account for the disparate estimates of the Hubble constant (H0). This raises a possibility that isn’t mentioned in the article quoted above, namely, that the relativistic effects aren’t well understood or have been misestimated.

There are other possibilities; for example:

  • The basic cosmological assumption of a Big Bang and spatially uniform expansion is wrong.
  • The speed of light (and/or other supposed constants) isn’t invariant.
  • There is an “unknown unknown” that may never be identified, let alone quantified.

Whatever the case, this is a useful reminder that science is never settled.

Political Ideologies

I have just published a new page, “Political Ideologies”. Here’s the introduction:

Political ideologies proceed in a circle. Beginning arbitrarily with conservatism and moving clockwise, there are roughly the following broad types of ideology: conservatism, anti-statism (libertarianism), and statism. Statism is roughly divided into left-statism (“liberalism” or “progressivism”, left-populism) and right-statism (faux conservatism, right-populism). Left-statism and right-statism are distinguishable by their stated goals and constituencies.

By statism, I mean the idea that government should do more than merely defend the people from force and fraud. Conservatism and libertarianism are both anti-statist, but there is a subtle and crucial difference between them, which I will explain.

Not everyone has a coherent ideology of a kind that I discuss below. Far from it. There is much vacillation between left-statism and right-statism. And there is what I call the squishy center of the electorate, which is easily swayed by promises and strongly influenced by bandwagon effects. In general, there is what one writer calls clientelism:

the distribution of resources by political power through an agreement in which politicians – the patrons – make this allocation dependent on the political support of the beneficiaries – their clients. Clientelism emerges at the intersection of political power with social and economic activity.

Politicians themselves are prone to stating ideological positions to which they don’t adhere, out of moral cowardice and a strong preference for power over principle. Republicans have been especially noteworthy in this respect. Democrats simply try to do what they promise to do — increase the power of government (albeit at vast but unmentioned economic and social cost).

In what follows, I will ignore the squishy center and the politics of expediency. I will focus on the various ideologies, the contrasts between them, and the populist allure of left-statism and right-statism. Because the two statisms are so much alike under the skin, I will start with conservatism and work around the circle to them. Conservatism gets more attention than the other ideologies because it is intellectually richer.

Go here for the rest.

Leninthink and Left-think

The following passages from Gary Saul Morson’s “Leninthink” (The New Criterion, October 2019) speak volumes about today’s brand of leftism:

In [Lenin’s] view, Marx’s greatest contribution was not the idea of the class struggle but “the dictatorship of the proletariat,” and as far back as 1906 Lenin had defined dictatorship as “nothing other than power which is totally unlimited by any laws, totally unrestrained by absolutely any rules, and based directly on force.”

*   *   *

For us, the word “politics” means a process of give and take, but for Lenin it’s we take, and you give. From this it follows that one must take maximum advantage of one’s position. If the enemy is weak enough to be destroyed, and one stops simply at one’s initial demands, one is objectively helping the enemy, which makes one a traitor.

*   *   *

If there is one sort of person Lenin truly hated more than any other, it is—to use some of his more printable adjectives—the squishy, squeamish, spineless, dull-witted liberal reformer.

*   *   *

If by law one means a code that binds the state as well as the individual, specifies what is and is not permitted, and eliminates arbitrariness, then Lenin entirely rejected law as “bourgeois.”…  Recall that he defined the dictatorship of the proletariat as rule based entirely on force absolutely unrestrained by any law.

*   *   *

Lenin’s language, no less than his ethics, served as a model, taught in Soviet schools and recommended in books with titles like Lenin’s Language and On Lenin’s Polemical Art. In Lenin’s view, a true revolutionary did not establish the correctness of his beliefs by appealing to evidence or logic, as if there were some standards of truthfulness above social classes. Rather, one engaged in “blackening an opponent’s mug so well it takes him ages to get it clean again.” Nikolay Valentinov, a Bolshevik who knew Lenin well before becoming disillusioned, reports him saying: “There is only one answer to revisionism: smash its face in!”

*   *   *

No concessions, compromises, exceptions, or acts of leniency; everything must be totally uniform, absolutely the same, unqualifiedly unqualified.

*   *   *

Critics objected that Lenin argued by mere assertion. He disproved a position simply by showing it contradicted what he believed. In his attack on the epistemology of Ernst Mach and Richard Avenarius, for instance, every argument contrary to dialectical materialism is rejected for that reason alone. Valentinov, who saw Lenin frequently when he was crafting this treatise, reports that Lenin at most glanced through their works for a few hours. It was easy enough to attribute to them views they did not hold, associate them with disreputable people they had never heard of, or ascribe political purposes they had never imagined. These were Lenin’s usual techniques, and he made no bones about it.

Opponents objected that Lenin lied without compunction, and it is easy to find quotations in which he says—as he did to the Bolshevik leader Karl Radek—“Who told you a historian has to establish the truth?” Yes, we are contradicting what we said before, he told Radek, and when it is useful to reverse positions again, we will.

*   *   *

Lenin did not just invent a new kind of party, he also laid the basis for what would come to be known in official parlance as “partiinost’,” literally Partyness, in the sense of Party-mindedness….

… The true Party member cares for nothing but the Party. It is his family, his community, his church. And according to Marxism-Leninism, everything it did was guaranteed to be correct.

*   *   *

[The prominent Bolshevik Yuri] Pyatakov grasped Lenin’s idea that coercion is not a last resort but the first principle of Party action. Changing human nature, producing boundless prosperity, overcoming death itself: all these miracles could be achieved because the Party was the first organization ever to pursue coercion without limits.

*   *   *

Many former Communists describe their belated recognition that experienced Party members do not seem to believe what they profess…. It gradually dawned on [Richard Wright] that the Party takes stances not because it cares about them—although it may—but because it is useful for the Party to do so.

Doing so may help recruit new members, as its stance on race had gotten Wright to join. But after a while a shrewd member learned, without having been explicitly told, that loyalty belonged not to an issue, not even to justice broadly conceived, but to the Party itself. Issues would be raised or dismissed as needed.

*   *   *

I remarked to one colleague, who called herself a Marxist-Leninist, that it only made things worse when she told obvious falsehoods in departmental meetings. Surely, such unprincipled behavior must bring discredit to your own position, I pleaded.

Her reply brought me back to my childhood [as a son of a party member]. I quote it word-for-word: “You stick to your principles, and I’ll stick to mine.” From a Leninist perspective, a liberal, a Christian, or any type of idealist only ties his hands by refraining from doing whatever works. She meant: we Leninists will win because we know better than to do that.

In the end, leftism is about power — unchallenged power to do whatever it is that must be done.

The Modern Presidency: From TR to DJT

This is a revision and expansion of a post that I published at my old blog late in 2007. The didactic style of this post reflects its original purpose, which was to give my grandchildren some insights into American history that aren’t found in standard textbooks. Readers who consider themselves already well-versed in the history of American politics should nevertheless scan this post for its occasionally provocative observations.

Theodore Roosevelt Jr. (1858-1919) was elected Vice President as a Republican in 1900, when William McKinley was elected to a second term as President. Roosevelt became President when McKinley was assassinated in September 1901. Roosevelt was re-elected President in 1904, with 56 percent of the “national” popular vote. (I mention popular-vote percentages here and throughout this post because they are a gauge of the general popularity of presidential candidates, though an inaccurate gauge if a strong third-party candidate emerges to distort the usual two-party dominance of the popular vote. There is, in fact, no such thing as a national popular vote. Rather, it is the vote in each State which determines the distribution of that State’s electoral votes among the various candidates. The electoral votes of all States are officially tallied about a month after the general election, and the president-elect is the candidate who receives a majority of the electoral votes. I have more to say about electoral votes in several of the entries that follow this one.)

Theodore Roosevelt (also known as TR) served almost two full terms as President, from September 14, 1901, to March 4, 1909. (Before 1937, a President’s term of office began on March 4 of the year following his election to office.)

Roosevelt was an “activist” President. Roosevelt used what he called the “bully pulpit” of the presidency to gain popular support for programs that exceeded the limits set in the Constitution. Roosevelt was especially willing to use the power of government to regulate business and to break up companies that had become successful by offering products that consumers wanted. Roosevelt was typical of politicians who inherited a lot of money and didn’t understand how successful businesses provided jobs and useful products for less-wealthy Americans.

Roosevelt was more like the Democrat Presidents of the Twentieth Century. He did not like the “weak” government envisioned by the authors of the Constitution. The authors of the Constitution designed a government that would allow people to decide how to live their own lives (as long as they didn’t hurt other people) and to run their own businesses as they wished to (as long as they didn’t cheat other people). The authors of the Constitution thought government should exist only to protect people from criminals and foreign enemies.

William Howard Taft (1857-1930), a close friend of Theodore Roosevelt, served as President from March 4, 1909, to March 4, 1913. Taft ran for the presidency as a Republican in 1908 with Roosevelt’s support. But Taft didn’t carry out Roosevelt’s anti-business agenda aggressively enough to suit Roosevelt. So, in 1912, when Taft ran for re-election as a Republican, Roosevelt ran for election as a Progressive (a newly formed political party). Many Republican voters decided to vote for Roosevelt instead of Taft. The result was that a Democrat, Woodrow Wilson, won the most electoral votes. Although Taft was defeated for re-election, he later became Chief Justice of the United States, making him the only person ever to have served as head of the executive and judicial branches of the U.S. Government.

Thomas Woodrow Wilson (1856-1924) served as President from March 4, 1913, to March 4, 1921. (Wilson didn’t use his first name, and was known officially as Woodrow Wilson.) Wilson is the only President to have earned the degree of doctor of philosophy. Wilson’s field of study was political science, and he had many ideas about how to make government “better”. But “better” government, to Wilson, was “strong” government of the kind favored by Theodore Roosevelt. In fact, it was government by executive decree rather than according to the Constitution’s rules for law-making, in which Congress plays the central role.

Wilson was re-elected in 1916 because he promised to keep the United States out of World War I, which had begun in 1914. But Wilson changed his mind in 1917 and asked Congress to declare war on Germany. After the war, Wilson tried to get the United States to join the League of Nations, an international organization that was supposed to prevent future wars by having nations assemble to discuss their differences. The U.S. Senate, which must approve America’s membership in international organizations, refused to join the League of Nations. The League did not succeed in preventing future wars because wars are started by leaders who don’t want to discuss their differences with other nations.

Warren Gamaliel Harding (1865-1923), a Republican, was elected in 1920 and inaugurated on March 4, 1921. Harding asked voters to reject the kind of government favored by Democrats, and voters gave Harding what is known as a “landslide” victory; he received 60 percent of the votes cast in the 1920 election for president, one of the highest percentages ever recorded. Harding’s administration was about to become involved in a major scandal when Harding died suddenly on August 2, 1923, while he was on a trip to the West Coast. The exact cause of Harding’s death is unknown, but he may have had a stroke when he learned of the impending scandal, which involved Albert Fall, Secretary of the Interior. Fall had secretly allowed some of his business associates to lease government land for oil-drilling, in return for personal loans.

There were a few other scandals, but Harding probably had nothing to do with any of them. Because of the scandals, most historians say that they consider Harding to have been a poor President. But that isn’t the real reason for their dislike of Harding. Most historians, like most college professors, favor “strong” government. Historians don’t like Harding because he didn’t use the power of government to interfere in the nation’s economy. An important result of Harding’s policy (called laissez-faire, or “hands off”) was high employment and increasing prosperity during the 1920s.

John Calvin Coolidge (1872-1933), who was Harding’s Vice President, became President upon Harding’s death in 1923. (Coolidge didn’t use his first name, and was known as Calvin.) Coolidge was elected President in his own right in 1924. He served as President from August 3, 1923, to March 4, 1929. Coolidge continued Harding’s policy of not interfering in the economy, and people continued to become more prosperous as businesses grew, hired more people, and paid them higher wages. Coolidge was known as “Silent Cal” because he was a man of few words. He said only what was necessary for him to say, and he meant what he said. That was in keeping with his approach to the presidency: he was not the “activist” that reporters and historians like to see in the presidency; he simply did the job required of him by the Constitution, which was to execute the laws of the United States. Coolidge chose not to run for re-election in 1928, even though he was quite popular.

Herbert Clark Hoover (1874-1964), a Republican who had been Secretary of Commerce under Coolidge, was elected to the presidency in 1928. He served as President from March 4, 1929, to March 4, 1933.

Hoover won 58 percent of the popular vote, an endorsement of the hands-off policy of Harding and Coolidge. Hoover’s administration is known mostly for the huge drop in the price of stocks (shares of corporations, which are bought and sold in places known as stock exchanges), and for the Great Depression that was caused partly by the “Crash” — as it became known. The rate of unemployment (the percentage of American workers without jobs) rose from 3 percent just before the Crash to 25 percent by 1933, at the depth of the Great Depression.

The Crash had two main causes. First, the prices of shares in businesses (called stocks) rose sharply in the late 1920s. Second, many persons borrowed money in order to buy stocks, in the hope that the price of stocks would continue to rise. If the price of stocks continued to rise, buyers could sell their stocks at a profit and repay the money they had borrowed. But when stock prices got very high in the fall of 1929, some buyers began to worry that prices would fall, so they began to sell their stocks. That drove down the price of stocks, and caused more buyers to sell in the hope of getting out of the stock market before prices fell further. Prices went down so quickly that almost everyone who owned stocks lost money, and they kept going down. By 1933, many stocks had become worthless and most stocks were selling for only a small fraction of the prices that they had sold for before the Crash.

Because so many people had borrowed money to buy stocks, they went broke when stock prices dropped, and they were then unable to pay their other debts. That had a ripple effect throughout the economy. As people went broke they spent less money, and banks had less money to lend. Because people were buying less from businesses, and because businesses couldn’t get loans to stay in business, many businesses closed and people lost their jobs. The people who lost their jobs then had less money to spend, and so more people lost their jobs.

The effects of the Great Depression were felt in other countries because Americans couldn’t afford to buy as much as they used to from other countries. Also, Congress passed a law known as the Smoot-Hawley Tariff Act, which President Hoover signed. The Smoot-Hawley Act raised tariffs (taxes) on items imported into the United States, which meant that Americans bought even less from foreign countries. Foreign countries passed similar laws, which meant that foreigners began to buy less from Americans, which put more Americans out of work.

The economy would have recovered quickly, as it had done in the past when stock prices fell and unemployment increased. But the actions of government — raising tariffs and making loans harder to get — only made things worse. What could have been a brief recession turned into the Great Depression. People were frightened. They blamed President Hoover for their problems, although President Hoover didn’t cause the Crash. Hoover ran for re-election in 1932, but he lost to Franklin Delano Roosevelt, a Democrat.

Franklin Delano Roosevelt (1882-1945), known as FDR, served as President from March 4, 1933 until his death on April 12, 1945, just a month before V-E Day. FDR was elected to the presidency in 1932, 1936, 1940, and 1944 — the only person elected more than twice. Roosevelt was a very popular President because he served during the Depression and World War II, when most Americans — having lost faith in themselves — sought reassurance that “someone was in charge”. FDR was not universally popular; his share of the popular vote rose from 57 percent in 1932 to 61 percent in 1936, but then dropped to 55 percent in 1940 and 54 percent in 1944. Americans were coming to understand what FDR’s opponents knew at the time, and what objective historians have said since:

FDR’s program to end the Great Depression was known as the New Deal. It consisted of welfare programs, which put people to work on government projects instead of making useful things. It also consisted of higher taxes and other restrictions on business, which discouraged people from starting and investing in businesses, the very activity that cures unemployment.

Roosevelt did try to face up to the growing threat from Germany and Japan. However, he wasn’t able to do much to prepare America’s defenses because of strong isolationist and anti-war feelings in the country. Those feelings were the result of America’s involvement in World War I. (Similar feelings in Great Britain kept that country from preparing for war with Germany, which encouraged Hitler’s belief that he could easily conquer Europe.)

When America went to war after Japan’s attack on Pearl Harbor, Roosevelt proved to be an able and inspiring commander-in-chief. But toward the end of the war his health was failing and he was influenced by close aides who were pro-communist and sympathetic to the Soviet Union (Union of Soviet Socialist Republics, or USSR). Roosevelt allowed Soviet forces to claim Eastern Europe, including half of Germany. Roosevelt also encouraged the formation of the United Nations, where the Soviet Union (now the Russian Federation) has had a strong voice because it was made a permanent member of the Security Council, the policy-making body of the UN. As a member of the Security Council, Russia can obstruct actions proposed by the United States. (In any event, the UN has long since become a hotbed of anti-American, left-wing sentiment.)

Roosevelt’s appeasement of the USSR caused Josef Stalin (the Soviet dictator) to believe that the U.S. had weak leaders who would not challenge the USSR’s efforts to spread Communism. The result was the Cold War, which lasted for 45 years. During the Cold War the USSR developed nuclear weapons, built large military forces, kept a tight rein on countries behind the Iron Curtain (in Eastern Europe), and expanded its influence to other parts of the world.

Stalin’s belief in the weakness of U.S. leaders was largely correct, until Ronald Reagan became President. As I will discuss, Reagan’s policies led to the end of the Cold War.

Harry S Truman (1884-1972), who was Vice President in FDR’s fourth term, became President upon FDR’s death. Truman was elected in his own right in 1948, so he served as President from April 12, 1945 until January 20, 1953 — almost two full terms.

Truman made one right decision during his presidency. He approved the dropping of atomic bombs on Japan. Although hundreds of thousands of Japanese were killed by the bombs, the Japanese soon surrendered. If the Japanese hadn’t surrendered then, U.S. forces would have invaded Japan and millions of American and Japanese lives would have been lost in the battles that followed the invasion.

Truman ordered drastic reductions in the defense budget because he thought that Stalin was an ally of the United States. (Truman, like FDR, had advisers who were Communists.) Truman changed his mind about defense budgets, and about Stalin, when Communist North Korea attacked South Korea in 1950. The attack on South Korea came after Truman’s Secretary of State (the man responsible for relations with other countries) made a speech about countries that the United States would defend. South Korea was not one of those countries.

When South Korea was invaded, Truman asked General of the Army Douglas MacArthur to lead the defense of South Korea. MacArthur planned and executed the amphibious landing at Inchon, which turned the war in favor of South Korea and its allies. The allied forces then succeeded in pushing the front line far into North Korea. Communist China then entered the war on the side of North Korea. MacArthur wanted to counterattack Communist Chinese bases and supply lines in Manchuria, but Truman wouldn’t allow that. Truman then “fired” MacArthur because MacArthur spoke publicly about his disagreement with Truman’s decision. The Chinese Communists pushed allied forces back and the Korean War ended in a deadlock, just about where it had begun, near the 38th parallel.

In the meantime, Communist spies had stolen the secret plans for making atomic bombs. They were able to do that because Truman refused to hear the truth about Communist spies who were working inside the government. By the time Truman left office the Soviet Union had manufactured nuclear weapons, had strengthened its grip on Eastern Europe, and was beginning to expand its influence into the Third World (the nations of Africa and the Middle East).

Truman was very unpopular by 1952. As a result he chose not to run for re-election, even though he could have done so. (The Twenty-Second Amendment to the Constitution, which limits a President to two elected terms, was adopted while Truman was President, but it didn’t apply to him.)

Dwight David Eisenhower (1890-1969), a Republican, served as President from January 20, 1953 to January 20, 1961. Eisenhower (also known by his nickname, “Ike”) received 55 percent of the popular vote in 1952 and 57 percent in 1956; his Democrat opponent in both elections was Adlai Stevenson. The Republican Party chose Eisenhower as a candidate mainly because he had become famous as a general during World War II. Republican leaders thought that by nominating Eisenhower they could end the Democrats’ twenty-year hold on the presidency. The Republican leaders were right about that, but in choosing Eisenhower as a candidate they rejected the Republican Party’s traditional stand in favor of small government.

Eisenhower was a “moderate” Republican. He was not a “big spender” but he did not try to undo all of the new government programs that had been started by FDR and Truman. Traditional Republicans eventually fought back and, in 1964, nominated a small-government candidate named Barry Goldwater. I will discuss him when I get to President Lyndon B. Johnson.

Eisenhower was a popular President, and he was a good manager, but he gave the impression of being “laid back” and not “in charge” of things. The news media had led Americans to believe that “activist” Presidents are better than laissez-faire Presidents, and so there was by 1960 a lot of talk about “getting the country moving again” — as if it were the job of the President to “run” the country instead of executing laws duly enacted in accordance with the Constitution.

John Fitzgerald Kennedy (1917-1963), a Democrat, was elected in 1960 to succeed President Eisenhower. Kennedy, who became known as JFK, served from January 20, 1961, until November 22, 1963, when he was assassinated in Dallas, Texas.

One reason that Kennedy won the election of 1960 (with 50 percent of the popular vote) was his image of “vigorous youth” (he was 27 years younger than Eisenhower). In fact, JFK had been in bad health for most of his life. He seemed to be healthy only because he used a lot of medications. Those medications probably impaired his judgment and would have caused him to die at a relatively early age if he hadn’t been assassinated.

Late in Eisenhower’s administration a Communist named Fidel Castro had taken over Cuba, which is only 90 miles south of Florida. The Central Intelligence Agency then began to work with anti-Communist exiles from Cuba. The exiles were going to attempt an invasion of Cuba at a place called the Bay of Pigs. In addition to providing the necessary military equipment, the U.S. was also going to provide air support during the invasion.

JFK succeeded Eisenhower before the invasion took place, in April 1961. JFK approved changes in the invasion plan that resulted in the failure of the invasion. The most important change was to discontinue air support for the invading forces. The exiles were defeated, and Castro remained firmly in control of Cuba for decades afterward.

The failed invasion caused Castro to turn to the USSR for military and economic assistance. In exchange for that assistance, Castro agreed to allow the USSR to install medium-range ballistic missiles in Cuba. That led to the so-called Cuban Missile Crisis in 1962. Many historians give Kennedy credit for resolving the crisis and avoiding a nuclear war with the USSR. The Russians withdrew their missiles from Cuba, but JFK had to agree to withdraw American missiles from bases in Turkey.

The myth that Kennedy had stood up to the Russians made him more popular in the U.S. His major accomplishment, which Democrats today like to ignore, was to initiate tax cuts, which became law after his assassination. The Kennedy tax cuts helped to make America more prosperous during the 1960s by giving people more money to spend, and by encouraging businesses to expand and create jobs.

The assassination of JFK on November 22, 1963, in Dallas was a shocking event. It also led many Americans to believe that JFK would have become a great President if he had lived and been re-elected to a second term. There is little evidence that JFK would have become a great President. His record in Cuba suggests that he would not have done a good job of defending the country.

Lyndon Baines Johnson (1908-1973), also known as LBJ, was Kennedy’s Vice President and became President upon Kennedy’s assassination. LBJ was elected in his own right in 1964; he served as President from November 22, 1963 to January 20, 1969. LBJ’s Republican opponent in 1964 was Barry Goldwater, who was an old-style Republican conservative, in favor of limited government and a strong defense. LBJ portrayed Goldwater as a threat to America’s prosperity and safety, when it was LBJ who was the real threat. Americans were still in shock about JFK’s assassination, and so they rallied around LBJ, who won 61 percent of the popular vote.

LBJ is known mainly for two things: his “Great Society” program and the war in Vietnam. The Great Society program was an expansion of FDR’s New Deal. It included such things as the creation of Medicare, which is medical care for retired persons that is paid for by taxes. Medicare is an example of a “welfare” program. Welfare programs take money from people who earn it and give money to people who don’t earn it. The Great Society also included many other welfare programs, such as more benefits for persons who are unemployed. The stated purpose of the expansion of welfare programs under the Great Society was to end poverty in America, but that didn’t happen. The reason it didn’t happen is that when people receive welfare they don’t work as hard to take care of themselves and their families, and they don’t save enough money for their retirement. Welfare actually makes people worse off in the long run.

America’s involvement in Vietnam began in the 1950s, when Eisenhower was President. South Vietnam was under attack by Communist guerrillas, who were sponsored by North Vietnam. Small numbers of U.S. forces were sent to South Vietnam to train and advise South Vietnamese forces. More U.S. advisers were sent by JFK, but within a few years after LBJ became President he had turned the war into an American-led defense of South Vietnam against Communist guerrillas and regular North Vietnamese forces. LBJ decided that it was important for the U.S. to defeat a Communist country and stop Communism from spreading in Southeast Asia.

However, LBJ was never willing to commit enough forces in order to win the war. He allowed air attacks on North Vietnam, for example, but he wouldn’t invade North Vietnam because he was afraid that the Chinese Communists might enter the war. In other words, like Truman in Korea, LBJ was unwilling to do what it would take to win the war decisively. Progress was slow and there were a lot of American casualties from the fighting in South Vietnam. American newspapers and TV began to focus attention on the casualties and portray the war as a losing effort. That led a lot of Americans to turn against the war, and college students began to protest the war (because they didn’t want to be drafted). Attention shifted from the war to the protests, giving the world the impression that America had lost its resolve. And it had.

LBJ had become so unpopular because of the war in Vietnam that he decided not to run for President in 1968. Most of the candidates for President campaigned by saying that they would end the war. In effect, the United States had announced to North Vietnam that it would not fight the war to win. The inevitable outcome was the withdrawal of U.S. forces from Vietnam, which finally happened in 1973, under LBJ’s successor, Richard Nixon. South Vietnam was left on its own, and it fell to North Vietnam in 1975.

Richard Milhous Nixon (1913-1994) was a Republican. He won the election of 1968 by beating the Democrat candidate, Hubert H. Humphrey (who had been LBJ’s Vice President), and a third-party candidate, George C. Wallace. Nixon and Humphrey each received 43 percent of the popular vote; Wallace received 14 percent. If Wallace had not been a candidate, most of the votes cast for him probably would have been cast for Nixon.

Even though Nixon received less than half of the popular vote, he won the election because he received a majority of electoral votes. Electoral votes are awarded to the winner of each State’s popular vote. Nixon won a lot more States than Humphrey and Wallace, so Nixon became President.

Nixon won re-election in 1972, with 61 percent of the popular vote, by beating a Democrat (George McGovern) who would have expanded LBJ’s Great Society and cut America’s armed forces even more than they were cut after the Vietnam War ended. Nixon’s victory was more a repudiation of McGovern than it was an endorsement of Nixon. His second term ended in disgrace when he resigned the presidency on August 9, 1974.

Nixon called himself a conservative, but he did nothing during his presidency to curb the power of government. He did not cut back on the Great Society. He spent a lot of time on foreign policy. But Nixon's diplomatic efforts did nothing to make the USSR and Communist China friendlier to the United States. Nixon had shown that he was essentially a weak President by allowing U.S. forces to withdraw from Vietnam. Dictatorial rulers like those of the USSR and Communist China do not respect countries that display weakness.

Nixon was the first (and only) President who resigned from office. He resigned because the House of Representatives was ready to impeach him. An impeachment is like a criminal indictment; it is a set of charges against the holder of a public office. If Nixon had been impeached by the House of Representatives, he would have been tried by the Senate. If two-thirds of the Senators had voted to convict him he would have been removed from office. Nixon knew that he would be impeached and convicted, so he resigned.

The main charge against Nixon was that he ordered his staff to cover up his involvement in a crime that happened in 1972, when Nixon was running for re-election. The crime was a break-in at the headquarters of the Democratic Party, which was located in the Watergate building in Washington, D.C. That is why the episode became known as the Watergate Scandal.

The purpose of the break-in was to obtain documents that might help Nixon’s re-election effort. The men who participated in the break-in were hired by aides to Nixon. Details about the break-in and Nixon’s involvement were revealed as a result of investigations by Congress, which were helped by reporters who were doing their own investigative work.

But there is good reason to believe that Nixon was unjustly forced from office by the concerted efforts of the news media (most of which had long been biased against Nixon), Democrats in Congress, and many Republicans who were anxious to rid themselves of Nixon, who was a magnet for controversy.

Gerald Rudolph Ford (born Leslie King Jr.) (1913 – 2007), who was Nixon’s Vice President at the time Nixon resigned, became President on August 9, 1974 and served until January 20, 1977. Ford succeeded Spiro T. Agnew, who had been Nixon’s Vice President until October 10, 1973, when he resigned because he had been taking bribes while he was Governor of Maryland (the job he had before becoming Vice President).

Ford became the first Vice President chosen in accordance with the Twenty-Fifth Amendment to the Constitution. That amendment spells out procedures for filling vacancies in the presidency and vice presidency. When Vice President Agnew resigned, President Nixon nominated Ford as Vice President, and the nomination was approved by a majority vote of the House and Senate. Then, when Ford became President, he nominated Nelson Rockefeller to fill the vice presidency, and Rockefeller was likewise confirmed by majority votes of the House and Senate.

Ford ran for election to a full term in 1976, but he was defeated by James Earl Carter, mainly because of the Watergate Scandal. Ford was not involved in the scandal, but voters often cast votes for silly reasons. Carter's election was a rejection of Richard Nixon, who had left office two years earlier, not a vote of confidence in Carter.

James Earl (“Jimmy”) Carter Jr. (1924 – ), a Democrat who had been Governor of Georgia, received only 50 percent of the popular vote. He was defeated for re-election in 1980, so he served as President from January 20, 1977 to January 20, 1981.

Carter was an ineffective President who failed at the most important duty of a President, which is to protect Americans from foreign enemies. His failure came late in his term of office, during the Iran Hostage Crisis. The Shah of Iran had ruled the country for 38 years. He was overthrown in 1979 by a group of Muslim clerics (religious men) who disliked the Shah's pro-American policies. In November 1979 a group of students loyal to the new Muslim government of Iran invaded the American embassy in Tehran (Iran's capital city) and took 66 hostages. Carter approved rescue efforts, but they were poorly planned. The hostages were still captive at the time of the presidential election in 1980. Carter lost the election largely because of his feeble rescue efforts.

In recent years Carter has become an outspoken critic of America’s foreign policy. Carter is sympathetic to America’s enemies and he opposes strong military action in defense of America.

Ronald Wilson Reagan (1911-2004), a Republican, succeeded Jimmy Carter as President. Reagan won 51 percent of the popular vote in 1980. Reagan would have received more votes, but a former Republican (John Anderson) ran as a third-party candidate and took 7 percent of the popular vote. Reagan was re-elected in 1984 with 59 percent of the popular vote. He served as President from January 20, 1981, until January 20, 1989.

Reagan had two goals as President: to reduce the size of government and to increase America’s military strength. He was unable to reduce the size of government because, for most of his eight years in office, Democrats were in control of Congress. But Reagan was able to get Congress to approve large reductions in income-tax rates. Those reductions led to more spending on consumer goods and more investment in the creation of new businesses. As a result, Americans had more jobs and higher incomes.

Reagan succeeded in rebuilding America's military strength. He knew that the only way to defeat the USSR, without going to war, was to show the USSR that the United States was stronger. A lot of people in the United States opposed spending more on military forces; they thought that it would cause the USSR to spend more. They also thought that a war between the U.S. and USSR would result. Reagan knew better. He knew that the USSR could not afford to keep up with the United States. Reagan was right. Not long after the end of his presidency the countries of Eastern Europe saw that the USSR was really a weak country, and they began to break away from Soviet control. Residents of Berlin demolished the Berlin Wall, which the East German Communists had erected in 1961, with Soviet backing, to keep East Berliners from crossing over into West Berlin. East Germany was freed from Communist rule, and it reunited with West Germany. The USSR collapsed, and many of the countries that had been part of the USSR became independent. We owe the end of the Soviet Union and its influence to President Reagan's determination to defeat the threat posed by the Soviet Union.

George Herbert Walker Bush (1924 – 2019), a Republican, was Reagan's Vice President. He won 53 percent of the popular vote when he defeated his Democrat opponent, Michael Dukakis, in the election of 1988. Bush lost the election of 1992. He served as President from January 20, 1989 to January 20, 1993.

The main event of Bush’s presidency was the Gulf War of 1990-1991. Iraq, whose ruler was Saddam Hussein, invaded the small neighboring country of Kuwait. Kuwait produces and exports a lot of oil. The occupation of Kuwait by Iraq meant that Saddam Hussein might have been able to control the amount of oil shipped to other countries, including Europe and the United States. If Hussein had been allowed to control Kuwait, he might have moved on to Saudi Arabia, which produces much more oil than Kuwait. President Bush asked Congress to approve military action against Iraq. Congress approved the action, although most Democrats voted against giving President Bush authority to defend Kuwait. The war ended in a quick defeat for Iraq’s armed forces. But President Bush decided not to allow U.S. forces to finish the job and end Saddam Hussein’s reign as ruler of Iraq.

Bush’s other major blunder was to raise taxes, which helped to cause a recession. The country was recovering from the recession in 1992, when Bush ran for re-election, but his opponents were able to convince voters that Bush hadn’t done enough to end the recession. In spite of his quick (but incomplete) victory in the Persian Gulf War, Bush lost his bid for re-election because voters were concerned about the state of the economy.

William Jefferson Clinton (born William Jefferson Blythe III) (1946 – ), a Democrat, defeated George H.W. Bush in the 1992 election by gaining a majority of the electoral vote. But Clinton won only 43 percent of the popular vote. Bush won 37 percent, and 19 percent went to H. Ross Perot, a third-party candidate who received many votes that probably would have been cast for Bush.

Clinton's presidency got off to a bad start when he sent to Congress a proposal that would have put health care under government control. Congress rejected the plan, and a year later (in 1994) voters went to the polls in large numbers to elect Republican majorities to the House and Senate.

Clinton was able to win re-election in 1996, but he received only 49 percent of the popular vote. He was re-elected mainly because fewer Americans were out of work and incomes were rising. This economic “boom” was a continuation of the recovery that began under President Reagan. Clinton got credit for the “boom” of the 1990s, which occurred in spite of tax increases passed by Congress while it was still controlled by Democrats.

Clinton was perceived as a "moderate" Democrat because he tried to balance the government's budget; that is, he tried not to spend more money than the government was receiving in taxes. He was eventually able to balance the budget, but only because he cut defense spending. In addition to that, Clinton made several bad decisions about defense issues. In 1993 he decided to withdraw American troops from Somalia, rather than continue the military mission there, after American soldiers were killed and captured by Somali militiamen. In 1994 he signed an agreement with North Korea that was supposed to keep North Korea from developing nuclear weapons, but the North Koreans continued to work on building nuclear weapons because they had fooled Clinton. By 1998 Clinton knew that al Qaeda had become a major threat when terrorists bombed two U.S. embassies in Africa, but Clinton failed to go to war against al Qaeda. Only after terrorists struck a Navy ship, the USS Cole, in 2000 did Clinton declare terrorism to be a major threat. By then, his term of office was almost over.

Clinton was the second President to be impeached. The House of Representatives impeached him in 1998. He was charged with perjury (lying under oath) when he was the defendant (the person being charged with wrongdoing) in a lawsuit. The Senate didn't convict Clinton because every Democrat senator refused to vote for conviction, in spite of overwhelming evidence that Clinton was guilty. The day before Clinton left office he acknowledged his guilt by agreeing to a five-year suspension of his law license. A federal judge later found Clinton guilty of contempt of court for his misleading testimony and fined him $90,000.

Clinton was involved in other scandals during his presidency, but he remains popular with many people because he is good at giving the false impression that he is a nice, humble person.

Clinton's scandals had a greater effect on his Vice President, Al Gore, than on Clinton himself. Gore ran for President as the nominee of the Democrat Party in 2000. His main opponent was George W. Bush, a Republican. A third-party candidate named Ralph Nader also received a lot of votes. The election of 2000 was the closest presidential election since 1876. Bush and Gore each won about 48 percent of the popular vote (Gore's percentage was slightly higher than Bush's); Nader won 3 percent. The winner of the election was decided by the outcome of the vote in Florida. That outcome was the subject of legal proceedings for six weeks. It had to be decided by the U.S. Supreme Court.

Initial returns in Florida gave that State’s electoral votes to Bush, which meant that he would become President. But the Supreme Court of Florida decided that election officials should violate Florida’s election laws and keep recounting the ballots in certain counties. Those counties were selected because they had more Democrats than Republicans, and so it was likely that recounts would favor Gore, the Democrat. The case finally went to the U.S. Supreme Court, which decided that the Florida Supreme Court was wrong. The U.S. Supreme Court ordered an end to the recounts, and Bush was declared the winner of Florida’s electoral votes.

George Walker Bush (1946 – ), a Republican, was the second son of a President to become President. (The first was John Quincy Adams, the sixth President, whose father, John Adams, was the second President. Also, Benjamin Harrison, the 23rd President, was the grandson of William Henry Harrison, the ninth President.) Bush won re-election in 2004, with 51 percent of the popular vote. He served as President from January 20, 2001, to January 20, 2009.

President Bush’s major accomplishment before September 11, 2001, was to get Congress to cut taxes. The tax cuts were necessary because the economy had been in a recession since 2000. The tax cuts gave people more money to spend and encouraged businesses to expand and create new jobs.

The terrorist attacks on September 11, 2001, caused President Bush to give most of his time and attention to the War on Terror. The invasion of Afghanistan, late in 2001, was part of a larger campaign to disrupt terrorist activities. Afghanistan was ruled by the Taliban, a group that gave support and shelter to al Qaeda terrorists. The U.S. quickly defeated the Taliban and destroyed al Qaeda bases in Afghanistan.

The invasion of Iraq, which took place in 2003, was also intended to combat al Qaeda, but in a different way. Iraq, under Saddam Hussein, had been an enemy of the U.S. since the Persian Gulf War of 1990-1991. Hussein was trying to acquire deadly weapons to use against the U.S. and its allies. Hussein was also giving money to terrorists and sheltering them in Iraq. The defeat of Hussein, which came quickly after the invasion of Iraq, was intended to establish a stable, friendly government in the Middle East.

The invasion of Iraq produced some of the intended results, but there was much unrest there because of long-standing animosity between Sunni Muslims and Shi’a Muslims. There was also much defeatist talk about Iraq — especially by Democrats and the media. That defeatist talk helped to encourage those who were creating unrest in Iraq. It gave them hope that the U.S. would abandon Iraq, just as it abandoned Vietnam more than 30 years earlier. The country had become almost uncontrollable until Bush authorized a military “surge” — enough additional troops to quell the unrest.

However, Bush, like his father, failed to take a strategically decisive course of action. He should have ended the pretense of “nation-building”, beefed up U.S. military presence, and installed a compliant Iraqi government. That would have created a U.S. stronghold in the Middle East and stifled Iran’s moves toward regional hegemony, just as the presence of U.S. forces in Europe for decades after World War II kept the USSR from seizing new territory and eventually wore it down.

With Iraq as a U.S. base of operations, it would have been easier to quell Afghanistan and to launch preemptive strikes on Iran’s nuclear-weapons program while it was still in its early stages.

But the early failures in Iraq — and the futility of the Afghan operation (also done on the cheap) — meant that Bush had no political backing for bolder military measures. Further, the end of his second term was blighted by a financial crisis that led to a stock-market crash, the failure of some major financial firms, the bailout of some others, and thence to the Great Recession.

The election of 2008 coincided with the economic downturn, and it was no surprise that the Democrat candidate handily beat the feckless Republican (in-name-only) candidate, John Sidney McCain III.

Barack Hussein Obama II (1961 – ) was the Democrat who defeated McCain. Obama, like most of his predecessors, was a professional politician, but most of his political experience was as a “community organizer” (i.e., rabble-rouser and shakedown artist) in Chicago. He was still serving in his first major office (as U.S. Senator from Illinois) when he vaulted ahead of Hillary Rodham Clinton and seized the Democrat nomination for the presidency. He served as President from January 20, 2009, until January 20, 2017.

Obama's ascendancy was owed in large part to the perception of him as youthful and energetic. He was careful to seem moderate in his campaign rhetoric, though those in the know (party leaders and activists) were well aware of his strong left-wing leanings, which were revealed in his Senate votes and positions. Clinton, by contrast, was perceived as middle-of-the-road, but only because the road had shifted well to the left over the years. It was she, for example, who propounded the health-care nationalization scheme known as HillaryCare. The scheme was defeated in Congress, but it was responsible in large part for the massive swing of House seats in 1994, which returned the House to GOP control for the first time in 42 years.

Obama's election was due also to a healthy dose of white "guilt". Here was an opportunity for many voters to "prove" (and to brag about) their lack of racism. And so, given the experience of Iraq, the onset of the Great Recession, and a me-too Republican candidate, they did the easy thing by voting for Obama, and enjoyed the feel-good sensation that went with it.

At any rate, Obama served two terms (the second was secured by defeating Willard Mitt Romney, another feckless RINO). His presidency throughout both terms was marked by disastrous policies; for example:

  • Obamacare, which drastically raised health-care costs and insurance premiums and added millions of freeloaders to Medicaid
  • encouragement of illegal immigration, which imposes heavy burdens on middle-class taxpayers and is intended to swell the rolls of Democrat voters through amnesty schemes
  • increases in marginal tax rates for individuals and businesses
  • issuance of economically stultifying regulations at an unprecedented pace
  • nomination of dozens of left-wing judges and two left-wing Supreme Court Justices, partly to ensure “empathic” (leftist) rulings rather than rulings in accordance with the Constitution
  • sharp reductions in defense spending
  • meddling in Libya, which through Hillary Clinton’s negligence cost the lives of American diplomats
  • Clinton’s use of a private e-mail server, in which Obama was complicit, and which resulted in the compromise of sensitive, classified information.
  • a drastic military draw-down in Iraq, with immediately dire consequences (and a just-in-time reversal by Obama)
  • persistent anti-white and anti-American rhetoric (the latter especially on foreign soil and at the UN)
  • persistent anti-business rhetoric that, together with tax increases and regulatory excesses, killed the recovery from the Great Recession and put the U.S. firmly on the road to economic stagnation.

It should therefore have been a simple matter for voters to reject Obama's inevitable successor: Hillary Clinton. But the American public has been indoctrinated in leftism for decades by public schools, the mainstream media, and a plethora of TV shows and movies, with the result that Clinton acquired nearly 3 million more popular votes, nationwide, than did her Republican opponent. The foresight of the Framers of the Constitution proved providential because her opponent carefully chose his battlegrounds and won handily in the electoral college. Thus …

Donald John Trump (1946 – ) succeeded Obama and was inaugurated as President on January 20, 2017. He is only in the third year of his presidency, but has accomplished much despite a “resistance” movement that began as soon as his election was assured in the early-morning hours of November 9, 2016. (The “resistance”, which I discuss here, is a continuation of political and social trends that are rooted in the 1960s.)

These are among Trump's accomplishments, many of them the result of successful collaboration with Congress, both houses of which Republicans controlled during the first two years of Trump's presidency (the Senate remains under GOP control):

  • the end of Obamacare's requirement to buy some form of health insurance or pay a "tax", which penalized the healthy and forced many to do something that they would otherwise not do
  • discouragement of illegal immigration through tougher enforcement (against a huge, left-wing financed influx of illegals)
  • decreases in marginal tax rates for individuals and businesses
  • the repeal of many economically stultifying regulations and a drastic slowdown in the issuance of regulations
  • nomination of dozens of conservative judges and two conservative Supreme Court Justices
  • sharp increases in defense spending
  • the beginning of the end of foreign adventures that are unrelated to the interests of Americans (e.g., the drawdown in Syria)
  • relative stability in Iraq
  • pro-American rhetoric on foreign soil and at the UN
  • persistent pro-business rhetoric that, together with tax-rate cuts and regulatory reform, is helping to buoy the U.S. economy despite slowdowns elsewhere and Trump’s “trade war”, which is really aimed at creating a level playing field for American companies and workers.

This story will be continued.

The Ukraine Controversy and Trump’s Popularity

The dust hasn’t settled yet, and probably won’t settle for many months, but here’s an interim report. The graph below depicts Trump’s approval ratings, according to a reliable source: Rasmussen Reports:

Rasmussen's polling method covers all respondents (a sample of likely voters) over a span of three days. The gaps represent weekends, when Rasmussen doesn't publish the results of the presidential approval poll.

The Washington Post broke the story on September 20 about Trump’s July 25 phone conversation with the president of Ukraine. Thus the results for September 16 through September 20 did not reflect the effects of the story on the views of Rasmussen’s respondents. Trump’s approval ratings continued to rise after September 20, and peaked on September 24. The ratings bottomed out on October 3 and have since returned to about where they were on September 16.

In sum, the reaction of likely voters to the Trump-Ukraine controversy is rather tepid. It may heat up if the House moves decisively toward writing articles of impeachment, with the controversy as a centerpiece. But as of now it seems (to me) that likely voters are reacting as follows:

  • They don't care much about U.S.-Ukraine affairs.
  • They don't care much about what Trump might have done to get Ukraine to investigate Joe Biden's son.
  • Biden's history with Ukraine is as crooked as Trump's putative attempt to extort an investigation of Biden's son by Ukraine.
  • Even if the charges against Trump are true, in this case it's just Trump being Trump. Ho hum.

P.S. If you’re unimpressed by Trump’s approval ratings, you should read this post. Next to Obama he is a hero.

More about Modeling and Science

This post is based on a paper that I wrote 38 years ago. The subject then was the bankruptcy of warfare models, which shows through in parts of this post. I am trying here to generalize the message to encompass all complex, synthetic models (defined below). For ease of future reference, I have created a page that includes links to this post and the many that are listed at the bottom.

THE METAPHYSICS OF MODELING

Alfred North Whitehead said in Science and the Modern World (1925) that “the certainty of mathematics depends on its complete abstract generality” (p. 25). The attraction of mathematical models is their apparent certainty. But a model is only a representation of reality, and its fidelity to reality must be tested rather than assumed. And even if a model seems faithful to reality, its predictive power is another thing altogether. We are living in an era when models that purport to reflect reality are given credence despite their lack of predictive power. Ironically, those who dare point this out are called anti-scientific and science-deniers.

To begin at the beginning, I am concerned here with what I will call complex, synthetic models of abstract variables like GDP and “global” temperature. These are open-ended, mathematical models that estimate changes in the variable of interest by attempting to account for many contributing factors (parameters) and describing mathematically the interactions between those factors. I call such models complex because they have many “moving parts” — dozens or hundreds of sub-models — each of which is a model in itself. I call them synthetic because the estimated changes in the variables of interest depend greatly on the selection of sub-models, the depictions of their interactions, and the values assigned to the constituent parameters of the sub-models. That is to say, compared with a model of the human circulatory system or an internal combustion engine, a synthetic model of GDP or “global” temperature rests on incomplete knowledge of the components of the systems in question and the interactions among those components.

Modelers seem ignorant of or unwilling to acknowledge what should be a basic tenet of scientific inquiry: the complete dependence of logical systems (such as mathematical models) on the underlying axioms (assumptions) of those systems. Kurt Gödel addressed this dependence in his incompleteness theorems:

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic….

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.

There is the view that Gödel’s theorems aren’t applicable in fields outside of mathematical logic. But any quest for certainty about the physical world necessarily uses mathematical logic (which includes statistics).

This doesn’t mean that the results of computational exercises are useless. It simply means that they are only as good as the assumptions that underlie them; for example, assumptions about relationships between parameters, assumptions about the values of the parameters, and assumptions as to whether the correct parameters have been chosen (and properly defined) in the first place.

There is nothing new in that, certainly nothing that requires Gödel’s theorems by way of proof. It has long been understood that a logical argument may be valid — the conclusion follows from the premises — but untrue if the premises (axioms) are untrue. But it bears repeating — and repeating.

REAL MODELERS AT WORK

There have been mathematical models of one kind and another for centuries, but formal models weren’t used much outside the “hard sciences” until the development of microeconomic theory in the 19th century. Then came F.W. Lanchester, who during World War I devised what became known as Lanchester’s laws (or Lanchester’s equations), which are

mathematical formulae for calculating the relative strengths of military forces. The Lanchester equations are differential equations describing the time dependence of two [opponents’] strengths A and B as a function of time, with the function depending only on A and B.

Lanchester’s equations are nothing more than abstractions that must be given a semblance of reality by the user, who is required to make myriad assumptions (explicit and implicit) about the factors that determine the “strengths” of A and B, including but not limited to the relative killing power of various weapons, the effectiveness of opponents’ defenses, the importance of the speed and range of movement of various weapons, intelligence about the location of enemy forces, and commanders’ decisions about when, where, and how to engage the enemy. It should be evident that the predictive value of the equations, when thus fleshed out, is limited to small, discrete engagements, such as brief bouts of aerial combat between two (or a few) opposing aircraft. Alternatively — and in practice — the values are selected so as to yield results that mirror what actually happened (in the “replication” of a historical battle) or what “should” happen (given the preferences of the analyst’s client).
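
To make the square-law form of the equations concrete, here is a minimal sketch in Python (my illustration, not Lanchester's). The attrition coefficients, starting strengths, and time step are assumed values of exactly the kind a user must supply:

    # Minimal sketch of Lanchester's square law: dA/dt = -beta*B, dB/dt = -alpha*A.
    # alpha, beta (per-unit effectiveness), A0, B0, dt, and t_max are illustrative assumptions.
    def lanchester_square(A0, B0, alpha, beta, dt=0.01, t_max=50.0):
        A, B, t = float(A0), float(B0), 0.0
        while A > 0.0 and B > 0.0 and t < t_max:
            # each side's losses in this time step depend on the other side's current strength
            A, B = max(A - beta * B * dt, 0.0), max(B - alpha * A * dt, 0.0)
            t += dt
        return A, B, t

    # Example: equal per-unit effectiveness, but side A starts with a 3:2 numerical edge.
    print(lanchester_square(A0=300, B0=200, alpha=0.05, beta=0.05))

Change any of the assumed numbers and the "prediction" changes accordingly, which is the point: the equations do no more than propagate the user's assumptions.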

More complex (and realistic) mathematical modeling (also known as operations research) had seen limited use in industry and government before World War II. Faith in the explanatory power of mathematical models was burnished by their use during the war, where such models seemed to be of aid in the design of more effective tactics and weapons.

But the foundation of that success wasn’t the mathematical character of the models. Rather, it was the fact that the models were tested against reality. Philip M. Morse and George E. Kimball put it well in Methods of Operations Research (1946):

Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [Op cit., p. 10]

A mathematical model doesn’t represent scientific knowledge unless its predictions can be and have been tested. Even then, a valid model can represent only a narrow slice of reality. The expansion of a model beyond that narrow slice requires the addition of parameters whose interactions may not be well understood and whose values will be uncertain.

Morse and Kimball accordingly urged “hemibel thinking”:

Having obtained the constants of the operations under study … we compare the value of the constants obtained in actual operations with the optimum theoretical value, if this can be computed. If the actual value is within a hemibel ( … a factor of 3) of the theoretical value, then it is extremely unlikely that any improvement in the details of the operation will result in significant improvement. [When] there is a wide gap between the actual and theoretical results … a hint as to the possible means of improvement can usually be obtained by a crude sorting of the operational data to see whether changes in personnel, equipment, or tactics produce a significant change in the constants. [Op cit., p. 38]

Should we really attach little significance to differences of less than a hemibel? Consider a five-parameter model involving the conditional probabilities of detecting, shooting at, hitting, and killing an opponent — and surviving, in the first place, to do any of these things. Such a model can easily yield a cumulative error of a hemibel (or greater), given a twenty-five percent error in the value of each parameter. (Mathematically, 1.25^5 ≈ 3.05; alternatively, 0.75^5 ≈ 0.24, or about one-fourth.)
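
Here is that arithmetic as a minimal sketch in Python, with the twenty-five percent per-parameter error carried over from the example above as an assumption:

    # Compounding of errors across five multiplied parameters, each off by 25%.
    # The 25% figure and the five-parameter structure are the illustrative assumptions above.
    per_param_error = 0.25
    n_params = 5
    high = (1 + per_param_error) ** n_params  # every parameter overstated by 25%
    low = (1 - per_param_error) ** n_params   # every parameter understated by 25%
    print(round(high, 2), round(low, 2))      # roughly 3.05 and 0.24 -- a hemibel either way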

ANTI-SCIENTIFIC MODELING

What does this say about complex, synthetic models such as those of economic activity or “climate change”? Any such model rests on the modeler’s assumptions as to the parameters that should be included, their values (and the degree of uncertainty surrounding them), and the interactions among them. The interactions must be modeled based on further assumptions. And so assumptions and uncertainties — and errors — multiply apace.

But the prideful modeler (I have yet to meet a humble one) will claim validity if his model has been fine-tuned to replicate the past (e.g., changes in GDP, “global” temperature anomalies). But the model is useless unless it predicts the future consistently and with great accuracy, where “great” means accurately enough to validly represent the effects of public-policy choices (e.g., setting the federal funds rate, investing in CO2 abatement technology).

Macroeconomic Modeling: A Case Study

In macroeconomics, for example, there is Professor Ray Fair, who teaches macroeconomic theory, econometrics, and macroeconometric modeling at Yale University. He has been plying his trade at prestigious universities since 1968, first at Princeton, then at MIT, and since 1974 at Yale. Professor Fair has since 1983 been forecasting changes in real GDP — not decades ahead, just four quarters (one year) ahead. He has made 141 such forecasts, the earliest of which covers the four quarters ending with the second quarter of 1984, and the most recent of which covers the four quarters ending with the second quarter of 2019. The forecasts are based on a model that Professor Fair has revised many times over the years. The current model is here. His forecasting track record is here. How has he done? Here's how:

1. The median absolute error of his forecasts is 31 percent.

2. The mean absolute error of his forecasts is 69 percent.

3. His forecasts are rather systematically biased: too high when real, four-quarter GDP growth is less than 3 percent; too low when real, four-quarter GDP growth is greater than 3 percent.

4. His forecasts have grown generally worse — not better — with time. Recent forecasts are better, but still far from the mark.

Thus:


This and the next two graphs were derived from The Forecasting Record of the U.S. Model, Table 4: Predicted and Actual Values for Four-Quarter Real Growth, at Prof. Fair’s website. The vertical axis of this graph is truncated for ease of viewing, as noted in the caption.
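
For readers who want to reproduce those error measures, here is a minimal sketch in Python. The two lists are placeholders, not Fair's numbers; the real values are in Table 4 at his website:

    # Median and mean absolute percentage errors of four-quarter growth forecasts.
    # The values below are made-up placeholders; substitute the figures from Table 4.
    from statistics import mean, median

    predicted = [2.8, 3.1, 1.9, 2.5]  # hypothetical forecast growth rates (percent)
    actual = [2.0, 3.5, 1.2, 2.9]     # hypothetical actual growth rates (percent)

    abs_pct_errors = [abs(p - a) / abs(a) for p, a in zip(predicted, actual)]
    print(f"median: {median(abs_pct_errors):.0%}, mean: {mean(abs_pct_errors):.0%}")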

You might think that Fair’s record reflects the persistent use of a model that’s too simple to capture the dynamics of a multi-trillion-dollar economy. But you’d be wrong. The model changes quarterly. This page lists changes only since late 2009; there are links to archives of earlier versions, but those are password-protected.

As for simplicity, the model is anything but simple. For example, go to Appendix A: The U.S. Model: July 29, 2016, and you’ll find a six-sector model comprising 188 equations and hundreds of variables.

And what does that get you? A weak predictive model:

It fails a crucial test, in that it doesn’t reflect the downward trend in economic growth:

General Circulation Models (GCMs) and “Climate Change”

As for climate models, Dr. Tim Ball writes about a

fascinating 2006 paper by Essex, McKitrick, and Andresen that asked, "Does a Global Temperature Exist?" Their introduction sets the scene,

It arises from projecting a sampling of the fluctuating temperature field of the Earth onto a single number (e.g. [3], [4]) at discrete monthly or annual intervals. Proponents claim that this statistic represents a measurement of the annual global temperature to an accuracy of ±0.05°C (see [5]). Moreover, they presume that small changes in it, up or down, have direct and unequivocal physical meaning.

The word “sampling” is important because, statistically, a sample has to be representative of a population. There is no way that a sampling of the “fluctuating temperature field of the Earth,” is possible….

… The reality is we have fewer stations now than in 1960 as NASA GISS explain (Figure 1a, # of stations and 1b, Coverage)….

Not only that, but the accuracy is terrible. US stations are supposedly the best in the world but as Anthony Watts's project showed, only 7.9% of them achieve better than a 1°C accuracy. Look at the quote above. It says the temperature statistic is accurate to ±0.05°C. In fact, for most of the 406 years since instrumental measures of temperature became available (in 1612), they were incapable of yielding measurements better than 0.5°C.

The coverage numbers (1b) are meaningless because there are only weather stations for about 15% of the Earth’s surface. There are virtually no stations for

  • 70% of the world that is oceans,
  • 20% of the land surface that are mountains,
  • 20% of the land surface that is forest,
  • 19% of the land surface that is desert and,
  • 19% of the land surface that is grassland.

The result is we have inadequate measures in terms of the equipment and how it fits the historic record, combined with a wholly inadequate spatial sample. The inadequacies are acknowledged by the creation of the claim by NASA GISS and all promoters of anthropogenic global warming (AGW) that a station is representative of a 1200 km radius region.

I plotted an illustrative example on a map of North America (Figure 2).

Figure 2

Notice that the claim for the station in eastern North America includes the subarctic climate of southern James Bay and the subtropical climate of the Carolinas.

However, it doesn’t end there because this is only a meaningless temperature measured in a Stevenson Screen between 1.25 m and 2 m above the surface….

The Stevenson Screen data [are] inadequate for any meaningful analysis or as the basis of a mathematical computer model in this one sliver of the atmosphere, but there [are] even less [data] as you go down or up. The models create a surface grid that becomes cubes as you move up. The number of squares in the grid varies with the naïve belief that a smaller grid improves the models. It would if there [were] adequate data, but that doesn’t exist. The number of cubes is determined by the number of layers used. Again, theoretically, more layers would yield better results, but it doesn’t matter because there are virtually no spatial or temporal data….

So far, I have talked about the inadequacy of the temperature measurements in light of the two- and three-dimensional complexities of the atmosphere and oceans. However, one source identifies the most important variables for the models used as the basis for energy and environmental policies across the world.

Sophisticated models, like Coupled General Circulation Models, combine many processes to portray the entire climate system. The most important components of these models are the atmosphere (including air temperature, moisture and precipitation levels, and storms); the oceans (measurements such as ocean temperature, salinity levels, and circulation patterns); terrestrial processes (including carbon absorption, forests, and storage of soil moisture); and the cryosphere (both sea ice and glaciers on land). A successful climate model must not only accurately represent all of these individual components, but also show how they interact with each other.

The last line is critical and yet impossible. The temperature data [are] the best we have, and yet [they are] completely inadequate in every way. Pick any of the variables listed, and you find there [are] virtually no data. The answer to the question, “what are we really measuring,” is virtually nothing, and what we measure is not relevant to anything related to the dynamics of the atmosphere or oceans.

I am especially struck by Dr. Ball’s observation that the surface-temperature record applies to about 15 percent of Earth’s surface. Not only that, but as suggested by Dr. Ball’s figure 2, that 15 percent is poorly sampled.

And yet the proponents of CO2-forced “climate change” rely heavily on that flawed temperature record because it is the only one that goes back far enough to “prove” the modelers’ underlying assumption, namely, that it is anthropogenic CO2 emissions which have caused the rise in “global” temperatures. See, for example, Dr. Roy Spencer’s “The Faith Component of Global Warming Predictions“, wherein Dr. Spencer points out that the modelers

have only demonstrated what they assumed from the outset. It is circular reasoning. A tautology. Evidence that nature also causes global energy imbalances is abundant: e.g., the strong warming before the 1940s; the Little Ice Age; the Medieval Warm Period. This is why many climate scientists try to purge these events from the historical record, to make it look like only humans can cause climate change.

In fact the models deal in temperature anomalies, that is, departures from a 30-year average. The anomalies — which range from -1.41 to +1.68 degrees C — are so small relative to the errors and uncertainties inherent in the compilation, estimation, and model-driven adjustments of the temperature record, that they must fail Morse and Kimball’s hemibel test. (The model-driven adjustments are, as Dr. Spencer suggests, downward adjustments of historical temperature data for consistency with the models which “prove” that CO2 emissions induce a certain rate of warming. More circular reasoning.)

They also fail, and fail miserably, the acid test of predicting future temperatures with accuracy. This failure has been pointed out many times. Dr. John Christy, for example, has testified to that effect before Congress (e.g., this briefing). Defenders of the "climate change" faith have attacked Dr. Christy's methods and findings, but the rebuttals to one such attack merely underscore the validity of Dr. Christy's work.

This is from "Manufacturing Alarm: Dana Nuccitelli's Critique of John Christy's Climate Science Testimony", by Marlo Lewis Jr.:

Christy’s testimony argues that the state-of-the-art models informing agency analyses of climate change “have a strong tendency to over-warm the atmosphere relative to actual observations.” To illustrate the point, Christy provides a chart comparing 102 climate model simulations of temperature change in the global mid-troposphere to observations from two independent satellite datasets and four independent weather balloon data sets….

To sum up, Christy presents an honest, apples-to-apples comparison of modeled and observed temperatures in the bulk atmosphere (0-50,000 feet). Climate models significantly overshoot observations in the lower troposphere, not just in the layer above it. Christy is not “manufacturing doubt” about the accuracy of climate models. Rather, Nuccitelli is manufacturing alarm by denying the models’ growing inconsistency with the real world.

And this is from Christopher Monckton of Brenchley’s “The Guardian’s Dana Nuccitelli Uses Pseudo-Science to Libel Dr. John Christy“:

One Dana Nuccitelli, a co-author of the 2013 paper that found 0.5% consensus to the effect that recent global warming was mostly manmade and reported it as 97.1%, leading Queensland police to inform a Brisbane citizen who had complained to them that a “deception” had been perpetrated, has published an article in the British newspaper The Guardian making numerous inaccurate assertions calculated to libel Dr John Christy of the University of Alabama in connection with his now-famous chart showing the ever-growing discrepancy between models’ wild predictions and the slow, harmless, unexciting rise in global temperature since 1979….

… In fact, as Mr Nuccitelli knows full well (for his own data file of 11,944 climate science papers shows it), the “consensus” is only 0.5%. But that is by the bye: the main point here is that it is the trends on the predictions compared with those on the observational data that matter, and, on all 73 models, the trends are higher than those on the real-world data….

[T]he temperature profile [of the oceans] at different strata shows little or no warming at the surface and an increasing warming rate with depth, raising the possibility that, contrary to Mr Nuccitelli’s theory that the atmosphere is warming the ocean, the ocean is instead being warmed from below, perhaps by some increase in the largely unmonitored magmatic intrusions into the abyssal strata from the 3.5 million subsea volcanoes and vents most of which Man has never visited or studied, particularly at the mid-ocean tectonic divergence boundaries, notably the highly active boundary in the eastern equatorial Pacific. [That possibility is among many which aren’t considered by GCMs.]

How good a job are the models really doing in their attempts to predict global temperatures? Here are a few more examples:

Mr Nuccitelli’s scientifically illiterate attempts to challenge Dr Christy’s graph are accordingly misconceived, inaccurate and misleading.

I have omitted the bulk of both pieces because this post is already longer than needed to make my point. I urge you to follow the links and read the pieces for yourself.

Finally, I must quote a brief but telling passage from a post by Pat Frank, “Why Roy Spencer’s Criticism is Wrong“:

[H]ere's NASA on clouds and resolution: "A doubling in atmospheric carbon dioxide (CO2), predicted to take place in the next 50 to 100 years, is expected to change the radiation balance at the surface by only about 2 percent. … If a 2 percent change is that important, then a climate model to be useful must be accurate to something like 0.25%. Thus today's models must be improved by about a hundredfold in accuracy, a very challenging task."

Frank’s very long post substantiates what I say here about the errors and uncertainties in GCMs — and the multiplicative effect of those errors and uncertainties. I urge you to read it. It is telling that “climate skeptics” like Spencer and Frank will argue openly, whereas “true believers” work clandestinely to present a united front to the public. It’s science vs. anti-science.

CONCLUSION

In the end, complex, synthetic models can be defended only by resorting to the claim that they are “scientific”, which is a farcical claim when models consistently fail to yield accurate predictions. It is a claim based on a need to believe in the models — or, rather, what they purport to prove. It is, in other words, false certainty, which is the enemy of truth.

Newton said it best:

I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

Just as Newton’s self-doubt was not an attack on science, neither have I essayed an attack on science or modeling — only on the abuses of both that are too often found in the company of complex, synthetic models. It is too easily forgotten that the practice of science (of which modeling is a tool) is in fact an art, not a science. With this art we may portray vividly the few pebbles and shells of truth that we have grasped; we can but vaguely sketch the ocean of truth whose horizons are beyond our reach.


Related pages and posts:

Climate Change
Modeling and Science

Modeling Is Not Science
Modeling, Science, and Physics Envy
Demystifying Science
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
The Limits of Science (II)
“The Science Is Settled”
The Limits of Science, Illustrated by Scientists
Rationalism, Empiricism, and Scientific Knowledge
Ty Cobb and the State of Science
Is Science Self-Correcting?
Mathematical Economics
Words Fail Us
“Science” vs. Science: The Case of Evolution, Race, and Intelligence
Modeling Revisited
The Fragility of Knowledge
Global-Warming Hype
Pattern-Seeking
Hurricane Hysteria
Deduction, Induction, and Knowledge
A (Long) Footnote about Science
The Balderdash Chronicles
Analytical and Scientific Arrogance
The Pretence of Knowledge
Wildfires and “Climate Change”
Why I Don’t Believe in “Climate Change”
Modeling Is Not Science: Another Demonstration
Ad-Hoc Hypothesizing and Data Mining
Analysis vs. Reality

Understanding the “Resistance”: The Enemies Within

There have been, since the 1960s, significant changes in the culture of America. Those changes have been led by a complex consisting of the management of big corporations (especially but not exclusively Big Tech), the crypto-authoritarians of academia, the “news” and “entertainment” media, and affluent adherents of  “hip” urban culture on the two Left Coasts. The changes include but are far from limited to the breakdown of long-standing, civilizing, and uniting norms. These are notably (but far from exclusively) traditional marriage and family formation, religious observance, self-reliance, gender identity, respect for the law (including immigration law), pride in America’s history, and adherence to the rules of politics even when you are on the losing end of an election.

Most of the changes haven’t occurred through cultural diffusion, trial and error, and general acceptance of what seems to be a change for the better. No, most of the changes have been foisted on the public at large through legislative, executive, and judicial “activism” by the disciples of radical guru Saul Alinsky (e.g., Barack Obama), who got their start in the anti-war riots of the 1960s and 1970s. They and their successors then cloaked themselves in respectability (e.g., by obtaining Ivy League degrees) to infiltrate and subvert the established order.

How were those disciples bred? Through the public-education system, the universities, and the mass media. The upside-down norms of the new order became gospel to the disciples. Thus the Constitution is bad, free markets are bad, freedom of association (for thee) is bad, self-defense (for thee) is bad, defense of the country must not get in the way of “social justice”, socialism and socialized medicine (for thee) are good, a long list of “victims” of “society” must be elevated, compensated, and celebrated regardless of their criminality and lack of ability.

And the disciples of the new dispensation must do whatever it takes to achieve their aims. Even if it means tearing up long-accepted rules, from those inculcated through religion to those written in the Constitution. Even if it means aggression that goes beyond strident expression of views to the suppression of unwelcome views, "outing" those who don't subscribe to those views, and assaulting perceived enemies — physically and verbally.

All of this is the product of a no-longer-stealthy revolution fomented by a vast, left-wing conspiracy. One aspect of this movement has been the unrelenting attempt to subvert the 2016 election and reverse its outcome. Thus the fraud known as Spygate (a.k.a. Russiagate) and the renewed drive to impeach Trump, engineered with the help of a former Biden staffer.

Why such a hysterical and persistent reaction to the outcome of the 2016 election? (The morally corrupt, all-out effort to block the confirmation of Justice Kavanaugh was a loud echo of that reaction.) Because the election of 2016 had promised to be the election to end all elections — the election that might have all-but-assured the ascendancy of the left in America, with the Supreme Court as a strategic high ground.

But Trump — through his budget priorities, deregulatory efforts, and selection of constitutionalist judges — has made a good start on undoing Obama’s great leap forward in the left’s century-long march toward its vision of Utopia. The left cannot allow this to continue, for if Trump succeeds (and a second term might cement his success), its vile work could be undone.

There has been, in short, a secession — not of States (though some of them act that way), but of a broad and powerful alliance, many of whose members serve in government. They constitute a foreign presence in the midst of “real Americans“.

They are barbarians inside the gate, and must be thought of as enemies.

Expressing Certainty (or Uncertainty)

I have waged war on the misuse of probability for a long time. As I say in the post at the link:

A probability is a statement about a very large number of like events, each of which has an unpredictable (random) outcome. Probability, properly understood, says nothing about the outcome of an individual event. It certainly says nothing about what will happen next.

From a later post:

It is a logical fallacy to ascribe a probability to a single event. A probability represents the observed or computed average value of a very large number of like events. A single event cannot possess that average value. A single event has a finite number of discrete and mutually exclusive outcomes. Those outcomes will not “average out” — only one of them will obtain, like Schrödinger’s cat.

To say that the outcomes will average out — which is what a probability implies — is tantamount to saying that Jack Sprat and his wife were neither skinny nor fat because their body-mass indices averaged to a normal value. It is tantamount to saying that one can’t drown by walking across a pond with an average depth of 1 foot, when that average conceals the existence of a 100-foot-deep hole.

But what about hedge words that imply “probability” without saying it: certain, uncertain, likely, unlikely, confident, not confident, sure, unsure, and the like? I admit to using such words, which are common in discussions about possible future events and the causes of past events. But what do I, and presumably others, mean by them?

Hedge words are statements about the validity of hypotheses about phenomena or causal relationships. There are two ways of looking at such hypotheses, frequentist and Bayesian:

While for the frequentist, a hypothesis is a proposition (which must be either true or false) so that the frequentist probability of a hypothesis is either 0 or 1, in Bayesian statistics, the probability that can be assigned to a hypothesis can also be in a range from 0 to 1 if the truth value is uncertain.

Further, as discussed above, there is no such thing as the probability of a single event. For example, the Mafia either did or didn’t have JFK killed, and that’s all there is to say about that. One might claim to be “certain” that the Mafia had JFK killed, but one can be certain only if one is in possession of incontrovertible evidence to that effect. But that certainty isn’t a probability, which can refer only to the frequency with which many events of the same kind have occurred and can be expected to occur.

A Bayesian view about the "probability" of the Mafia having JFK killed is nonsensical. Even if a Bayesian is certain, based on incontrovertible evidence, that the Mafia had JFK killed, there is no probability attached to the occurrence. It simply happened, and that's that.

Lacking such evidence, a Bayesian (or an unwitting "man on the street") might say "I believe there's a 50-50 chance that the Mafia had JFK killed". Does that mean (1) there's some evidence to support the hypothesis, but it isn't conclusive, or (2) that the speaker would bet X amount of money, at even odds, that if incontrovertible evidence ever surfaces it will prove that the Mafia had JFK killed? In the first case, attaching a 50-percent probability to the hypothesis is nonsensical; how does the existence of some evidence translate into a statement about the probability of a one-off event that either occurred or didn't occur? In the second case, the speaker's willingness to bet on the occurrence of an event at certain odds tells us something about the speaker's preference for risk-taking but nothing at all about whether or not the event occurred.

What about the familiar use of “probability” (a.k.a., “chance”) in weather forecasts? Here’s my take:

[W]hen you read or hear a statement like “the probability of rain tomorrow is 80 percent”, you should mentally translate it into language like this:

X guesses that Y will (or will not) happen at time Z, and the “probability” that he attaches to his guess indicates his degree of confidence in it.

The guess may be well-informed by systematic observation of relevant events, but it remains a guess, as most Americans have learned and relearned over the years when rain has failed to materialize or has spoiled an outdoor event that was supposed to be rain-free.

Further, it is true that some things happen more often than other things, but only one thing will happen at a given time and place.

[A] clever analyst could concoct a probability of a person’s being shot by writing an equation that includes such variables as his size, the speed with which he walks, the number of shooters, their rate of fire, and the distance across the shooting range.

What would the probability estimate mean? It would mean that if a very large number of persons walked across the shooting range under identical conditions, approximately S percent of them would be shot. But the clever analyst cannot specify which of the walkers would be among the S percent.

Here’s another way to look at it. One person wearing head-to-toe bullet-proof armor could walk across the range a large number of times and expect to be hit by a bullet on S percent of his crossings. But the hardy soul wouldn’t know on which of the crossings he would be hit.

Suppose the hardy soul became a foolhardy one and made a bet that he could cross the range without being hit. Further, suppose that S is estimated to be 0.75; that is, 75 percent of a string of walkers would be hit, or a single (bullet-proof) walker would be hit on 75 percent of his crossings. Knowing the value of S, the foolhardy fellow puts up $1 million, which he (or his estate) forfeits if he is shot, and claims $4 million (his stake plus $3 million in winnings) if he crosses the range unscathed, one time. That’s a fair bet, isn’t it?

No it isn’t….

The bet should be understood for what it is, an either-or proposition. The foolhardy walker will either forfeit his $1 million stake or claim $4 million. The bettor (or bettors) who take the other side of the bet will either pocket his $1 million or pay out the $4 million (a net loss to them of $3 million).

As anyone with elementary reading and reasoning skills should be able to tell, those possible outcomes are not the same as the outcome that would obtain (approximately) if the foolhardy fellow could walk across the shooting range 1,000 times. If he could, he would come very close to breaking even, as would those who bet against him.
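For readers who want to see the arithmetic behind the “breaking even” claim, here is a minimal sketch in Python. It is not part of the original post; it simply plugs in the figures from the example above (a 0.75 hit rate, a $1 million stake, and a $4 million claim) and shows that a single crossing produces one of exactly two outcomes, while a long run of crossings approaches the 75-percent frequency and an average net of roughly zero.

    import random

    # Figures taken from the example above: S = 0.75, a $1 million stake,
    # and a $4 million claim (the stake plus $3 million in winnings).
    HIT_RATE = 0.75
    STAKE = 1_000_000
    CLAIM = 4_000_000

    def cross_once(rng):
        """Net result of one crossing: an either-or outcome, never an average."""
        if rng.random() < HIT_RATE:
            return -STAKE        # shot: the stake is forfeited
        return CLAIM - STAKE     # unscathed: the walker nets $3 million

    rng = random.Random(0)

    # One crossing yields -$1,000,000 or +$3,000,000; nothing in between.
    print("one crossing:", cross_once(rng))

    # A long run of crossings approaches the 75-percent hit frequency,
    # and the average net per crossing approaches zero (the break-even case).
    results = [cross_once(rng) for _ in range(100_000)]
    hits = sum(r < 0 for r in results)
    print("hit frequency:", hits / len(results))
    print("average net per crossing:", sum(results) / len(results))

The long-run average tells us nothing about which of the two outcomes a lone, one-time crossing will produce, which is the point of the passage above.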

I omitted from the preceding quotation a sentence in which I used “more likely”:

If a person walks across a shooting range where live ammunition is being used, he is more likely to be killed than if he walks across the same patch of ground when no one is shooting.

Inasmuch as “more likely” is a hedge word, I seem to have contradicted my own position about the probability of a single event, such as being shot while walking across a shooting range. In that context, however, “more likely” means that something could happen (getting shot) that wouldn’t happen in a different situation. That’s not really a probabilistic statement. It’s a statement about opportunity; thus:

  • Crossing a firing range generates many opportunities to be shot.
  • Going into a crime-ridden neighborhood certainly generates some opportunities to be shot, but their number and frequency depend on many variables: which neighborhood, where in the neighborhood, the time of day, who else is present, etc.
  • Sitting by oneself, unarmed, in a heavy-gauge steel enclosure generates no opportunities to be shot.

The “chance” of being shot is, in turn, “more likely”, “likely”, and “unlikely” — or a similar ordinal pattern that uses “certain”, “confident”, “sure”, etc. But the ordinal pattern, in any case, can never (logically) include statements like “completely certain”, “completely confident”, etc.

An ordinal pattern is logically valid only if it conveys the relative number of opportunities to attain a given kind of outcome — being shot, in the example under discussion.

Ordinal statements about different types of outcome are meaningless. Consider, for example, the claim that the probability that the Mafia had JFK killed is higher than (or lower than or the same as) the probability that the moon is made of green cheese. First, and to repeat myself for the nth time, the phenomena in question are one-of-a-kind and do not lend themselves to statements about their probability, nor even about the frequency of opportunities for the occurrence of the phenomena. Second, the use of “probability” is just a hifalutin way of saying that the Mafia could have had a hand in the killing of JFK, whereas it is known (based on ample scientific evidence, including eye-witness accounts) that the Moon isn’t made of green cheese. So the ordinal statement is just a cheap rhetorical trick that is meant to (somehow) support the subjective belief that the Mafia “must” have had a hand in the killing of JFK.

Similarly, it is meaningless to say that the “average person” is “more certain” of being killed in an auto accident than in a plane crash, even though one may have many opportunities to die in an auto accident or a plane crash. There is no “average person”; the incidence of auto travel and plane travel varies enormously from person to person; and the conditions that conduce to fatalities in auto travel and plane travel vary just as enormously.

Other examples abound. Be on the lookout for them, and avoid emulating them.

Socialism, Communism, and Three Paradoxes

According to Wikipedia, socialism

is a range of economic and social systems characterised by social ownership of the means of production and workers’ self-management, as well as the political theories and movements associated with them. Social ownership can be public, collective[,] or cooperative ownership, or citizen ownership of equity.

Communism

is the philosophical, social, political, and economic ideology and movement whose ultimate goal is the establishment of the communist society, which is a socioeconomic order structured upon the common ownership of the means of production and the absence of social classes, money, and the state.

The only substantive difference between socialism and communism, in theory, is that communism somehow manages to do away with the state. This, of course, never happens, except in real communes, most of which were and are tiny, short-lived arrangements. (In what follows, I therefore put communism in “sneer quotes”.)

The common thread of socialism and “communism” is collective ownership of “equity”, that is, the means of production. But that kind of ownership eliminates an important incentive to invest in the development and acquisition of capital improvements that yield more and better output and therefore raise the general standard of living. The incentive, of course, is the opportunity to reap a substantial reward for taking a substantial risk. Absent that incentive, as has been amply demonstrated by the tragic history of socialist and “communist” regimes, the general standard of living is low and economic growth is practically (if not actually) stagnant.*

So here’s the first paradox: Systems that, by magical thinking, are supposed to make people better off do just the opposite: They make people worse off than they would otherwise be.

All of this because of class envy. Misplaced class envy, at that. “Capitalism” (a smear word) is really the voluntary and relatively unfettered exchange of products and services, including labor. Its ascendancy in the West is just a happy accident of the movement toward the kind of liberalism exemplified in the Declaration of Independence and Constitution. People were freed from traditional economic roles and allowed to put their talents to more productive uses, which included investing their time and money in capital that yielded more and better products and services.

Most “capitalists” in America were and still are workers who made risky investments to start and build businesses. Those businesses employ other workers and offer things of value that consumers can take or leave, as they wish (unlike the typical socialist or “communist” system).

So here’s the second paradox: Socialism and “communism” actually suppress the very workers whom they are meant to benefit, in theory and rhetoric.

The third paradox is that socialist and “communist” regimes like to portray themselves as “democratic”, even though they are quite the opposite: ruled by party bosses who bestow favors on their protégés. Free markets are in fact truly democratic, in that their outcomes are determined directly by the participants in those markets.
__________
* If you believe that socialist and “communist” regimes can efficiently direct capital formation and make an economy more productive, see “Socialist Calculation and the Turing Test”, “Monopoly: Private Is Better Than Public”, and “The Rahn Curve in Action”, which quantifies the stultifying effects of government spending and regulation.

As for China, imagine what an economic powerhouse it would be if, long ago, its emperors (including its “communist” ones, like Mao) had allowed its intelligent populace to become capitalists. China’s recent emergence as an economic dynamo is built on the sand of state ownership and direction. China, in fact, ranks low in per-capita GDP among industrialized nations. Its progress is a testament to forced industrialization, and was bound to be better than what had come before. But it is worse than what could have been had China not suffered under autocratic rule for millennia.

In Defense of the Oxford Comma

The Oxford comma, also known as the serial comma, is the comma that precedes the last item in a list of three or more items (e.g., the red, white, and blue). Newspapers (among other sinners) eschew the serial comma for reasons too arcane to pursue here. Thoughtful counselors advise its use. (See, for example, Wilson Follett’s Modern American Usage at pp. 422-423.) Why? Because the serial comma, like the hyphen in a compound adjective, averts ambiguity. It isn’t always necessary, but if it is used consistently, ambiguity can be avoided.

Here’s a great example, from the Wikipedia article linked to at the start of the preceding paragraph: “To my parents, Ayn Rand and God”. The writer means, of course, “To my parents, Ayn Rand, and God”.

Kylee Zempel has much more to say in her essay, “Using the Oxford Comma Is a Sign of Grace and Clarity”. It is, indeed.

(For much more about writing, see my page “Writing: A Guide”.)

Regarding Napoleon Chagnon

Napoleon Alphonseau Chagnon (1938-2019) was a noted anthropologist to whom the label “controversial” was applied. Some of the story is told in this surprisingly objective New York Times article about Chagnon’s life and death. Matthew Blackwell gives a more complete account in “The Dangerous Life of an Anthropologist” (Quillette, October 5, 2019).

Chagnon’s sin was his finding that “nature” trumped “nurture”, as demonstrated by his decades-long ethnographic field work among the Yanomamö, indigenous Amazonians who live in the border area between Venezuela and Brazil. As Blackwell tells it,

Chagnon found that up to 30 percent of all Yanomamö males died a violent death. Warfare and violence were common, and duelling was a ritual practice, in which two men would take turns flogging each other over the head with a club, until one of the combatants succumbed. Chagnon was adamant that the primary causes of violence among the Yanomamö were revenge killings and women. The latter may not seem surprising to anyone aware of the ubiquity of ruthless male sexual competition in the animal kingdom, but anthropologists generally believed that human violence found its genesis in more immediate matters, such as disputes over resources. When Chagnon asked the Yanomamö shaman Dedeheiwa to explain the cause of violence, he replied, “Don’t ask such stupid questions! Women! Women! Women! Women! Women!” Such fights erupted over sexual jealousy, sexual impropriety, rape, and attempts at seduction, kidnap and failure to deliver a promised girl….

Chagnon would make more than 20 fieldwork visits to the Amazon, and in 1968 he published Yanomamö: The Fierce People, which became an instant international bestseller. The book immediately ignited controversy within the field of anthropology. Although it commanded immense respect and became the most commonly taught book in introductory anthropology courses, the very subtitle of the book annoyed those anthropologists, who preferred to give their monographs titles like The Gentle Tasaday, The Gentle People, The Harmless People, The Peaceful People, Never in Anger, and The Semai: A Nonviolent People of Malaya. The stubborn tendency within the discipline was to paint an unrealistic façade over such cultures—although 61 percent of Waorani men met a violent death, an anthropologist nevertheless described this Amazonian people as a “tribe where harmony rules,” on account of an “ethos that emphasized peacefulness.”…

These anthropologists were made more squeamish still by Chagnon’s discovery that the unokai of the Yanomamö—men who had killed and assumed a ceremonial title—had about three times more children than others, owing to having twice as many wives. Drawing on this observation in his 1988 Science article “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” Chagnon suggested that men who had demonstrated success at a cultural phenomenon, the military prowess of revenge killings, were held in higher esteem and considered more attractive mates. In some quarters outside of anthropology, Chagnon’s theory came as no surprise, but its implication for anthropology could be profound. In The Better Angels of Our Nature, Steven Pinker points out that if violent men turn out to be more evolutionarily fit, “This arithmetic, if it persisted over many generations, would favour a genetic tendency to be willing and able to kill.”…

Chagnon considered his most formidable critic to be the eminent anthropologist Marvin Harris. Harris had been crowned the unofficial historian of the field following the publication of his all-encompassing work The Rise of Anthropological Theory. He was the founder of the highly influential materialist school of anthropology, and argued that ethnographers should first seek material explanations for human behavior before considering alternatives, as “human social life is a response to the practical problems of earthly existence.” Harris held that the structure and “superstructure” of a society are largely epiphenomena of its “infrastructure,” meaning that the economic and social organization, beliefs, values, ideology, and symbolism of a culture evolve as a result of changes in the material circumstances of a particular society, and that apparently quaint cultural practices tend to reflect man’s relationship to his environment. For instance, prohibition on beef consumption among Hindus in India is not primarily due to religious injunctions. These religious beliefs are themselves epiphenomena to the real reasons: that cows are more valuable for pulling plows and producing fertilizers and dung for burning. Cultural materialism places an emphasis on “-etic” over “-emic” explanations, ignoring the opinions of people within a society and trying to uncover the hidden reality behind those opinions.

Naturally, when the Yanomamö explained that warfare and fights were caused by women and blood feuds, Harris sought a material explanation that would draw upon immediate survival concerns. Chagnon’s data clearly confirmed that the larger a village, the more likely fighting, violence, and warfare were to occur. In his book Good to Eat: Riddles of Food and Culture Harris argued that fighting occurs more often in larger Yanomamö villages because these villages deplete the local game levels in the rainforest faster than smaller villages, leaving the men no option but to fight with each other or to attack outside groups for meat to fulfil their protein macronutrient needs. When Chagnon put Harris’s materialist theory to the Yanomamö they laughed and replied, “Even though we like meat, we like women a whole lot more.” Chagnon believed that smaller villages avoided violence because they were composed of tighter kin groups—those communities had just two or three extended families and had developed more stable systems of borrowing wives from each other.

There’s more:

Survival International … has long promoted the Rousseauian image of a traditional people who need to be preserved in all their natural wonder from the ravages of the modern world. Survival International does not welcome anthropological findings that complicate this harmonious picture, and Chagnon had wandered straight into their line of fire….

For years, Survival International’s Terence Turner had been assisting a self-described journalist, Patrick Tierney, as the latter investigated Chagnon for his book, Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon. In 2000, as Tierney’s book was being readied for publication, Turner and his colleague Leslie Sponsel wrote to the president of the American Anthropological Association (AAA) and informed her that an unprecedented crisis was about to engulf the field of anthropology. This, they warned, would be a scandal that, “in its scale, ramifications, and sheer criminality and corruption, is unparalleled in the history of Anthropology.” Tierney alleged that Chagnon and Neel had spread measles among the Yanomamö in 1968 by using compromised vaccines, and that Chagnon’s documentaries depicting Yanomamö violence were faked by using Yanomamö to act out dangerous scenes, in which further lives were lost. Chagnon was blamed, inter alia, for inciting violence among the Yanomamö, cooking his data, starting wars, and aiding corrupt politicians. Neel was also accused of withholding vaccines from certain populations of natives as part of an experiment. The media were not slow to pick up on Tierney’s allegations, and the Guardian ran an article under an inflammatory headline accusing Neel and Chagnon of eugenics: “Scientists ‘killed Amazon Indians to test race theory.’” Turner claimed that Neel believed in a gene for “leadership” and that the human genetic stock could be upgraded by wiping out mediocre people. “The political implication of this fascistic eugenics,” Turner told the Guardian, “is clearly that society should be reorganised into small breeding isolates in which genetically superior males could emerge into dominance, eliminating or subordinating the male losers.”

By the end of 2000, the American Anthropological Association announced a hearing on Tierney’s book. This was not entirely reassuring news to Chagnon, given their history with anthropologists who failed to toe the party line….

… Although the [AAA] taskforce [appointed to investigate Tierney’s accusations] was not an “investigation” concerned with any particular person, for all intents and purposes, it blamed Chagnon for portraying the Yanomamö in a way that was harmful and held him responsible for prioritizing his research over their interests.

Nonetheless, the most serious claims Tierney made in Darkness in El Dorado collapsed like a house of cards. Elected Yanomamö leaders issued a statement in 2000 stating that Chagnon had arrived after the measles epidemic and saved lives, “Dr. Chagnon—known to us as Shaki—came into our communities with some physicians and he vaccinated us against the epidemic disease which was killing us. Thanks to this, hundreds of us survived and we are very thankful to Dr. Chagnon and his collaborators for help.” Investigations by the American Society of Human Genetics and the International Genetic Epidemiology Society both found Tierney’s claims regarding the measles outbreak to be unfounded. The Society of Visual Anthropology reviewed the so-called faked documentaries, and determined that these allegations were also false. Then an independent preliminary report released by a team of anthropologists dissected Tierney’s book claim by claim, concluding that all of Tierney’s most important assertions were either deliberately fraudulent or, at the very least, misleading. The University of Michigan reached the same conclusion. “We are satisfied,” its Provost stated, “that Dr. Neel and Dr. Chagnon, both among the most distinguished scientists in their respective fields, acted with integrity in conducting their research… The serious factual errors we have found call into question the accuracy of the entire book [Darkness in El Dorado] as well as the interpretations of its author.” Academic journal articles began to proliferate, detailing the mis-inquiry and flawed conclusions of the 2002 taskforce. By 2005, only three years later, the American Anthropological Association voted to withdraw the 2002 taskforce report, re-exonerating Chagnon.

A 2000 statement by the leaders of the Yanomamö and their Ye’kwana neighbours called for Tierney’s head: “We demand that our national government investigate the false statements of Tierney, which taint the humanitarian mission carried out by Shaki [Chagnon] with much tenderness and respect for our communities.” The investigation never occurred, but Tierney’s public image lay in ruins and would suffer even more at the hands of historian of science Alice Dreger, who interviewed dozens of people involved in the controversy. Although Tierney had thanked a Venezuelan anthropologist for providing him with a dossier of information on Chagnon for his book, the anthropologist told Dreger that Tierney had actually written the dossier himself and then misrepresented it as an independent source of information.

A “dossier” and its use to smear an ideological opponent. Where else have we seen that?

Returning to Blackwell:

Scientific American has described the controversy as “Anthropology’s Darkest Hour,” and it raises troubling questions about the entire field. In 2013, Chagnon published his final book, Noble Savages: My Life Among Two Dangerous Tribes—The Yanomamö and the Anthropologists. Chagnon had long felt that anthropology was experiencing a schism more significant than any difference between research paradigms or schools of ethnography—a schism between those dedicated to the very science of mankind, anthropologists in the true sense of the word, and those opposed to science; either postmodernists vaguely defined, or activists disguised as scientists who seek to place indigenous advocacy above the pursuit of objective truth. Chagnon identified Nancy Scheper-Hughes as a leader in the activist faction of anthropologists, citing her statement that we “need not entail a philosophical commitment to Enlightenment notions of reason and truth.”

Whatever the rights and wrongs of his debates with Marvin Harris across three decades, Harris’s materialist paradigm was a scientifically debatable hypothesis, which caused Chagnon to realize that he and his old rival shared more in common than they did with the activist forces emerging in the field: “Ironically, Harris and I both argued for a scientific view of human behavior at a time when increasing numbers of anthropologists were becoming skeptical of the scientific approach.”…

Both Chagnon and Harris agreed that anthropology’s move away from being a scientific enterprise was dangerous. And both believed that anthropologists, not to mention thinkers in other fields of social sciences, were disguising their increasingly anti-scientific activism as research by using obscurantist postmodern gibberish. Observers have remarked at how abstruse humanities research has become and even a world famous linguist like Noam Chomsky admits, “It seems to me to be some exercise by intellectuals who talk to each other in very obscure ways, and I can’t follow it, and I don’t think anybody else can.” Chagnon resigned his membership of the American Anthropological Association in the 1980s, stating that he no longer understood the “unintelligible mumbo jumbo of postmodern jargon” taught in the field. In his last book, Theories of Culture in Postmodern Times, Harris virtually agreed with Chagnon. “Postmodernists,” he wrote, “have achieved the ability to write about their thoughts in a uniquely impenetrable manner. Their neo-baroque prose style with its inner clauses, bracketed syllables, metaphors and metonyms, verbal pirouettes, curlicues and figures is not a mere epiphenomenon; rather, it is a mocking rejoinder to anyone who would try to write simple intelligible sentences in the modernist tradition.”…

The quest for knowledge of mankind has in many respects become unrecognizable in the field that now calls itself anthropology. According to Chagnon, we’ve entered a period of “darkness in cultural anthropology.” With his passing, anthropology has become darker still.

I recount all of this for three reasons. First, Chagnon’s findings testify to the immutable urge to violence that lurks within human beings, and to the dominance of “nature” over “nurture”. That dominance is evident not only in the urge to violence (pace Steven Pinker), but in the strong heritability of such traits as intelligence.

The second reason for recounting Chagnon’s saga is to underline the corruption of science in the service of left-wing causes. The underlying problem is always the same: When science — testable and tested hypotheses based on unbiased observations — challenges left-wing orthodoxy, left-wingers — many of them so-called scientists — go all out to discredit real scientists. And they do so by claiming, in good Orwellian fashion, to be “scientific”. (I have written many posts about this phenomenon.) Leftists are, in fact, delusional devotees of magical thinking.

The third reason for my interest in the story of Napoleon Chagnon is a familial connection of sorts. He was born in a village where his grandfather, also Napoleon Chagnon, was a doctor. My mother was one of ten children, most of them born and all of them raised in the same village. When the tenth child was born, he was given Napoleon as his middle name, in honor of Doc Chagnon.

Another Anniversary

I will be offline for a few days, so I’m reposting this item from a year ago.

Today is the 22nd anniversary of my retirement from full-time employment at a defense think-tank. (I later, and briefly, ventured into part-time employment for the intellectual fulfillment it offered. But it became too much like work, and so I retired in earnest.) If your idea of a think-tank is an outfit filled with hacks who spew glib, politically motivated “policy analysis”, you have the wrong idea about the think-tank where I worked. For most of its history, it was devoted to rigorous, quantitative analysis of military tactics, operations, and systems. Most of its analysts held advanced degrees in STEM fields and economics — about two-thirds of them held Ph.D.s.

I had accumulated 30 years of employment at the think-tank when I retired. (That was in addition to four years as a Pentagon “whiz kid” and owner-operator of a small business.) I spent my first 17 years at the think-tank in analytical pursuits, which included managing other analysts and reviewing their work. I spent the final 13 years on the think-tank’s business side, and served for 11 of those 13 years as chief financial and administrative officer.

I take special delight in observing the anniversary of my retirement because it capped a subtle campaign to arrange the end of my employment on favorable financial terms. The success of the campaign brought a profitable end to a bad relationship with a bad boss.

I liken the campaign to fly-fishing: I reeled in a big fish by accurately casting an irresistible lure and then playing the fish into my net. I have long wondered whether my boss ever grasped what I had done and how I had done it. The key was patience; more than a year passed between my casting of the lure and the netting of the fish (early retirement with a financial sweetener). Without going into the details of my “fishing expedition,” I can translate them into the elements of success in any major undertaking:

  • strategy — a broad and feasible outline of a campaign to attain a major objective
  • intelligence — knowledge of the opposition’s objectives, resources, and tactical repertoire, supplemented by timely reporting of his actual moves (especially unanticipated ones)
  • resources — the physical and intellectual wherewithal to accomplish the strategic objective while coping with unforeseen moves by the opposition and strokes of bad luck
  • tactical flexibility — a willingness and ability to adjust the outline of the campaign, to fill in the outline with maneuvers that take advantage of the opposition’s errors, and to compensate for one’s own mistakes and bad luck
  • and — as mentioned — a large measure of patience, especially when one is tempted either to quit or escalate blindly.

My patience was in the service of my felt need to quit the think-tank as it had become under the direction of my boss, the CEO. He had politicized an organization whose effectiveness depended upon its long-standing (and mostly deserved) reputation for independence and objectivity. That reputation rested largely on the organization’s emphasis on empirical research, as opposed to the speculative “policy analysis” that he favored. Further, he — as an avowed Democrat — was also in thrall to political correctness (e.g., a foolish and futile insistence on trying to give blacks a “fair share” of representation on the research staff, despite the paucity of black candidates with the requisite qualifications). There are other matters that are best left unmentioned, despite the lapse of 21 years.

Because of a special project that I was leading, I could have stayed at the think-tank for at least another three years, had I the stomach for it. And in those three years my retirement fund and savings would have grown to make my retirement more comfortable. But the stress of working for a boss whom I disrespected was too great, so I took the money and ran. And despite occasional regrets, which are now well in the past, I am glad of it.

All of this is by way of prelude to some lessons that I gleaned from my years of work — lessons that may be of interest and value to readers.

If you are highly conscientious (as I am), your superiors will hold a higher opinion of your work than you do. You must constantly remind yourself that you are probably doing better than you think you are. In other words, you should be confident of your ability, because if you feel confident (not self-deluded or big-headed, just confident), you will be less fearful of making mistakes and more willing to venture into new territory. Your value to the company will be enhanced by your self-confidence and by your (justified) willingness to take on new challenges.

When you have established yourself as a valued contributor, you will be better able to stand up to a boss who is foolish, overbearing, or incompetent (singly or in combination). Rehearse your grievances carefully, confront the boss, and then go over his head if he shrugs off your complaints or retaliates against you. But go over his head only if you are confident of (a) your value to the company, (b) the validity of your complaints, and (c) the fair-mindedness of your boss’s boss. (I did this three times in my career. I succeeded in getting rid of a boss the first two times. I didn’t expect to succeed the third time, but it was worth a try because it positioned me for my cushioned exit.)

Patience, which I discussed earlier, is a key to successfully ridding yourself of a bad boss. Don’t push the boss’s boss. He has to admit (to himself) the mistake that he made in appointing your boss. And he has to find a graceful way to retract the mistake.

Patience is also a key to advancement. Never openly campaign for someone else’s job. I got my highest-ranking job simply by positioning myself for it. The big bosses took it from there and promoted me.

On the other hand, if you can invent a job at which you know you’ll succeed — and if that job is clearly of value to the company — go for it. I did it once, and my performance in the job that I invented led to my highest-ranking position.

Through all of that, be prepared to go it alone. Work “friendships” are usually transitory. Your colleagues are (rightly) concerned with their own preservation and advancement. Do not count on them when it comes to fighting battles — like getting rid of a bad boss. More generally, do not count on them. (See the first post listed below.)

Finally, having been a manager for more than half of my 30 years at the think-tank, I learned some things that are spelled out in the third post listed below. Read it if you are a manager, aspiring to be a manager, or simply intrigued by the “mystique” of management.


Related posts:

The Best Revenge
Analysis for Government Decision-Making: Hemi-Science, Hemi-Demi-Science, and Sophistry
How to Manage
Not-So-Random Thoughts (V) (first entry)

Impeachment Tit for Tat

The immediate impetus for the drive to impeach Trump, which began on election night 2016, is the fact that Democrats fully expected Hillary Clinton to win. The underlying impetus is that Democrats have long since abandoned more than token allegiance to the Constitution, which prescribes the rules by which Trump was elected. The Democrat candidate should have won, because that’s the way Democrats think. And the Democrat candidate would have won were the total popular vote decisive instead of the irrelevant Constitution. So Trump is an illegitimate president — in their view.

There is a contributing factor: The impeachment of Bill Clinton. It was obvious to me at the time that the drive to impeach Clinton was fueled by the widely held view, among Republicans, that he was an illegitimate president, even though he was duly elected according to the same rules that put Trump in office. A good case can be made that G.H.W. Bush would have won re-election in 1992 but for the third-party candidacy of Ross Perot. Once installed as president, Clinton won re-election in 1996 on the strength of incumbency (and relatively moderate policies).

The desperation of the impeachment effort against Clinton is evident in the scope of the articles of impeachment, which are about the lying and obstruction of justice that flowed from his personal conduct, and not about his conduct as chief executive of the United States government.

I admit that, despite the shallowness of the charges against Clinton, I was all for his impeachment and conviction. Moderate as his policies were, in hindsight, he was nevertheless a mealy-mouthed statist who, among other things, tried to foist Hillarycare on the nation.

At any rate, the effort to remove Clinton undoubtedly still rankles Democrats, and must be a factor in their fervent determination to remove Trump. This is telling:

“This partisan coup d’etat will go down in infamy in the history of our nation,” the congressman said.

He was outraged and wanted the nation to know why.

“Mr. Speaker,” he said, “this is clearly a partisan railroad job.”

“We are losing sight of the distinction between sins, which ought to be between a person and his family and his God, and crimes which are the concern of the state and of society as a whole,” he said.

“Are we going to have a new test if someone wants to run for office: Are you now or have you ever been an adulterer?” he said.

The date was Dec. 19, 1998. The House was considering articles of impeachment that the Judiciary Committee had approved against then-President Bill Clinton. The outraged individual, speaking on the House floor, was Democratic Rep. Jerry Nadler of New York.

Nadler now serves as chairman of the Judiciary Committee.

What goes around comes around.

I Hate to Hear Millennials Speak

My wife and I have a favorite Thai restaurant in Austin. It’s not the best Thai restaurant in our experience. We’ve dined at much better ones in Washington, D.C., and Yorktown, Virginia. The best one, in our book, is in Arlington, Virginia.

At any rate, our favorite Thai restaurant in Austin is very good and accordingly popular. And because Thai food is relatively inexpensive, it draws a lot of twenty- and thirty-somethings.

Thus the air was filled (as usual) with “like”, “like”, “like”, “like”, and more “like”, ad nauseam. It makes me want to stand up and shout “Shut up, I can’t take it any more”.

The fellow at the next table not only used “like” in every sentence, but had a raspy, penetrating vocal fry, which is another irritating speech pattern of millennials. He was seated so that he was facing in my direction. As a result, I had to turn down my hearing aids to soften the creak that ended his every sentence.

His date (a female, which is noteworthy in Austin) merely giggled at everything he said. It must have been a getting-to-know-you date. The relationship is doomed if she’s at all fussy about “like”. Though it may be that he doesn’t like giggly gals.

Harumph!

That’s today’s gripe. For more gripes, see these posts:

Stuff White (Liberal Yuppie) People Like
Driving and Politics
I’ve Got a Little List
Driving and Politics (2)
Amazon and Austin
Driving Is an IQ Test
The Renaming Mania Hits a New Low
Let the Punishment Deter the Crime

Oh, The Irony

Who damaged America greatly with his economic, social, and defense policies and with his anti-business, race-baiting rhetoric? Obama, that’s who.

Who has undone much of Obama’s damage, but might be removed from office on a phony abuse-of-power charge — because Democrats (and some Republicans) can’t accept the outcome of the 2016 election? Trump, that’s who.

Do I smell the makings of a great upheaval if Democrats are successful? I think so.

“Will the Circle Be Unbroken?”

That’s the title of the sixth episode of Country Music, produced by Ken Burns et al. The episode ends with a segment about the production of Will the Circle Be Unbroken?, a three-LP album released in 1972, with Mother Maybelle Carter of the original Carter Family taking the lead. I have the album in my record collection. It sits proudly next to a two-LP album of recordings by Jimmie Rodgers, Jimmie Rodgers on Record: America’s Blue Yodeler.

The juxtaposition of the albums is fitting because, as Country Music‘s first episode makes clear, it was the 1927 recordings of Rodgers and the Carters that “made” country music. Country music had been recorded and broadcast live since 1922. But Rodgers and the Carters brought something new to the genre and it caught the fancy of a large segment of the populace.

In Rodgers’s case it was his original songs (mostly of heartbreak and rambling) and his unique delivery, which introduced yodeling to country music. In the Carters’ case it was the tight harmonies of Maybelle Addington Carter and her cousin and sister-in-law, Sara Dougherty Carter, applied to nostalgic ballads old and new (but old-sounding, even if new) compiled and composed mostly by Sara’s then-husband, A.P. Carter, who occasionally chimed in on the bass line. (“School House on the Hill” is a particular favorite of mine. The other songs at the link to “School House …” are great, too.)

Rodgers and the original Carters kept it simple. Rodgers accompanied himself on the guitar; Maybelle and Sara Carter accompanied themselves on guitar and autoharp. And that was it. No electrification or amplification, no backup players or singers, no aural tricks of any kind. What you hear is unadorned, and all the better for it. Only the Bluegrass sound introduced by Bill Monroe could equal it for a true “country” sound. Its fast pace and use of acoustic, stringed instruments harked back to the reels and jigs brought to this land (mainly from the British Isles) by the first “country” people — the settlers of Appalachia and the South.

As for the miniseries, I give it a B, or 7 out of 10. As at least one commentator has said, it’s a good crash course for those who are new to country music, but only a glib refresher course for those who know it well. At 16 hours in length, it is heavily padded with mostly (but not always) vapid commentary by interviewees who were and are, in some way, associated with country music; Burns’s typical and tedious social commentary about the treatment of blacks and women, as if no one knows about those things; and biographical information that really adds nothing to the music.

The biographical information suggests that to be a country singer you must be an orphan from a hardscrabble-poor, abusive home who survived the Great Depression or run-ins with the law. Well, you might think that until you reflect on the fact that little is said about the childhoods of the many country singers who weren’t of that ilk, especially the later ones whose lives were untouched by the Great Depression or World War II.

Based on what I’ve seen of the series thus far (six of eight episodes), what it takes to be a country singer — with the notable exception of the great Hank Snow (a native of Nova Scotia) — is (a) to have an accent that hails from the South, and (b) to sing in a way that emphasizes the accent. A nasal twang seems to be a sine qua non, even though many of the singers who are interviewees don’t speak like they sing. It’s mostly put on, in other words, and increasingly so as regional accents fade away.

The early greats, like Rodgers and the Carters, were authentic, but the genre is becoming increasingly phony. And the Nashville sound and its later variants are abominations.

So, the circle has been broken. And the only way to mend it is to listen to the sounds of yesteryear.

Up from Darkness

I’m in the midst of writing a post about conservatism that will put a capstone on the many posts that I’ve written on the subject. It occurred to me that it might be helpful to understand why some conservatives (or people who thought of themselves as conservatives) abandoned the faith. I found “Do conservatives ever become liberal?” at Quora. None of the more than 100 replies is a good argument for switching, in my view. Most of them are whining, posturing, and erroneous characterizations of conservatism.

But the first reply struck home because it describes how a so-called conservative became a “liberal” in a matter of minutes. What that means, of course, is that the convert’s conservatism was superficial. (More about that in the promised post.) But the tale struck home because it reminded me of my own conversion, in the opposite direction, which began with a kind of “eureka” moment.

Here’s the story, from my “About” page:

I was apolitical until I went to college. There, under the tutelage of economists of the Keynesian persuasion, I became convinced that government could and should intervene in economic affairs. My pro-interventionism spread to social affairs in my early post-college years, as I joined the “intellectuals” of the time in their support for the Civil Rights Act and the Great Society, which was about social engineering as much as anything.

The urban riots that followed the murder of Martin Luther King Jr. opened my eyes to the futility of LBJ’s social tinkering. I saw at once that plowing vast sums into a “war” on black poverty would be rewarded with a lack of progress, sullen resentment, and generations of dependency on big brother in Washington.

There’s a lot more after that about my long journey home to conservatism, if you’re interested.