Information Security in the Cyber-Age

No system is perfect, but I am doing the best that I can:

1. PCs and mobile devices protected by anti-virus and anti-malware programs.

2. Password-protected home network, wrapped in a virtual private network, which is also used by the mobile devices when they aren’t linked to the home network.

3. Different usernames and passwords for every one of dozens of sites that require (and need) them.

4. Passwords created by a complex spreadsheet routine that generates and selects from random series of upper-case letters, lower-case letters, digits, and special characters.

5. Passwords stored in a password-protected file, with paper backup in a secure container.

6. Master password required for access to passwords stored in browser.

Measures 4, 5, and 6 adopted in lieu of reliance on vulnerable vendors for password generation and storage.
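The gist of measure 4 can be sketched in code. Here is a minimal Python illustration (not the author's actual spreadsheet routine) that draws from the same four character classes; the particular special-character set is an assumption, so substitute whatever your sites accept:

```python
import secrets
import string

# The special-character set is an arbitrary assumption for illustration;
# use whatever characters your sites accept.
SPECIALS = "!@#$%^&*-_=+"
CLASSES = [string.ascii_uppercase, string.ascii_lowercase,
           string.digits, SPECIALS]

def generate_password(length=16):
    """Build a random password containing at least one character from
    each of the four classes, using a cryptographically strong RNG."""
    # One guaranteed pick from each class...
    chars = [secrets.choice(c) for c in CLASSES]
    # ...then fill the remainder from the combined pool.
    pool = "".join(CLASSES)
    chars += [secrets.choice(pool) for _ in range(length - len(chars))]
    # Shuffle so the guaranteed picks don't sit at fixed positions.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)
```

One advantage of the `secrets` module over a spreadsheet's RAND() function is that it draws from the operating system's cryptographic randomness source rather than a predictable pseudo-random generator.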

Suggestions for improvement are always welcome.

Social Security Is an Entitlement

Entitlement has come to mean the right to guaranteed benefits under a government program. In the nature of government programs, those who receive the benefits usually don’t pay the taxes required to fund those benefits.

I recently saw on Facebook (which I look at occasionally) a discussion to the effect that Social Security isn’t an entitlement program because “we (the discussants) paid into it”.

Well, paying into Social Security doesn’t mean that you paid your own way. First, the system is rigged so the persons in lower income brackets receive benefits that are disproportionately high relative to the payments that they (and their employers) made during their working years.

Second, the money that a person pays into Social Security doesn’t earn anything. You are not buying a financial instrument that funds productive investments, which in turn reward you with a future stream of income.

True, there’s the mythical Social Security Trust Fund, which has been paying out benefits that have been defrayed in part by interest earned on “investments” in U.S. Treasury securities. Where does that interest come from? Not from the beneficiaries of Social Security. It comes from taxpayers who are, at the same time, also making payments into Social Security in exchange for the “promise” of future Social Security benefits. (I say “promise” because there is no binding contract for Social Security benefits; you get what Congress provides by law.)

So, yes, Social Security is an entitlement program. Paying into it doesn’t mean that the payer earns what he eventually receives from it. Quite the contrary. Most participants are feeding from the public trough.

The Unique “Me”

Children, at some age, will begin to understand that there is death, the end of a human life (in material form, at least). At about the same time, in my experience, they will begin to speculate about the possibility that they might have been someone else: a child born in China, for instance.

Death eventually loses its fascination, though it may come to mind from time to time as one grows old. (Will I wake up in the morning? Is this the day that my heart stops beating? Will I be able to break my fall when the heart attack happens, or will I just go down hard and die of a fractured skull?)

But after careful reflection, at some age, the question of having been born as someone else is answered in the negative.

For each person, there is only one “I”, the unique “me”. If I hadn’t been born, I wouldn’t be “I” — there wouldn’t be a “me”. I couldn’t have been born as someone else: a child born in China, for instance. A child born in China — or at any place and time other than where and when my mother gave birth to me — must be a different “I”, not the one I think of as “me”.

(Inspired by Sir Roger Scruton’s On Human Nature, for which I thank my son.)

An Antidote to Alienation

This much of Marx’s theory of alienation bears a resemblance to the truth:

The design of the product and how it is produced are determined, not by the producers who make it (the workers)….

[T]he generation of products (goods and services) is accomplished with an endless sequence of discrete, repetitive, motions that offer the worker little psychological satisfaction for “a job well done.”

These statements are true not only of assembly-line manufacturing. They’re also true of much “white collar” work — certainly routine office work and even a lot of research work that requires advanced degrees in scientific and semi-scientific disciplines (e.g., economics). They are certainly true of “blue collar” work that is rote, and in which the worker has no ownership stake.

There’s a relevant post at West Hunter which is short enough to quote in full:

Many have noted how difficult it is to persuade hunter-gatherers to adopt agriculture, or more generally, to get people to adopt a more intensive kind of agriculture.

It’s worth noting that, given the choice, few individuals pick the more intensive, more ‘civilized’ way of life, even when their ancestors have practiced it for thousands of years.

Benjamin Franklin talked about this. “When an Indian Child has been brought up among us, taught our language and habituated to our Customs, yet if he goes to see his relations and makes one Indian Ramble with them, there is no persuading him ever to return. [But] when white persons of either sex have been taken prisoners young by the Indians, and lived a while among them, tho’ ransomed by their Friends, and treated with all imaginable tenderness to prevail with them to stay among the English, yet in a Short time they become disgusted with our manner of life, and the care and pains that are necessary to support it, and take the first good Opportunity of escaping again into the Woods, from whence there is no reclaiming them.”

The life of the hunter-gatherer, however fraught, is less rationalized than the kind of life that’s represented by intensive agriculture, let alone modern manufacturing, transportation, wholesaling, retailing, and office work.

The hunter-gatherer isn’t a cog in a machine, he is the machine: the shareholder, the co-manager, the co-worker, and the consumer, all in one. His work with others is truly cooperative. It is like the execution of a game-winning touchdown by a football team, and unlike the passing of a product from stage to stage in an assembly line, or the passing of a virtual piece of paper from computer to computer.

The hunter-gatherer’s social milieu was truly societal:

Hunter-gatherer bands in the [Pleistocene] were in the range of 25 to 150 individuals: men, women, and children. These small bands would have sometimes formed larger agglomerations of up to a few thousand for the purpose of mate-seeking and defense, but this would have been unusual. The typically small size for bands meant that interactions within the group were face-to-face, with everyone knowing the name and something of the reputation and character of everyone else. Though group members would have engaged in some specialization of labor beyond the normal sex distinctions (men as hunters, women as gatherers), specialization would not have been strict: all men, for example, would haft adzes, make spears, find game, kill, and dress it, and hunt in bands of ten to twenty individuals. [From Denis Dutton’s review of Paul Rubin’s Darwinian Politics: The Evolutionary Origin of Freedom.]

Nor is the limit of 150 unique to hunter-gatherer bands:

[C]ommunal societies — like those our ancestors lived in, or in any human group for that matter — tend to break down at about 150. Such is perhaps due to our limited brain capacity to know any more people that intimately, but it’s also due to the breakdown of reciprocal relationships like those discussed above — after a certain number (again, around 150).

A great example of this is given by Richard Stroup and John Baden in an old article about communal Hutterite colonies. (Hutterites are sort of like the Amish — or more broadly like Mennonites — but settled in different areas of North America.) Stroup, an economist at Montana State University, shared with me his Spring 1972 edition of Public Choice, wherein he and political scientist John Baden write:

In a relatively small colony, the proportional contribution of each member is greater. Likewise, surveillance of him by each of the others is more complete and an informal accounting of contribution is feasible. In a colony, there are no elaborate systems of formal controls over a person’s contribution. Thus, in general, the incentive and surveillance structures of a small or medium-size colony are more effective than those of a large colony and shirking is lessened.

Interestingly, according to Stroup and Baden, once the Hutterites reach Magic Number 150, they have a tradition of breaking off and forming another colony. This idea is echoed in Gladwell’s The Tipping Point, wherein he discusses successful companies that use 150 in their organizational models.

Had anyone known about this circa 1848, someone might have told Karl Marx that his theory [communism] could work, but only up to the Magic Number. [From Max Borders’s “The Stone Age Trinity“.]

What all of this means, of course, is that for the vast majority of people there’s no going back. How many among us are willing — really willing — to trade our creature comforts for the “simple life”? Few would be willing when faced with the reality of what the “simple life” means; for example, catching or growing your own food, dawn-to-post-dusk drudgery, nothing resembling culture as we know it (high or low), and lives that are far closer to nasty, brutish, and short than today’s norms.

Given that, it is important (nay, crucial) to cultivate an inner life of intellectual or spiritual satisfaction. Only that inner life — and the love and friendship of a small circle of fellows — can hold alienation at bay. Only that inner life — and love and close friendships — can give us serenity as civilization crumbles around us.

(See also “Alienation” and “Another Angle on Alienation“.)

The Paradox That Is Western Civilization

The main weakness of Western civilization is a propensity to tolerate ideas and actions that would undermine it. The paradox is that the main strength of Western civilization is a propensity to tolerate ideas and actions that would strengthen it. The survival and improvement of Western civilization requires carefully balancing the two propensities. It has long been evident in continental Europe and the British Isles that the balance has swung toward destructive toleration. The United States is rapidly catching up to Europe. At the present rate the intricate network of social relationships and norms that has made America great will be destroyed within a decade. Israel, if it remains staunchly defensive of its heritage, will be the only Western nation still worthy of the name.

(See also “Conservatism, Society, and the End of America” and “Another Take on the State of America“.)

Ad-Hoc Hypothesizing and Data Mining

An ad-hoc hypothesis is

a hypothesis added to a theory in order to save it from being falsified….

Scientists are often skeptical of theories that rely on frequent, unsupported adjustments to sustain them. This is because, if a theorist so chooses, there is no limit to the number of ad hoc hypotheses that they could add. Thus the theory becomes more and more complex, but is never falsified. This is often at a cost to the theory’s predictive power, however. Ad hoc hypotheses are often characteristic of pseudoscientific subjects.

An ad-hoc hypothesis can also be formed from an existing hypothesis (a proposition that hasn’t yet risen to the level of a theory) when the existing hypothesis has been falsified or is in danger of falsification. The (intellectually dishonest) proponents of the existing hypothesis seek to protect it from falsification by putting the burden of proof on the doubters rather than where it belongs, namely, on the proponents.

Data mining is “the process of discovering patterns in large data sets”. It isn’t hard to imagine the abuses that are endemic to data mining; for example, running regressions on the data until the “correct” equation is found, and excluding or adjusting portions of the data because their use leads to “counterintuitive” results.
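The regression-mining abuse is easy to demonstrate. The following Python sketch (purely illustrative) generates an outcome that is nothing but noise, then “mines” 200 equally meaningless candidate predictors and keeps the best-fitting one; a respectable-looking correlation emerges by chance alone:

```python
import random
import statistics

random.seed(0)  # fixed seed so the demonstration is repeatable
n = 30

def correlation(xs, ys):
    """Pearson correlation coefficient of two samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

# An "outcome" that is nothing but random noise.
y = [random.gauss(0, 1) for _ in range(n)]

# Mine 200 random predictors -- none has any real relationship
# to y -- and keep the strongest apparent fit.
best_r = max(
    abs(correlation([random.gauss(0, 1) for _ in range(n)], y))
    for _ in range(200)
)
# best_r will typically look "significant" despite meaning nothing,
# because trying enough candidates guarantees a good fit by chance.
```

With 30 observations, the correlation between two independent noise series has a standard deviation of roughly 1/√(n−1) ≈ 0.19, so the best of 200 tries will routinely exceed 0.4 — strong enough to pass casual scrutiny, and utterly spurious.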

Ad-hoc hypothesizing and data mining are two sides of the same coin: intellectual dishonesty. The former is overt; the latter is covert. (At least, it is covert until someone gets hold of the data and the analysis, which is why many “scientists” and “scientific” journals have taken to hiding the data and obscuring the analysis.) Both methods are justified (wrongly) as being consistent with the scientific method. But the ad-hoc theorizer is just trying to rescue a falsified hypothesis, and the data miner is just trying to conceal information that would falsify his hypothesis.

From what I have seen, the proponents of the human activity>CO2>”global warming” hypothesis have been guilty of both kinds of quackery: ad-hoc hypothesizing and data mining (with a lot of data manipulation thrown in for good measure).

Another Take on the State of America

Samuel J. Abrams, writing at newgeography, alleges that “America’s Regional Variations Are Wildly Overstated“. According to Abrams,

[p]erhaps the most widely accepted and popular idea of regional differences comes from Colin Woodard who carves the country into 11 regional nations each with unique histories and distinct cultures that he believes has shaped the ideologies and politics at play today….

Woodard argues that regions project “[a] force that you feel that’s there, and those sort of assumptions and givens about politics, and culture, and different social relationships.” Yet the problem with Woodard’s argument is that while these histories and memoirs are fascinating, they are not necessarily representative of what drives politics and society among those living in various regions around the country. New data from the AEI survey on Community and Society makes it clear that recent accounts of America splintering does not hold up to empirical scrutiny and are appreciably overstated.

In what follows, you will see references to Woodard’s 11 “nations”.

Abrams, drawing on the AEI survey of which he is a co-author, tries to show how alike the “nations” are statistically; for example:

The Deep South … is widely viewed as a conservative bastion given its electoral history but the data tells [sic] a different story[:] 39% of those in the Deep South identify as somewhat, very, or extremely conservative while 23% are somewhat, very or extremely liberal. There are more residents in the region who identify or even lean to the right compared to the left but 37% of Southerners assert themselves as moderate or do not think about themselves ideologically at all. Thus the South is hardly a conservative monoculture – almost a quarter of the population is liberal. Similarly, in the progressive northeast region that is Yankeedom, only 31% of its residents state that they are liberal to some degree compared to 26% conservative but plurality is in the middle with 43%….

Religion presents a similar picture where 47% of Americans nationally hold that religious faith is central or very important to their lives and 10 of the 11 regions are within a handful points of the average except the Left Coast which drops to 26%….

The AEI survey asks about the number of close friends one has and 73% of Americans state that they have between 1 and 5 close friends today. Regional variation is minor here but what is notable is that Yankeedom with its urban history and density is actually the lowest at 68% while the Deep South and its sprawl has the highest rate of 81%.

Turning to communities specifically, the survey asks respondents about how well they know their neighbors. A majority, 54% of Americans, gave positive responses – very and fairly well. The Deep South, El Norte and Far West all came in at 49% – the low end – and at the high end was 61% for the Midlands and 58% for New England. The remaining regions were within a few points of the national average….

[T]he survey asked about helping out one’s neighbor by doing such things as watching each other’s children, helping with shopping, house sitting, picking up newspapers or packages, lending tools and other similar things. These are relatively small efforts and 38% of Americans help their neighbors a few times a month or more often. Once again, the regions hover around this average with the Far West, New Netherlands, and the Left Coast being right in the middle. Those in the Midlands and Yankeedom – New England – were at 41% and El Norte at 30% were the least helpful. As before, there are minor differences from the average but they are relatively small with no region being an outlier in terms of being far more or less engaged communally.

Actually, Abrams has admitted to some significant gaps:

The Deep South is 39 percent conservative; Yankeedom, only 26 percent.

The Left Coast is markedly less religious than the rest of the country.

Denizens of the Deep South have markedly more friends than do inhabitants of Yankeedom (a ratio of 81:68).

Residents of the Midlands and New England are much more neighborly than are residents of The Deep South, El Norte, and the Far West (ratios of 61:49 and 58:49).

Residents of The Midlands and Yankeedom are much more helpful to their neighbors than are residents of El Norte (ratio of 41:30).

It’s differences like those that distinguish the regions. Abrams’s effort to minimize the differences is akin to saying that humans and chimps are pretty much alike because 96 percent of human genes are the same as chimp genes.

Moreover, Abrams hasn’t a thing to say about trends. Based on the following trends, it’s hard not to conclude that regional differences are growing:

Call me a cock-eyed pessimist.

Viewing Recommendations: TV Series and Mini-Series

My wife and I have watched many a series and mini-series. Some of them predate the era of VHS, DVD, and streaming, though many of those are now available on DVD. Our long list of favorites includes these (right-click a link to open it in a new tab):

Better Call Saul
Rumpole of the Bailey
Slings and Arrows
Pride and Prejudice
Cold Lazarus
Karaoke
Love in a Cold Climate
Oliver Twist
Bleak House
The Six Wives of Henry VIII
Danger UXB
Lonesome Dove
Sunset Song
Lillie
Vienna 1900
The Durrells in Corfu
The Wire
The Glittering Prizes
Bron/Broen
Wallander
Little Dorrit
Justified
Cracker
Pennies from Heaven
Mad Men
The Sopranos
Charters & Caldicott
Reckless
Our Mutual Friend
The First Churchills
The Unpleasantness at the Bellona Club
Murder Must Advertise
The Nine Tailors
Cakes and Ale
Madame Bovary
I, Claudius
Smiley’s People
Reilly: Ace of Spies
Prime Suspect
The Norman Conquests
Bramwell
Prime Suspect 2
Prime Suspect 3
Mystery!: Cadfael
Prime Suspect 5: Errors of Judgement
David Copperfield
Prime Suspect 6: The Last Witness
The Forsyte Saga
Elizabeth R
Jude the Obscure
Clouds of Witness
Country Matters
Notorious Woman
Five Red Herrings
Anna Karenina
Brideshead Revisited
To Serve Them All My Days

If you have more than a passing acquaintance with this genre, you will recognize that almost all of the fare is British. The Brits seem to have a near-lock on good acting and literate and clever writing.

Enjoy!

Conservatism, Society, and the End of America

To be wary or skeptical of that which is different is not a matter of close-mindedness, hate, racism, or other such “evil” tendencies. Wariness and skepticism, rather, are deep-seated and salutary survival instincts. They are evolved psychological responses analogous to the physiological phenomenon of foreign-body reaction.

Wariness and skepticism are the basis of conservatism: the preference for ideas, methods, materials, and customs which have been repeatedly tested in the acid of use. Conservatism is the opposite of novelty for novelty’s sake, thrill-seeking, and hope-based change — from which stem many an unwanted consequence.

Society, properly understood, is conservative. A society is an enduring and cooperating social group whose members have developed organized patterns of relationships through interaction with one another. No one who gives it much thought would say that America is or ever was a society. The word is used too loosely. But America was, from the aftermath of the Civil War until the early 1960s, at least, an interlocking set of societies, bound more or less tightly by shared social norms (not the least of them being an unashamed belief in the Judeo-Christian God), a common language (most immigrants sought to assimilate), pride in what “America” stood for (remember the Pledge of Allegiance?), and a willingness to defend a nation under the Constitution and laws of which Americans enjoyed a great deal more freedom and prosperity than the denizens of most other nations.

“Liberalism” of the kind fomented by the Enlightenment, by political philosophers like J.S. Mill, and by today’s leftists (including most so-called libertarians), is insidiously destructive of society. “Liberalism” denigrates and attacks the things that bind people, most notably social norms (which include religious ones) and patriotism. (Leftists, ironically, attack identification with America and its history — much of it proud — while touting the virtues of various and sundry identity groups.)

It is no great exaggeration to say that America is no more. Where once upon a time products could be sold by appealing to “baseball, mom, hot dogs, and apple pie”, the slogan would now invite scorn and ridicule throughout much of the land, and especially on the two Left Coasts.

America (taking it as a collective for the moment) has lost its soul, like continental Europe and the British Isles before it. By soul, I mean the common beliefs and norms that bound most Americans as Americans.

There are still remnants of “Old America” where “baseball, mom, hot dogs, and apple pie” hold appeal — especially when coupled with God. But the left, with the connivance of the internet-media-academic complex, has marginalized “Old America”. Even to speak of traditional marriage, personal responsibility, limited government, color-blind justice, the importance of two-parent families, genetic inheritance, performance-based advancement, religion as a civilizing influence, science as a method (not a producer of “truth” to be worshiped), etc., is to be branded a far-right fanatic who is unfit to hold public office and who should be publicly and vocally scolded (or worse) as a privileged white racist, sexist, homophobe, transphobe, Islamophobe, hater, and science-denier.

How did it happen? How did “Old America” spawn something that is its opposite, nay, its enemy? How did “Old America” spawn forces of suppression that daily seem to grow more powerful in their ability to ostracize, penalize, and dictate to the rest of us? How did this new dispensation come to dominate the institutions that shape culture: academia, public education, (many) churches, the media (including “news” and “entertainment”), and much of the political machinery of America?

I would say these three things, for a start:

Prosperity has separated most Americans from “real life” and thus from the need for wariness and caution.

The decades-long dominance of leftist ideas in most public schools has fostered the emergence of the left-biased and vastly influential information-entertainment-media-academic complex.

Politicians, whose power has been elevated to undreamed of heights by the abrogation of the Constitution, have joined the leftward throng, when they haven’t been leading it. In particular, government has subverted conservative ideals (marriage before family, hard work rather than handouts and crime, etc.).

I believe that the situation is irredeemable. Many interconnected trends are at work, and they will not cease their work unless they are interrupted by a cataclysm of some kind that forces most Americans to confront “real life” and cooperate in survival.

The Complexity of Race

Bill Vallicella takes issue with Pat Buchanan’s recent discussion of Trump’s supposed racism. Buchanan says:

[W]hat is racism?

Is it not a manifest dislike or hatred of people of color because of their color? Trump was not denouncing the ethnicity or race of Ilhan Omar in his rally speech. He was reciting and denouncing what Omar said, just as Nancy Pelosi was denouncing what Omar and the Squad were saying and doing when she mocked their posturing and green agenda.

BV says:

Buchanan’s definition is on the right track except that he conflates race with skin color, which is but a superficial phenotypical indicator of race.

True enough. Race is more than skin deep, and skin color is among the least significant manifestations of racial differences. More significant manifestations include certain physical proclivities (e.g., “white men can’t jump”) and marked differences in the distribution of intelligence.

Races are nothing more (or less) than subspecies of Homo sapiens under this taxonomy:

Kingdom: Animalia
Phylum: Chordata
Class: Mammalia
Order: Primates
Suborder: Haplorhini
Infraorder: Simiiformes
Family: Hominidae
Subfamily: Homininae
Tribe: Hominini
Genus: Homo
Species: Homo sapiens

It is hard to pin down races (subspecies) with great precision. There are gradations of differences within the broadly defined races (Caucasoid, Mongoloid, and Negroid). Consider, for one example, the range of “subraces” comprised in the Mongoloid category.

There will be even more gradations as a result of international mobility and the erosion of social barriers between whites, Asians, blacks, and Latinos (or Hispanics) — the latter of which include groups that are admixtures of Caucasoids (mainly Spaniards) and Mongoloids (various Amerindian strains of long-ago migrants from Asia).

The subtlety of racial gradations is captured by the (not uncontroversial) study of genetic clustering:

Genetic structure studies are carried out using statistical computer programs designed to find clusters of genetically similar individuals within a sample of individuals….

These clusters are based on multiple genetic markers that are often shared between different human populations even over large geographic ranges. The notion of a genetic cluster is that people within the cluster share on average similar allele frequencies to each other than to those in other clusters….

A major finding of Rosenberg and colleagues … was that when five clusters were generated by the program (specified as K=5), “clusters corresponded largely to major geographic regions.” Specifically, the five clusters corresponded to Africa, Europe plus the Middle East plus Central and South Asia, East Asia, Oceania, and the Americas. The study also confirmed prior analyses by showing that, “Within-population differences among individuals account for 93 to 95% of genetic variation; differences among major groups constitute only 3 to 5%.” [But significant differences flow from the 3 to 5 percent, such as the aforementioned differences in athletic ability and intelligence.] …

Rosenberg and colleagues … have argued, based on cluster analysis, that populations do not always vary continuously and a population’s genetic structure is consistent if enough genetic markers (and subjects) are included. “Examination of the relationship between genetic and geographic distance supports a view in which the clusters arise not as an artifact of the sampling scheme, but from small discontinuous jumps in genetic distance for most population pairs on opposite sides of geographic barriers, in comparison with genetic distance for pairs on the same side….

Think of “K” as the number of clusters the program is asked to find: at K=1 everyone falls into a single undifferentiated group, while at K=7 the analysis draws much finer distinctions. Here, for example, is a graphical presentation of the result of a K=7 analysis:

From: Rosenberg NA, Mahajan S, Gonzalez-Quevedo C, Blum MGB, Nino-Rosales L, et al., “Low Levels of Genetic Divergence across Geographically and Linguistically Diverse Populations from India”, PLoS Genetics, Vol. 2, No. 12, e215, doi:10.1371/journal.pgen.0020215 (fig. 2A). Original caption: “Representative estimate of population structure for 1,384 individuals from worldwide populations, including 432 individuals from India. The plot represents the highest-likelihood run among ten STRUCTURE runs with K = 7 clusters. Eight of the other nine runs identified a cluster largely corresponding to India, and five of these eight produced plots nearly identical to the one shown.”

The presence of distinct physical and political-cultural boundaries is obvious in the sharp breaks that occur at five points (reading down from the top). Also striking is the closeness of the clustering patterns for Europe, North Africa, and the near Middle East. (The inhabitants of those areas used to be identified as Caucasoid.)
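The kind of grouping the quoted study describes can be illustrated with a toy example. The Python sketch below is illustrative only — STRUCTURE itself fits a Bayesian admixture model over many loci, not k-means — and the two simulated “allele frequency” coordinates are invented for the purpose. It shows the core idea: individuals cluster by genetic similarity, and the clusters recover the source populations.

```python
import random

random.seed(1)

# Simulated individuals from two source populations, each described by
# two made-up "allele frequency" coordinates (an assumption for
# illustration; real studies use hundreds of markers per individual).
pop_a = [(random.gauss(0.2, 0.05), random.gauss(0.8, 0.05)) for _ in range(20)]
pop_b = [(random.gauss(0.7, 0.05), random.gauss(0.3, 0.05)) for _ in range(20)]
points = pop_a + pop_b

def kmeans(points, k=2, iters=20):
    """Naive k-means: assign each point to its nearest center, then
    recompute centers as group means, for a fixed number of passes."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            groups[i].append(p)
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else c  # keep the old center if a group goes empty
            for g, c in zip(groups, centers)
        ]
    return groups

clusters = kmeans(points)  # with k=2, well-separated populations
                           # typically fall into separate clusters
```

As in the quoted study, the “discontinuous jumps in genetic distance” between populations are what make the clusters recoverable; if the two simulated populations overlapped heavily, no clustering method could separate them cleanly.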

Where an American belongs on the graph depends on where his ancestors came from. Despite much genetic mixing, the origins of most Americans are still readily identifiable — especially “Americans” like Ilhan Omar and Alexandria Ocasio-Cortez. And so race — as we have known it — is still an important distinction that can’t be removed simply by saying that “race is a social construct” or “race is only skin deep”.

(See also “Race and Reason: The Achievement Gap — Causes and Implications“, “The IQ of Nations“, “Why Race Matters“, “Is Race a Social Construct?“, and “Real Americans“.)

Did Nixon Get a Bum Rap?

Geoff Shepard, writing at The American Spectator (“Troubling Watergate Revelations, Too Late to Matter“), argues in the affirmative:

August 9 is the 45th anniversary of the resignation of Richard Nixon [on this date in 1974], the only president in American history to resign or be removed from office. We know what triggered his resignation. He was already on the ropes after two and a half years of Watergate revelations, but what ended any and all defense was the release of the “smoking gun” transcript on August 5 [1974]. It showed that Nixon had concurred with his staff’s suggestion that they get the CIA to tell the FBI not to interview two Watergate witnesses.

As astonishing as it may be to Americans, who have been assured that the smoking gun tape is proof positive of Nixon’s early cover-up involvement, every person connected to that particular conversation now agrees that the CIA gambit was an effort to prevent disclosure of prominent Democrats who had made substantial contributions to Nixon’s re-election campaign under assurances of absolute secrecy.

I should know. I was there: a member of Nixon’s Watergate defense team, the third person to hear the smoking gun tape, the one who first transcribed it, and the one who termed it “the smoking gun.” Here is a much fuller explanation of what actually happened. But the bottom line remains unchanged. Nixon’s Watergate defense lawyers completely misinterpreted the tape, and their mistake ended his presidency….

But Nixon did resign — in the aftermath of the release of the smoking gun transcript. Three months later, when prosecutors sought to prove their allegation of Nixon’s personal approval during the course of the Watergate cover-up trial — with their witnesses having to testify under oath and subject to cross-examination — they were totally unable to do so….

By this time, however, the total refutation of their secret allegation concerning Nixon’s payoff instructions had become irrelevant. Nixon had resigned the previous August, and the smoking gun tape seemed to prove his early cover-up involvement in any event. Since no one knew of their allegation of Nixon’s personal wrongdoing, it was as though it had never happened, and no one could claim that Nixon had been unfairly hounded from office. The underlying facts — and their significance — have only emerged in recent years.

I will leave it to the reader to parse Mr. Shepard’s full argument, which includes portions of the transcript of the “smoking gun” conversation, which occurred on June 23, 1972. (There is a more complete version here.) I will say only this: If the “smoking gun” was not really a “smoking gun”, as Mr. Shepard argues, then Mr. Nixon probably got a bum rap.

Why? Because The New York Times published, in May 1974, The White House Transcripts, a compendium of the transcripts of Oval Office conversations pertaining to Watergate that had been released before the so-called smoking gun tape emerged. In those days, when the Times was still relatively fair and balanced — and dealt mainly in news rather than opinion — R.W. Apple concluded the book’s introduction with this:

Throughout the period of the Watergate affair the raw material of these recorded confidential conversations establishes that the President had no prior knowledge of the break-in and that he had no knowledge of any cover-up prior to March 21, 1973. In all of the thousands of words spoken, even though they often are unclear and ambiguous, not once does it appear that the President of the United States was engaged in a criminal plot to obstruct justice.

On March 21, 1973, when the President learned for the first time of allegations of such a plot and an alleged attempt to blackmail the White House, he sought to find out the facts first from John Dean and then others. When it appeared as a result of these investigations that there was reason to believe that there may have been some wrongdoing he conferred with the Attorney General and with the Assistant in charge of the criminal division of the Department of Justice and cooperated fully to bring the matter expeditiously before the grand jury.

Ultimately Dean has pled guilty to a felony and seven former White House officials stand indicted. Their innocence or guilt will be determined in a court of law.

This is as it should be.

The recent acquittals of former Secretary Stans and former Attorney General Mitchell in the Vesco case demonstrate the wisdom of the President’s actions in insisting that the orderly process of the judicial system be utilized to determine the guilt or innocence of individuals charged with crime, rather than participating in trials in the public media [emphasis added].

In any event, the “smoking gun” tape proved to be Nixon’s undoing:

Once the “smoking gun” transcript was made public, Nixon’s political support practically vanished. The ten Republicans on the House Judiciary Committee who had voted against impeachment in committee announced that they would now vote for impeachment once the matter reached the House floor. He lacked substantial support in the Senate as well; Barry Goldwater and Hugh Scott estimated no more than 15 Senators were willing to even consider acquittal. Facing certain impeachment in the House of Representatives and equally certain conviction in the Senate, Nixon announced his resignation on the evening of Thursday, August 8, 1974, effective as of noon the next day.

A gross miscarriage of justice? I report, you decide.

Back in Business

I have returned from Realities, where I set up shop in April 2019. I had hoped that the change of venue would result in short, punchy posts. And that’s the way I started, but it’s not in my nature to say a little bit when there’s a lot to be said. So here I am again.

I’ve brought back with me all of the posts that I published at Realities from April 2, 2019, through August 5, 2019. They’re reproduced here, bearing the same dates as they had at Realities. If you didn’t read them there, now’s your chance to catch up.

The Learning Curve and the Flynn Effect

I first learned of the learning curve when I was a newly hired analyst at a defense think-tank. A learning curve

is a graphical representation of how an increase in learning (measured on the vertical axis) comes from greater experience (the horizontal axis); or how the more someone (or something) performs a task, the better they [sic] get at it.

In my line of work, the learning curve figured importantly in the estimation of aircraft procurement costs. There was a robust statistical relationship between the cost of making a particular model of aircraft and the cumulative number of such aircraft produced. Armed with the learning-curve equation and the initial production cost of an aircraft, it was easy to estimate the cost of producing any number of the same aircraft.
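The standard form of that relationship is a power law: the cost of the nth unit equals the first-unit cost times n raised to an exponent derived from the learning rate. On an 80-percent curve, for instance, each doubling of cumulative output cuts unit cost by 20 percent. Here is a minimal sketch of that arithmetic; the dollar figures and the 80-percent rate are illustrative, not actual aircraft data:

```python
import math

def unit_cost(first_unit_cost, n, learning_rate=0.80):
    """Cost of the nth unit under a unit learning-curve model.

    Each doubling of cumulative production multiplies unit cost by
    `learning_rate` (e.g., 0.80 means a 20% cost reduction per doubling).
    """
    b = math.log(learning_rate) / math.log(2)  # curve exponent (negative)
    return first_unit_cost * n ** b

def program_cost(first_unit_cost, quantity, learning_rate=0.80):
    """Total cost of producing `quantity` units, summing unit by unit."""
    return sum(unit_cost(first_unit_cost, n, learning_rate)
               for n in range(1, quantity + 1))

# Illustrative: a $100M first unit on an 80% curve
print(round(unit_cost(100.0, 2), 1))   # 2nd unit costs 80.0
print(round(unit_cost(100.0, 4), 1))   # 4th unit costs 64.0 (two doublings)
print(round(program_cost(100.0, 100)))  # total for a 100-aircraft buy
```

Note how quickly the learning effect accumulates: the total cost of a 100-unit buy is far below 100 times the first-unit cost.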

The learning curve figures prominently in tests that purport to measure intelligence. Two factors that may explain the Flynn effect — a secular rise in average IQ scores — are aspects of learning: schooling and test familiarity, and a generally more stimulating environment in which one learns more. The Flynn effect doesn’t measure changes in intelligence; it measures changes in IQ scores that result from learning. There is an essential difference between ignorance and stupidity. The Flynn effect is about the former, not the latter.

Here’s a personal example of the Flynn effect in action. I’ve been doing The New York Times crossword puzzle online since February 18 of this year. I have completed all 170 puzzles published by TNYT from that date through today, with generally increasing ease:

The difficulty of the puzzle varies from day to day, with Monday puzzles being the easiest and Sunday puzzles being the hardest (as measured by time to complete). For each day of the week, my best time is more recent than my worst time, and the trend of time to complete is sharply downward for every day of the week (as reflected in the graph above).

I know that I haven’t become more intelligent in the last 24 weeks. And being several decades past the peak of my intelligence, I am certain that it diminishes daily, though only fractionally so (I hope). I have simply become more practiced at doing the crossword puzzle because I have learned a lot about it. For example, certain clues recur with some frequency, and they always have the same answers. Clues often have double meanings, which were hard to decipher at first, but which have become easier to decipher with practice. There are other subtleties, all of which reflect the advantages of learning.

In a nutshell, I am no smarter than I was 24 weeks ago, but my ignorance of the TNYT crossword puzzle has diminished significantly.

(See also “More about Intelligence“, “Selected Writings about Intelligence“, and especially “Intelligence“.)

Modeling Is Not Science: Another Demonstration

The title of this post is an allusion to an earlier one: “Modeling Is Not Science“. This post addresses a model that is the antithesis of science. It seems to have been extracted from the ether. It doesn’t prove what its authors claim for it. It proves nothing, in fact, but the ability of some people to dazzle other people with mathematics.

In this case, a writer for MIT Technology Review waxes enthusiastic about

the work of Alessandro Pluchino at the University of Catania in Italy and a couple of colleagues. These guys [sic] have created a computer model of human talent and the way people use it to exploit opportunities in life. The model allows the team to study the role of chance in this process.

The results are something of an eye-opener. Their simulations accurately reproduce the wealth distribution in the real world. But the wealthiest individuals are not the most talented (although they must have a certain level of talent). They are the luckiest. And this has significant implications for the way societies can optimize the returns they get for investments in everything from business to science.

Pluchino and co’s [sic] model is straightforward. It consists of N people, each with a certain level of talent (skill, intelligence, ability, and so on). This talent is distributed normally around some average level, with some standard deviation. So some people are more talented than average and some are less so, but nobody is orders of magnitude more talented than anybody else….

The computer model charts each individual through a working life of 40 years. During this time, the individuals experience lucky events that they can exploit to increase their wealth if they are talented enough.

However, they also experience unlucky events that reduce their wealth. These events occur at random.

At the end of the 40 years, Pluchino and co rank the individuals by wealth and study the characteristics of the most successful. They also calculate the wealth distribution. They then repeat the simulation many times to check the robustness of the outcome.

When the team rank individuals by wealth, the distribution is exactly like that seen in real-world societies. “The ‘80-20’ rule is respected, since 80 percent of the population owns only 20 percent of the total capital, while the remaining 20 percent owns 80 percent of the same capital,” report Pluchino and co.

That may not be surprising or unfair if the wealthiest 20 percent turn out to be the most talented. But that isn’t what happens. The wealthiest individuals are typically not the most talented or anywhere near it. “The maximum success never coincides with the maximum talent, and vice-versa,” say the researchers.

So if not talent, what other factor causes this skewed wealth distribution? “Our simulation clearly shows that such a factor is just pure luck,” say Pluchino and co.

The team shows this by ranking individuals according to the number of lucky and unlucky events they experience throughout their 40-year careers. “It is evident that the most successful individuals are also the luckiest ones,” they say. “And the less successful individuals are also the unluckiest ones.”

The writer, who is dazzled by pseudo-science, gives away his Obamanomic bias (“you didn’t build that“) by invoking fairness. Luck and fairness have nothing to do with each other. Luck is luck, and it doesn’t make the beneficiary any less deserving of the talent, or legally obtained income or wealth, that comes his way.

In any event, the model in question is junk. To call it junk science would be to imply that it’s just bad science. But it isn’t science; it’s a model pulled out of thin air. The modelers admit this in the article cited by the Technology Review writer, “Talent vs. Luck, the Role of Randomness in Success and Failure“:

In what follows we propose an agent-based model, called “Talent vs Luck” (TvL) model, which builds on a small set of very simple assumptions, aiming to describe the evolution of careers of a group of people influenced by lucky or unlucky random events.

We consider N individuals, with talent Ti (intelligence, skills, ability, etc.) normally distributed in the interval [0, 1] around a given mean mT with a standard deviation σT, randomly placed in fixed positions within a square world (see Figure 1) with periodic boundary conditions (i.e. with a toroidal topology) and surrounded by a certain number NE of “moving” events (indicated by dots), someone lucky, someone else unlucky (neutral events are not considered in the model, since they have not relevant effects on the individual life). In Figure 1 we report these events as colored points: lucky ones, in green and with relative percentage pL, and unlucky ones, in red and with percentage (100 − pL). The total number of event-points NE are uniformly distributed, but of course such a distribution would be perfectly uniform only for NE → ∞. In our simulations, typically will be NE ≈ N/2: thus, at the beginning of each simulation, there will be a greater random concentration of lucky or unlucky event-points in different areas of the world, while other areas will be more neutral. The further random movement of the points inside the square lattice, the world, does not change these fundamental features of the model, which exposes different individuals to different amounts of lucky or unlucky events during their life, regardless of their own talent.

In other words, this is a simplistic, completely abstract model set in a simplistic, completely abstract world, using only the authors’ assumptions about the values of a small number of abstract variables and the effects of their interactions. Those variables are “talent” and two kinds of event: “lucky” and “unlucky”.

What could be further from science — actual knowledge — than that? The authors effectively admit the model’s complete lack of realism when they describe “talent”:

[B]y the term “talent” we broadly mean intelligence, skill, smartness, stubbornness, determination, hard work, risk taking and so on.

Think of all of the ways that those various — and critical — attributes vary from person to person. “Talent”, in other words, subsumes an array of mostly unmeasured and unmeasurable attributes, without distinguishing among them or attempting to weight them. The authors might as well have called the variable “sex appeal” or “body odor”. For that matter, given the complete abstractness of the model, they might as well have called its three variables “body mass index”, “elevation”, and “race”.

It’s obvious that the model doesn’t account for the actual means by which wealth is acquired. In the model, wealth is just the mathematical result of simulated interactions among an arbitrarily named set of variables. It’s not even a multiple regression model based on statistics. (Although no set of statistics could capture the authors’ broad conception of “talent”.)

The modelers seem surprised that wealth isn’t normally distributed. But that wouldn’t be a surprise if they were to consider that wealth represents a compounding effect, which naturally favors those with higher incomes over those with lower incomes. But they don’t even try to model income.
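The compounding point is easy to illustrate with a toy calculation (the incomes, living cost, and rate of return below are assumed numbers, not data). Because savings are the residual above fixed living costs, a 2:1 income ratio can yield a far larger wealth ratio, and compound returns then magnify the absolute gap over a working life:

```python
def wealth_after(income, years=40, living_cost=45_000, annual_return=0.05):
    """Wealth from saving the annual surplus (income minus living costs),
    with accumulated savings compounding at `annual_return` per year."""
    surplus = max(0.0, income - living_cost)
    wealth = 0.0
    for _ in range(years):
        wealth = wealth * (1 + annual_return) + surplus
    return wealth

# Hypothetical earners with a 2:1 income ratio (all figures illustrative)
low = wealth_after(50_000)    # annual surplus:  5,000
high = wealth_after(100_000)  # annual surplus: 55,000
print(round(high / low, 1))   # wealth ratio is 11.0 -- far more skewed than 2:1
```

Even this crude sketch produces a wealth distribution more skewed than the income distribution, with no appeal to “luck” at all.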

So when wealth (as modeled) doesn’t align with “talent”, the discrepancy — according to the modelers — must be assigned to “luck”. But a model that lacks any nuance in its definition of variables, any empirical estimates of their values, and any explanation of the relationship between income and wealth cannot possibly tell us anything about the role of luck in the determination of wealth.

At any rate, it is meaningless to say that the model is valid because its results mimic the distribution of wealth in the real world. The model itself is meaningless, so any resemblance between its results and the real world is coincidental (“lucky”) or, more likely, contrived to resemble something like the distribution of wealth in the real world. On that score, the authors are suitably vague about the actual distribution, pointing instead to various estimates.

(See also “Modeling, Science, and Physics Envy” and “Modeling Revisited“.)

“That’s Not Who We Are”

I had been thinking recently about that meaningless phrase, and along came Bill Vallicella’s post to incite this one. As BV says, it’s a stock leftist exclamation. I don’t know when or where it originated. But I recall that it was used a lot on The West Wing, about which I say this in “Sorkin’s Left-Wing Propaganda Machine“:

I endured The West Wing for its snappy dialogue and semi-accurate, though cartoonish, depictions of inside politics. But by the end of the series, I had tired of the show’s incessant propagandizing for leftist causes….

[The] snappy dialogue and semi-engaging stories unfold in the service of bigger government. And, of course, bigger is better because Aaron Sorkin makes it look that way: a wise president, crammed full of encyclopedic knowledge; staffers whose IQs must qualify them for the Triple Nine Society, and whose wit crackles like lightning in an Oklahoma thunderstorm; evil Republicans whose goal in life is to stand in the way of technocratic progress (national bankruptcy and the loss of individual freedom don’t rate a mention); and a plethora of “worthy” causes that the West-Wingers seek to advance, without regard for national bankruptcy and individual freedom.

The “hero” of The West Wing is President Josiah Bartlet[t], who — as played by Martin Sheen — is an amalgam of Bill Clinton (without the sexual deviancy), Charles Van Doren (without the venality), and Daniel Patrick Moynihan (without the height).

Getting back to “That’s not who we are”, it refers to any policy that runs afoul of leftist orthodoxy: executing murderers, expecting people to work for a living, killing terrorists without the benefit of a jury trial, etc., etc., etc.

When you hear “That’s not who we are” you can be sure that whatever it refers to is a legitimate defense of liberty. An honest leftist (oxymoron alert) would say of liberty: “That’s not who we (leftists) are.”

Thoughts about Mass Shootings

The shootings yesterday and today in El Paso and Dayton have, of course, redoubled the commitment of Democrats to something called “gun control”. This is nothing more than another instance of the left’s penchant for magical thinking.

The root of the problem isn’t a lack of “gun control”, it’s a lack of self-control — a lack that has become endemic to America since the 1960s. As I say in “Mass Murder: Reaping What Was Sown“, that lack is caused by (among other things):

  • governmental incentives to act irresponsibly, epitomized by the murder of unborn children as a form of after-the-fact birth control, and more widely instituted by the vast expansion of the “social safety net”
  • treatment of bad behavior as an illness (with a resulting reliance on medications), instead of putting a stop to it and punishing it
  • the erosion and distortion of the meaning of justice, beginning with the virtual elimination of the death penalty, continuing on to the failure to put down and punish riots, and culminating in the persecution and prosecution of persons who express the “wrong” opinions
  • governmental encouragement and subsidization of the removal of mothers from the home to the workplace
  • the decline of two-parent homes and the rise of illegitimacy
  • the complicity of government officials who failed to enforce existing laws and actively promoted leniency in their enforcement (see this and this, for example).

It is therefore

entirely reasonable to suggest that mass murder … is of a piece with violence in America, which increased rapidly after the 1960s and has been contained only by dint of massive incarceration. Violence in general and mass murder in particular flow from the subversion and eradication of civilizing social norms, which began in earnest in the 1960s. The numbers bear me out.

Drawing on Wikipedia, I compiled a list of 317 incidents of mass murder in the United States from the early 1800s through 2017….

These graphs are derived from the consolidated list of incidents:


The vertical scale is truncated to allow for a better view of the variations in the casualty rate. In 1995, there were 869 casualties in 3 incidents (an average of 290); about 850 of the casualties resulted from the Oklahoma City bombing.

The federal assault weapons ban — really a ban on the manufacture of new weapons of certain kinds — is highlighted because it is often invoked as the kind of measure that should be taken to reduce the incidence of mass murders and the number of casualties they produce. Even Wikipedia — which is notoriously biased toward the left — admits (as of today) that “the ban produced almost no significant results in reducing violent gun crimes and was allowed to expire.”

There is no compelling, contrary evidence in the graphs. The weapons-ban “experiment” was too limited in scope and too short-lived to have had any appreciable effect on mass murder. For one thing, mass murderers are quite capable of using weapons other than firearms. The years with the three highest casualty rates (second graph) are years in which most of the carnage was caused by arson (1958) and bombing (1995 and 2013).

The most obvious implication of this analysis is found in the upper graph. The incidence of mass murders was generally declining from the early 1900s to the early 1960s. Then all hell broke loose.

I rest my case.

(See also “Reductio ad Sclopetum, or Getting to the Bottom of ‘Gun Control’“, “‘This Has to Stop’“, and “Utilitarianism vs. Liberty“, especially UTILITARIANISM AND GUN CONTROL VS. LIBERTY.)

Thaler’s Fatuousness

Richard Thaler, with whom I had a nodding acquaintance many years ago, is one of my least favorite economists — and a jerk, to boot. (See, for example, “The Perpetual Nudger“, “Richard Thaler, Nobel Laureate“, “Thaler’s Non-Revolution in Economics“, “Another (Big) Problem with ‘Nudging’“, and “Thaler on Discounting“.) What the world needs isn’t a biography of the nudger-in-chief, but that’s what the world now has, no thanks to The Library of Economics and Liberty, where the mercifully brief bio is posted.

In it, the reader is treated to such “wisdom” as this:

Economists generally assume that more choices are better than fewer choices. But if that were so, argues Thaler, people would be upset, not happy, when the host at a dinner party removes the pre-dinner bowl of cashews. Yet many of us are happy that it’s gone. Purposely taking away our choice to eat more cashews, he argues, makes up for our lack of self-control.

Notice the sleight of hand by which the preferences of a few (including Thaler, presumably) are pushed front and center: “many of us are happy”. Who is “us”? And what about the preferences of everyone else, who may well comprise a majority? Thaler is happy because the host has taken an action of which he (Thaler) approves, and because he (Thaler) wants to tell the rest of us what makes us happy.

There’s more:

Thaler … noticed another anomaly in people’s thinking that is inconsistent with the idea that people are rational. He called it the “endowment effect.” People must be paid much more to give something up (their “endowment”) than they are willing to pay to acquire it. So, to take one of his examples from a survey, people, when asked how much they are willing to accept to take on an added mortality risk of one in one thousand, would give, as a typical response, the number $10,000. But a typical response by people, when asked how much they would pay to reduce an existing risk of death by one in one thousand, was $200.

Surveys are meaningless. Talk is cheap (see #5 here).

Even if the survey results are somewhat accurate, in that there is a significant gap between the two values, there is a rational explanation for such a gap. In the first instance, a person is (in theory) accepting an added risk, one that he isn’t already facing. In the second instance, the existing risk may be one that the person being asked considers to be very low, as applied to himself. The situations clearly aren’t symmetrical, so it’s unsurprising that the price of accepting a new risk is higher than the payment for reducing a possible risk.

That’s enough of Thaler. More than enough.

Why Populism Is Popular

Affluent “elites” are well-insulated from the consequences of the policies that they promote and enact. Sometimes the elites justify those policies because they are “good” for an amorphous aggregation (“the country”, “the people”, GDP). Thus, for example, elites favor “free trade” and “open borders” because they (supposedly) result in a net gain in GDP. That the gain is net of the losses incurred by many taxpayers and victims of crime is irrelevant to the elites.

And when elites are promoting “social justice” they favor policies that are best for a particular group, to the exclusion of other groups. It doesn’t start that way, but that’s how it ends up, because the easiest way to “make things right” for a particular group is to penalize others, as in “affirmative action”, “affordable (tax-funded) housing”, suppression of speech, etc.

You get the idea. Elites stroke their own egos in the pursuit of abstract measures of “good”, and disdain those who are harmed, calling them — among many things — “bitter clingers”, “deplorables”, and denizens of “flyover country”.

I have been guilty of elitism, but I am cured of it. I have joined the no-longer-silent majority. No-longer-silent thanks to Donald Trump — praise be!

(See also “Modern Utilitarianism“, “Rethinking Free Trade“, and “Rooted in the Real World of Real People“.)