Here, with good news for Trump supporters.
I am reading (thanks to my son) Warning to the West, a collection of five speeches given by Alexander Solzhenitsyn in 1975 and 1976.
Solzhenitsyn, in the second of the speeches, says:
Communism is as crude an attempt to explain society and the individual as if a surgeon were to perform his delicate operations with a meat ax. All that is subtle in human psychology and in the structure of society (which is even more complex), all of this is reduced to crude economic processes.
It is also reduced to “class”, in a simplistic way that would be laughable were it not taken as gospel by so many. Thus the opening sentence of Part I of The Communist Manifesto:
The history of all hitherto existing societies is the history of class struggles.
Oh, really? Only if one omits war, flood, famine, invention, innovation, entrepreneurship, religion, morality, science, and myriad other facets of human life.
Or, as Solzhenitsyn puts it in the same speech:
Communism is so devoid of arguments that it has none to advance against its opponents in … Communist countries. It lacks arguments and hence there is the club, the prison, the concentration camp, and insane asylums with forced confinement.
Marxism has always opposed freedom…. In their correspondence Marx and Engels frequently stated that terror would be indispensable after achieving power, that “it will be necessary to repeat the year 1793 [the year of the Reign of Terror following the French Revolution of 1789]. After achieving power we’ll be considered monsters, but we couldn’t care less.”
None of this should come as a surprise to anyone who takes a bit of time to read the Manifesto, which at 11,000 words is just a long essay. The heart of it is in Part II:
[T]he first step in the revolution by the working class, is to raise the proletariat to the position of ruling class, to win the battle of democracy.
The proletariat will use its political supremacy to wrest, by degrees, all capital from the bourgeoisie, to centralise all instruments of production in the hands of the State, i.e., of the proletariat organised as the ruling class; and to increase the total of productive forces as rapidly as possible.
Of course, in the beginning, this cannot be effected except by means of despotic inroads on the rights of property, and on the conditions of bourgeois production; by means of measures, therefore, which appear economically insufficient and untenable, but which, in the course of the movement, outstrip themselves, necessitate further inroads upon the old social order, and are unavoidable as a means of entirely revolutionising the mode of production.
The state is supposed to wither away when nirvana is attained. But Marx and Engels, having already shown themselves to be socially and economically ignorant, thereby reveal their ignorance of human nature. Power is a prize unto itself. And, as the subsequent history of Communism demonstrates amply, those who hold it will use force or coercion (e.g., the incumbent-protection racket known as campaign-finance “reform”) to perpetuate their grip on it.
Of all the frightening parts of the Manifesto, I find what comes next to be most frightening:
These measures [referred to above] will of course be different in different countries. Nevertheless in the most advanced countries, the following will be pretty generally applicable.
1. Abolition of property in land and application of all rents of land to public purposes.
2. A heavy progressive or graduated income tax.
3. Abolition of all right of inheritance.
4. Confiscation of the property of all emigrants and rebels.
5. Centralisation of credit in the hands of the State, by means of a national bank with State capital and an exclusive monopoly.
6. Centralisation of the means of communication and transport in the hands of the State.
7. Extension of factories and instruments of production owned by the State; the bringing into cultivation of waste-lands, and the improvement of the soil generally in accordance with a common plan.
8. Equal liability of all to labour. Establishment of industrial armies, especially for agriculture.
9. Combination of agriculture with manufacturing industries; gradual abolition of the distinction between town and country, by a more equable distribution of the population over the country.
10. Free education for all children in public schools…. Combination of education with industrial production, &c., &c.
Those measures that haven’t been fully enacted in the United States have either been partially enacted or are taken seriously by leading politicians and a goodly fraction of the populace.
The first sentence of item 10 may seem unexceptionable, but the next sentence (in the quotation) betrays it; that is, education is to be the means of inculcating Communism. Which is exactly the model of public education in America, which began as an anti-Catholic measure and has evolved into a machine of leftist indoctrination.
As for the entire list, it matters not whether the ideas come from Marx and Engels or “enlightened” American politicians. Their thrust is the destruction of liberty — and, with it, prosperity. The frightening thing is that the American politicians who advance the ideas and the public that swallows them cannot see (or don’t care about) the consequences. This, of course, can be counted as a “victory” for public education in America.
The essence of it is captured in the first verse of “Dem Bones”:
Toe bone connected to the foot bone
Foot bone connected to the heel bone
Heel bone connected to the ankle bone
Ankle bone connected to the shin bone
Shin bone connected to the knee bone
Knee bone connected to the thigh bone
Thigh bone connected to the hip bone
Hip bone connected to the back bone
Back bone connected to the shoulder bone
Shoulder bone connected to the neck bone
Neck bone connected to the head bone …
The final line gets to the bottom of it (if you are a deist or theist):
… Now hear the word of the Lord.
But belief in the connectedness of everything in the universe doesn’t depend on one’s cosmological views. In fact, a strict materialist who holds that the universe “just is” will be obliged to believe in universal connectedness because everything is merely physical (or electromagnetic), and one thing touches other things, which touch other things, ad infinitum. (The “touching” may be done by light.)
Connectedness isn’t necessarily causality; it can be mere observation. Though observation — which involves the electromagnetic spectrum — is thought to be causal with respect to sub-atomic particles. As I put it here:
There’s no question that [a] particle exists independently of observation (knowledge of the particle’s existence), but its specific characteristic (quantum state) is determined by the act of observation. Does this mean that existence of a specific kind depends on knowledge? No. It means that observation determines the state of the particle, which can then be known.
I should have been clear about the meaning of “determine”, as used above. In my view it isn’t that observation causes the quantum state that is observed. Rather, observation measures (determines) the quantum state at the instant of measurement. Here’s an illustration of what I mean:
A die is rolled. Its “quantum state” is determined (measured) when it stops rolling and is readily observed. But the quantum state isn’t caused by the act of observation. In fact, the quantum state can be observed (determined, measured) — but not caused — at any point while the die is rolling by viewing it, sufficiently magnified, with the aid of a high-speed camera.
Connectedness can also involve causality, of course. The difficult problem — addressed at the two links in the opening paragraph — is sorting out causal relationships given so much connectedness. Another term for the problem is “causal density”, which leads to spurious findings:
When there are many factors that have an impact on a system, statistical analysis yields unreliable results. Computer simulations give you exquisitely precise unreliable results. Those who run such simulations and call what they do “science” are deceiving themselves.
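The point about causal density can be illustrated with a toy simulation (a hypothetical example, not drawn from any study cited here): regress a purely random outcome against many purely random “factors”, and some of them will look statistically “significant” by chance alone.

```python
import math
import random

random.seed(42)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n_obs, n_factors = 30, 100

# An outcome that is pure noise: nothing "causes" it.
outcome = [random.gauss(0, 1) for _ in range(n_obs)]

# Critical |r| for p < 0.05 (two-tailed) with n = 30 is about 0.361.
CRITICAL_R = 0.361

# Test 100 random, causally unrelated factors against the outcome.
spurious = sum(
    1
    for _ in range(n_factors)
    if abs(pearson_r([random.gauss(0, 1) for _ in range(n_obs)], outcome)) > CRITICAL_R
)
print(f"{spurious} of {n_factors} random factors look 'significant'")
```

With 100 candidate factors and a 5-percent significance threshold, roughly five spurious “findings” are expected even though every factor is noise; the more factors in play, the more false positives.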
Is it any wonder that “scientists” tell us that one thing or another is bad for us, only to tell us at a later date that it isn’t bad for us and may even be good for us? This is a widely noted phenomenon (though insufficiently documented). But its implications for believing in, say, anthropogenic global warming seem to be widely ignored — most unfortunately.
Yesterday I argued — based on experience — that the GAO’s finding about the illegality of the withholding of military aid to Ukraine was politically expedient. Today, along comes Josh Blackman of The Volokh Conspiracy with this take:
The decision does not conclude that President Trump violated the ICA with respect to the withholding of funds. GAO did not, and indeed could not, find that President Trump personally violated the ICA with respect to withholding the funds….
[D]id GAO provide any evidence to show that President Trump personally directed his subordinates to withhold the funds? I hesitate before concluding that the President ordered his subordinates to violate the law, when there is a dispute about what exactly the law requires. Several people have cited Mick Mulvaney’s press conference, wherein he relayed a conversation with President Trump:
Mulvaney: “(Trump’s) like, ‘Look, this is a corrupt place. I don’t want to send them a bunch of money and have them waste it, have them spend it, have them use it to line their own pockets. Plus, I’m not sure that the other European countries are helping them out either.’
This is not evidence that Trump ordered his subordinates to withhold any funding. Trump merely expressed an opinion that he didn’t want to send money to Ukraine, which he viewed as a corrupt country….
Third, did GAO provide any evidence to show that President Trump directed his subordinates to deliberately violate the ICA? This question is premised on a disputed legal issue: was the withholding of certain funds, for some period of time, a violation of the ICA….
Fourth, did GAO find that President Trump violated the Constitution’s Take Care Clause? No. The decision states, “Faithful execution of the law does not permit the President to substitute his own policy priorities for those that Congress has enacted into law.” Did Trump violate the Take Care Clause? GAO will not say, but they are all too happy to insinuate a constitutional ruling. This drive-by dictum is entirely unsupported.
There are … unresolved factual questions: (1) what precisely the President did, (2) what he intended to do, (3) when did he do it, (4) what were the consequences (if any) of those decisions, and finally, (5) what were the likely consequences to follow when those decisions were taken. But GAO has not come close to resolving these factual issues or analyzing the complex legal issues in this situation. And it was truly reckless for GAO to suggest otherwise. They offered only a threadbare constitutional analysis, during this heated and polarized time, hours before the impeachment trial began.
Case (against GAO) closed.
I ended that post with this:
Every line of human endeavor reaches a peak, from which decline is sure to follow if the things that caused it to peak are mindlessly rejected for the sake of novelty (i.e., rejection of old norms just because they are old). This is nowhere more obvious than in the arts.
It should be equally obvious to anyone who takes an objective look at the present state of American society and is capable of comparing it with American society of the 1940s and 1950s. For all of its faults it was a golden age. Unfortunately, most Americans now living (Noah Smith definitely included) are too young and too fixated on material things to understand what has been lost — irretrievably, I fear.
My point is underscored by Annabelle Timsit, writing at Quartz:
The endless stretch of a lazy summer afternoon. Visits to a grandparent’s house in the country. Riding your bicycle through the neighborhood after dark. These were just a few of the revealing answers from more than 400 Twitter users in response to a question: “What was a part of your childhood that you now recognize was a privilege to have or experience?”
That question, courtesy of writer Morgan Jerkins, revealed a poignant truth about the changing nature of childhood in the US: The childhood experiences most valued by people who grew up in the 1970s and 1980s are things that the current generation of kids are far less likely to know.
That’s not a reference to cassette tapes, bell bottoms, Blockbuster movies, and other items popular on BuzzFeed listicles. Rather, people are primarily nostalgic for a youthful sense of independence, connectedness, and creativity that seems less common in the 21st century. The childhood privileges that respondents seemed to appreciate most in retrospect fall into four broad categories:
“Riding my bike at all hours of the day into the evening throughout many neighborhoods without being stopped or asked what I was doing there,” was one Twitter user’s answer to Jerkins’ question. Another commenter was grateful for “summer days & nights spent riding bikes anywhere & everywhere with friends, only needing to come home when the streetlights came on,” while yet another recalled “having a peaceful, free-range childhood.” Countless others cited the freedom to explore—with few restrictions—as a major privilege of their childhood.
For many of today’s children, that privilege is disappearing. American children have less independence and autonomy today than they did a few generations ago. As parents have become increasingly concerned with safety, fewer children are permitted to go exploring beyond the confines of their own backyard. Some parents have even been prosecuted or charged with neglect for letting their children walk or play unsupervised. Meanwhile, child psychologists say that too many children are being ushered from one structured activity to the next, always under adult supervision—leaving them with little time to play, experiment, and make mistakes.
That’s a big problem. Kids who have autonomy and independence are less likely to be anxious, and more likely to grow into capable, self-sufficient adults. In a recent video for The Atlantic, Julie Lythcott-Haims, author of How to Raise an Adult, argues that so-called helicopter parents “deprive kids the chance to show up in their own lives, take responsibility for things and be accountable for outcomes.”
That message seems to be gaining traction. The state of Utah, for example, recently passed a “free-range” parenting law meant to give parents the freedom to send kids out to play on their own.
“Bravo!” to the government of Utah.
Transport yourself back three decades from the 1970s and 1980s to the 1940s and 1950s, when I was a child and adolescent, and the contrast between then and now is even more stark than the contrast noted by Timsit.
And it has a lot to do with the social ruin that has been visited upon America by the spoiled (cosseted) children of capitalism.
Other related posts:
Ghosts of Thanksgiving Past
The Passing of Red Brick Schoolhouses and a Way of Life
An Ideal World
‘Tis the Season for Nostalgia
Another Look into the Vanished Past
Whither (Wither) Classical Liberalism and America?
The latest hit on Trump’s dealings with Ukraine comes from the GAO (Government Accountability Office, formerly known as the General Accounting Office). Chrissy Clark at The Federalist has the story:
The GAO determined the Trump administration broke the Impoundment Control Act of 1974. This law creates a mechanism for the executive branch to ask Congress to reconsider a funding decision or appropriation that’s already been signed into law.
“Faithful execution of the law does not permit the President to substitute his own policy priorities for those that Congress has enacted into law. [The Office of Management and Budget] withheld funds for a policy reason, which is not permitted under the Impoundment Control act,” said Thomas Armstrong, General Counsel for the GAO.
The report also concluded that the Trump Office of Management and Budget did not provide all the information needed to fulfill their duties.
“[The Office of Management and Budget] and State have failed, as of yet, to provide the information we need to fulfill our duties under the ICA regarding potential impoundments of [Foreign Military Financing] funds,” Armstrong said.
Democratic Senator Chris Van Hollen of Maryland requested the GAO investigation in December.
“This bombshell legal opinion from the independent Government Accountability Office demonstrates, without a doubt, that the Trump administration illegally withheld security assistance from Ukraine,” Van Hollen told the Washington Post.
The key phrase in Van Hollen’s statement is that the GAO’s report is simply an “opinion.” Rachel Semmel, director of communications for the Office of Management and Budget, said the GAO’s decision has no legal bearing on the Trump administration.
“We disagree with GAO’s opinion. OMB uses its apportionment authority to ensure taxpayer dollars are properly spent consistent with the President’s priorities and with the law,” Semmel said.
The GAO has a long history of attempting to stay relevant in the executive branch, even long before the current impeachment of President Trump. The GAO also has a record of flip-flops. They were forced to reverse a faulty opinion on legal grounds when they opposed the reimbursement of federal employee travel costs. They have consistently rushed to insert themselves into the impeachment discussion and the OMB is hopeful they will be forced to reverse their opinion again.
Notably, the Obama administration was also at fault under the GAO’s rules. A 2014 report found Obama broke the law when he failed to notify Congress about the impending prisoner swap of Sgt. Bowe Bergdahl for five Taliban leaders.
There was no legal action taken against the Obama administration for not abiding by the GAO in 2014. That same precedent should be upheld during the Trump administration as well. The OMB and Trump administration have no legal duty to abide by the GAO and their legal opinions.
I can tell you, from experience, that the GAO is once again “attempting to stay relevant”, that is, to get the “right” answer — the answer sought by the politician who requested the investigation. In this case, the “right” answer was a finding against the Trump administration.
Almost 30 years ago, the “right” answer was a finding against a kind of think-tank known as a federally funded research and development center (FFRDC). FFRDCs are an outgrowth of the operations research groups that were formed in World War II. The groups were staffed by civilian scientists, who were given access to highly classified and sensitive information that enabled them to provide timely and objective evaluations of military systems, operations, and tactics. The perceived success of the groups led to their continuation and expansion after the war. Their privileged relationship with the various armed services was denoted by their designation as FFRDCs.
Almost everything government does is either unnecessary or wasteful. The glaring exceptions are the provision of justice (when it is in fact provided) and national defense (ditto). Until the early 1990s, FFRDCs had been an integral and valuable part of the defense effort. Their privileged relationship with the armed services enabled them to do something that I would not ordinarily admit: They were superior to private, for-profit analysis firms.
I emphasize were because something happened in the early 1990s to undermine that privileged relationship. The something was a GAO investigation, instigated by some members of Congress at the behest of the Professional Services Council, a lobbying organization that represents for-profit analysis firms.
As the chief financial officer of an FFRDC sponsored by the Navy, I — like many other FFRDC officials — had to respond to long, probing questionnaires from the GAO’s investigative team. There were subsequent interviews to probe the written answers, and I recall that my interview (in company with a Navy rep) lasted at least a few hours.
I was struck by the GAO team’s studied inability to grasp the value of the privileged relationship between FFRDCs and their sponsors. What was the relationship like? And why was it so valuable to the defense effort? To answer the first question is to answer the second one as well.
FFRDCs were “bucket funded”, meaning that most of their funding came in a lump sum instead of being fought for and won project by project. The continuity of funding had several beneficial results:
- Analysts could be hired for their analytical skills; sales skills were irrelevant. This meant, in practice, that Ph.D. and Master’s degrees in quantitative sciences were more prevalent at FFRDCs than at for-profit firms.
- Analysts could devote their efforts to analysis instead of scrambling for new contracts, often in areas where they had no great expertise (if any).
- Analysts could work on similar problems for long periods of time, developing expertise and accumulating valuable data along the way.
- Clients (the offices for which particular projects were conducted) could call on analysts to address emerging issues without having to go through lengthy contracting processes.
- Funding wasn’t controlled by clients, but by a separate office. Thus there was little if any pressure to get the “right” answers — those that the clients might have preferred.
But all of that was irrelevant to the GAO team, which was bent on emphasizing the obvious fact that FFRDCs didn’t have to compete for contracts. Regardless of the benefits of the arrangement — and the minuscule fraction of the defense budget allocated to FFRDCs — the Defense Department had to change its ways because FFRDCs had an “unfair” advantage over for-profit firms.
The rest is history: FFRDCs were forced into the mold of for-profit firms, and much of the valuable continuity and analyst-client trust was destroyed in the process. “We’re just another contractor now”, is a refrain now commonly heard at my old FFRDC.
And why? Because GAO — certainly not for the first or last time — got the “right” answer, that is, the answer sought by the congressional sponsor of the investigation.
The bottom line of this post is an assessment of the prospects for Trump’s re-election, which have improved markedly in the past four days.
To reach that assessment, I review Trump’s poll numbers, the effect of the impeachment on those numbers, and the economic outlook as reflected in the stock market. I derive the poll numbers from a reliable source: Rasmussen Reports:
Trump’s approval ratings are solidly within the range of the past two years, following the post-honeymoon, media-fueled decline in 2017:
Derived from Rasmussen Reports approval ratings for Trump.
Trump continues to be more popular than Obama was at the same point in his presidency:
Trump’s relatively good standing is also obvious in a straightforward comparison of strong-approval ratings, averaged over 7 days. Note that Trump’s strong-approval rating is as high as it has been since the early, honeymoon weeks of his presidency:
I also compute an enthusiasm ratio, which is the 7-day average of the following ratio: the fraction of likely voters expressing strong approval divided by the fraction of likely voters responding. Here again, Trump holds a marked advantage over Obama:
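The enthusiasm ratio, as defined above, can be sketched in a few lines of Python. The daily shares below are hypothetical placeholders, not actual Rasmussen figures:

```python
def enthusiasm_ratio(strong, responding, window=7):
    """7-day moving average of (strong-approval share / responding share)."""
    daily = [s / r for s, r in zip(strong, responding)]
    return [
        sum(daily[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(daily))
    ]

# Hypothetical daily shares for 10 days of polling (not real data):
strong = [0.38, 0.39, 0.40, 0.41, 0.37, 0.42, 0.43, 0.41, 0.40, 0.39]
responding = [0.95] * 10  # fraction of likely voters responding each day

print(enthusiasm_ratio(strong, responding))
```

Ten days of daily shares yield four overlapping 7-day windows, so the moving average smooths day-to-day polling noise at the cost of a few days’ lag.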
Every week since the first inauguration of Obama, Rasmussen Reports has asked 2,500 likely voters whether they see the country as going in the right direction or being on the wrong track. The following graph shows the ratios of right direction/wrong track for Trump and Obama:
Source: Rasmussen Reports, “Right Direction or Wrong Track“.
The ratio for Trump, after a quick honeymoon start, fell into the same range as Obama’s. It jumped with the passage of the tax cut in December 2017, and remained high after that, until the shutdown. The post-shutdown rebound gave way to a slump that ended in October 2019. The recent rise in the ratio parallels the rise eight years earlier, when Obama was in office, but at a markedly higher level.
The Impeachment Effect
The following graph depicts Trump’s approval ratings, according to Rasmussen Reports, since the onset of the current effort to remove Trump from office by impeachment and trial:
Rasmussen’s polling method covers all respondents (a sample of likely voters) over a span of three days. The gaps represent weekends, when Rasmussen doesn’t publish the results of the presidential approval poll.
The Washington Post broke the story on September 20 about Trump’s July 25 phone conversation with the president of Ukraine. Thus the results for September 16 through September 20 didn’t reflect the effects of the story on the views of Rasmussen’s respondents. Trump’s approval ratings continued to rise after September 20, and peaked on September 24, the day on which the House officially initiated an impeachment inquiry. Trump’s approval ratings bottomed on October 25, but since then — despite much sound and fury in the House, culminating in articles of impeachment — they are about where they were on September 16, given the range of error advertised by Rasmussen (±2.5 percentage points with a 95-percent level of confidence).
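As a rough check on that advertised margin of error (a sketch using the standard worst-case formula for a simple random sample, not necessarily Rasmussen’s exact method), a ±2.5-point margin at 95-percent confidence implies a sample of roughly 1,500 respondents:

```python
import math

# Margin of error for a proportion: moe = z * sqrt(p * (1 - p) / n).
# Worst case is p = 0.5; solve for n given z and moe.
z, moe = 1.96, 0.025
n = (z / moe) ** 2 * 0.25
print(round(n))  # prints 1537
```

That squares with a poll that pools three days of interviews of about 500 likely voters each.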
Meanwhile, the stock market keeps climbing — a good sign of confidence in Trump’s political survival:
Pluses:

- Trump’s popularity, relative to Obama’s, is high. (See figure 2.)
- Trump’s support is stronger than Obama’s was. (See figures 3 and 4.)
- Voters currently have a rosier view of the state of the nation than they did when Obama was re-elected. (See figure 5.)
- The effort to remove Trump from office by impeachment hasn’t affected his popularity thus far. (See figure 6.)
- The economy continues to grow steadily, and the stock market reflects economic optimism. (See figure 7.)
Minuses: By my reckoning, there are none at the moment.
But there are some wild cards:
- The effect of the impeachment trial on voter sentiment vis-à-vis Trump, which could go in either direction.
- The pace of economic growth and job creation.
- The next phase of trade negotiations with China.
- The possibility of a military confrontation with Iran (or even Russia or China).
I’m sure I’m not the only one who’s saying this, but Trump’s handling of the latest Iran crisis is masterful.
First, he did a necessary but provocative thing by taking out General Soleimani. Retaliation was to be expected, which would have given Trump a good excuse to blast a number of strategic sites in Iran (no, not cultural sites).
But when retaliation — or the first wave of it — fizzled in Iraq, Trump took the high ground and left it to Iran to make the next move. Iran surely knows what will happen if there is further retaliation and it’s damaging to U.S. forces, Americans generally, key allies, or the oil pipeline. Trump’s deliberate refusal to retaliate after the missile attack will make Iran the aggressor. And that will give Trump a green light to slam Iran.
In the meantime, Iran can be squeezed by tighter economic sanctions. That, too, might lure Iran into aggressive action.
And if the ayatollahs decide to bide their time and continue the development of nuclear weapons, they will just invite a preemptive strike. If the strike happens before election 2020, and if Democrats follow their script and side with Iran, they can kiss the election good-bye. Sympathy for Iran is confined to the leftist-academic-media-information technology complex. It has a loud voice but not many votes in the grand scheme of things.
This is the third installment of a long post. I may revise it as I post later parts. The whole will be published as a page, for ease of reference. If you haven’t read “Part I: What Is Economics About?” or “Part II: Economic Principles in Perspective”, you may benefit from doing so before you embark on this part.
What follows isn’t meant to depict the historical evolution of economies and the role of governments in them. The idea, rather, is to contrast various degrees of complexity in economic activity, and the effect of government on that activity — for good and ill.
Communism: The Real Kind
Bands of hunter-gatherers roam widely, or as widely as they can on foot, with young children and old adults (perhaps in their 30s and 40s) in tow. The hunters and gatherers share with other members of the band what they catch, kill, and collect. The stronger members of the band presumably catch, kill, and collect more than their dependents do, and so they probably take more than their “share” because doing so gives them the strength to do what they do for everyone else.
This primitive arrangement — in which producers necessarily consume more than non-producers so that non-producers are able to survive — operates exactly in accordance with the maxim “from each according to his ability; to each according to his needs”. But that is not the system envisaged by Marxists and Millennials, in which the state takes from producers and gives to non-producers because it’s “only fair” and in the spirit of “social justice”. Primitive peoples know on which side their bread is buttered, which is a lot more than can be said for modern “communists”, state socialists, and the parasites who believe that the goose will continue to lay golden eggs after it has been put down.
That’s what happens when people without “skin in the game” (i.e., political theorists, pundits, politicians, bureaucrats, naive students, and layabouts) get their hands on the levers of government power. But I am getting ahead of myself and will have much more to say about it later in this post.
Barter: An Economy of Relatives, Friends, and Acquaintances
Imagine a simple economy in which goods are exchanged through barter. Implicit in the transaction are the existence of property rights and gains from trade: The producers of the goods own them and can trade them to their mutual benefit.
There is, at this point, no money to clutter our understanding of the economy’s workings, though there could be credit. One producer, Arlo, could give some of his goods to another producer, Brenda, with the understanding that Brenda will repay the loan with a specified quantity of goods by a specified time.
Credit can exist in this barter economy because its participants know each other well, either personally or by reputation. Credit is therefore more firmly based on trust and knowledge than it is in economies that are more widely dispersed and involve total strangers, if not enemies. But credit always carries a cost because the creditor (a) usually has other uses for the goods (or money) that he lends, and must forgo those uses by lending, and (b) takes a risk that the borrower won’t repay the loan. The risk may be lower in a barter economy of friends, relatives, and acquaintances than in a dispersed, money-based economy, but it is nevertheless there.
Credit in a barter economy can finance investment. If Arlo is a baker and Brenda is a butter-maker, Arlo could offer to give Brenda additional bread in the future (over and above the amount that she would normally receive for a certain amount of butter) while he rebuilds his oven so that he can produce bread at a faster rate. (Here, we must assume that the capacity of Arlo’s oven is a bottleneck, and that the availability of other resources — flour, for example — is not a constraint.)
Barter, whatever its social advantages — which shouldn’t be overlooked — is cumbersome. Even with the use of central marketplaces, much time and effort is required to arrange, in a timely way, all of the trades necessary to satisfy even a fairly simple menu of wants: food (of various kinds), clothing (of various kinds), construction services (of various kinds), personal-care services (e.g., haircuts) and products (e.g., soap). It is time and effort that could be put to better use in the enjoyment of the fruits of one’s labor and in the production of more goods (in order to enjoy even more fruits).
Then, too, there is the difficulty of saving in a barter economy. Arlo might stockpile bread, for instance, but how much bread can he stockpile before it spoils or loses value because Brenda can’t use as much as Arlo has on hand? Producers of services face more serious problems. For example, how would a barber save haircuts for a rainy day?
A Closed, Money-Based Economy
We are still in a close-knit economy, that is, a closed one. But money now enters the picture. It eases the task of acquiring goods by allowing the purchaser to acquire them at his leisure (subject to the risk of non-delivery, of course). This is called saving, which is also a form of credit. The purchaser of goods (who is also a producer of goods) needn’t trade all of his output for the output of others. He can defer his purchases, thus effectively giving credit to those who buy his goods while he puts off buying theirs.
How does it work? If Arlo makes bread and Brenda makes butter, Arlo, with Brenda’s consent, can give her some bread in exchange for money instead of butter. (Maybe Arlo doesn’t need butter at the moment, and would rather buy it from Brenda at a later date.) Arlo, at one stroke, is accepting money (as a measure of the value of the goods he can purchase in the future) and extending credit to Brenda.
The value of the money, to Arlo, depends on his confidence that Brenda will deliver to him the quantity of butter that he would have received by trading his bread for her butter on the spot. If Arlo is unsure about Brenda’s ability to deliver the desired quantity of butter at a future date, he will ask for the monetary equivalent of additional butter. This is equivalent to the issuance of credit by Arlo to Brenda; that is, he is giving her time in which to produce more butter, and getting a share of the additional output in return.
A money-based economy is, perforce, a credit-based economy. And the value of money depends on the holder’s assessment of his ability to get his money’s worth, so to speak.
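The pricing logic just described — Arlo accepting money as a measure of future butter, and demanding the monetary equivalent of additional butter when his confidence in Brenda’s future delivery is shaky — can be sketched as follows. The function, the prices, and the form of the doubt premium are my illustrative assumptions; only the Arlo/Brenda mechanism comes from the text.

```python
# A hedged sketch of the claim that accepting money is a form of credit:
# Arlo values his bread in butter-equivalents and asks for more money
# when he doubts Brenda's future ability to deliver. All parameters
# are illustrative assumptions.

def money_asked(spot_butter_equivalent, price_of_butter, delivery_confidence):
    """Money Arlo accepts for bread he would otherwise barter on the spot.

    spot_butter_equivalent: butter he'd get in an immediate trade
    price_of_butter: money per unit of butter today
    delivery_confidence: 0..1, Arlo's trust that Brenda will produce
        the butter he wants to buy later. Lower confidence means he
        asks for the monetary equivalent of additional butter.
    """
    premium = (1.0 / delivery_confidence) - 1.0  # extra butter demanded
    return spot_butter_equivalent * (1 + premium) * price_of_butter

# Full confidence: money simply measures the spot trade.
print(money_asked(4.0, 2.0, delivery_confidence=1.0))   # 8.0
# Shaky confidence: Arlo effectively extends credit at a price.
print(money_asked(4.0, 2.0, delivery_confidence=0.8))   # 10.0
```

With full confidence the money price is just the spot trade restated; with less confidence the extra money demanded is the interest on the credit Arlo is extending to Brenda.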
The existence of money enables producers to save a portion of their income in a non-perishable, fungible form. This facilitates investment by, for example, enabling the investing party to subsist on what he can purchase from the money he has saved while turning his time and effort toward improving the way in which he produces his goods, devising new goods that might yield him more income, or even wandering far and wide to seek new buyers for his goods.
Thus money is a beneficial economic instrument — as long as the terms of its use are established by those who actually produce and exchange goods. This includes the “middlemen” (i.e., wholesalers, retailers, bankers, lenders) whose services are sought and valued by producers of other goods. As I will discuss later, outside interference in the creation and valuation of money will distort the terms of trade between producers, causing them to make choices that are less beneficial to them than the choices they would make in the absence of such interference.
In an economy where there is no outside interference in the issuance and valuation of money (and credit), defaults aren’t distorting; that is, they don’t change the “normal” flow of economic activity. Those who give and accept credit do so willingly and after balancing the risks involved (including the possibility of unforeseen calamities) against the gains from trade. Moreover, other “middlemen” known as insurers come to the fore. For a fee, which is paid willingly by the participants in this economy, they absorb the costs of losses from unforeseen calamities (personal injury and illness, fire, flood, etc.).
An Open, Money-Based Economy
An open economy is simply one in which goods are exchanged across territorial boundaries. This kind of exchange is inherently beneficial because it enables all parties to improve their lot by giving them access to a wider range of goods. It also fosters specialization, so that a greater abundance of goods is produced, given available resources. Though inter-territorial trade can be conducted through barter, money obviously facilitates inter-territorial trade, inasmuch as it is (by definition) conducted over a wider area, making direct trades even more difficult than they are within a smaller area.
Inasmuch as government isn’t yet in the picture, there is practically no downside to inter-territorial trade. It is simply an expansion of what has gone before — voluntary exchanges of goods (usually through the medium of money) for the mutual benefit of the parties to the transactions. With government out of the picture, there are less likely to be distortions of the kind that are caused by tariffs and subsidization, both of which are aimed at benefiting the citizens (or elites) of one territory at the expense of persons in other territories.
An Open, Money-Based Economy with Government
It is time to introduce government. I am not suggesting that government is a necessary or inevitable outgrowth of a money-based economy. Government probably came first, in the guise of a tribal leader to whom certain decisions were referred and who was responsible for settling disputes within the tribe and seeing to its defense from outside force.
The point of introducing government here is to highlight its potential economic value, and to draw attention to the ways in which it can destroy economic value — and liberty as well. I must say, at the outset, that government, when it comes to domestic affairs, can do no better than enforce prevailing social norms that not only bind a people but also protect them from each other. Such norms include the prohibition of — and social punishment of — acts that cause harm, including the disruption of economic activity. They may be summarized as acts of force (e.g., murder, battery, theft, and vandalism) and fraud (e.g., lying and deliberate deception). There is a related peace-keeping function that is best performed by a third party, and that is the settlement of civil disputes, which in some cases must be done by government, as a referee of last resort.
The point of government with respect to such acts is to ensure the enforcement and punishment of prohibitions in an even-handed way by a party that is presumed to be impartial. (I won’t get into the many historical deviations from this ideal, but will later address how those deviations might have been minimized.) With the assurance that government will enforce and punish harmful acts, the populace as a whole — including its economic units — can more freely go about the business of life (and business) and spend less time, effort, and money on self-defense. In this way, government can be a boon to an economy, especially one that spans a large and diverse populace of strangers.
Ensuring that the business of business can be conducted freely (within the constraint that otherwise illegal transactions are prohibited and punished) requires the national government to prevent subsidiary governments from erecting barriers to trade between the territories of the subsidiary governments. The national government may, on the other hand, restrict trade between entities inside the nation and entities outside of it, where such restrictions (a) keep dangerous materials and technologies out of the hands of actual or potential enemies or (b) prevent foreign regimes from undermining parts of the national economy by subsidizing foreign producers directly or through tariffs on imports to the foreign country.
Government can also protect the populace (and the business of business) from attacks by outsiders. The ideal way of doing this is to mount a defense that is robust enough to deter such attacks. Failing that, the defense must be robust enough to defeat attacking outsiders in a way that prevents much of the damage that they might otherwise do to the populace and its economic activities.
(The problematic side of peace-keeping, both domestically and against outsiders, is that its costs must be borne in some manner by the people and economic units it protects. Further, those costs must be borne, in many cases, by persons who have some objection to peace-keeping; for example: outright pacifists, bleeding-hearts who are loath to believe that certain classes of human beings are more prone to criminality than others, and yet-to-be-mugged innocents who simply believe the best of everyone. That said, there is no “fair” way to apportion the costs of peace-keeping, but there is a fairer way than is now the case: the imposition of a truly flat tax.)
A government that is limited as outlined above must be subject to several checks if it is to remain limited:
- A written constitution that specifies the powers of the national government and subsidiary governments.
- Onerous provisions for amending the written constitution.
- A judiciary that is empowered to review all governmental actions to ensure their consistency with the written constitution.
- A mechanism for rejecting judicial decisions that are inconsistent with the written constitution.
- Regular elections through which qualified voters pass judgment on government officials.
- The restriction of voting to persons of mature age who have “skin in the game”.
The failure to institute and maintain any of these checks will result, eventually, in a system of government that routinely does more than defend the populace and ensure that the business of business can be conducted freely. In the United States, the lack of oversight of the judiciary and the expansion of the franchise (rather than its restriction) have proved fatal to the otherwise clever design of the original Constitution.
The result is a badly distorted economy, which produces things (or fails to produce them) in accordance with the desires (mostly) of unelected bureaucrats, and redistributes income and wealth (and such antecedents as jobs and university admissions) in accordance with the desires of persons without “skin in the game” (i.e., political theorists, pundits, politicians, bureaucrats, naive students, and layabouts). The economy isn’t only badly distorted, but as a result of myriad government interventions, it produces far less than it would otherwise produce, to the detriment of almost everyone, including the supposed beneficiaries of government interventions.
What I have discussed thus far is microeconomic activity — the actions of individuals and firms that result in the exchange of economic goods, either directly or with the aid of money and credit. I have also addressed the effects of government interventions, but mainly in terms of the microeconomic effects of such interventions.
What I have avoided, except in passing, is the thing called macroeconomics, which is supposed to deal with aggregate economic activity and things that influence it, such as the monetary and fiscal tools wielded by government.
Here is an oft-quoted observation, spuriously attributed to Socrates, about youth:
The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize their teachers.
Even though Socrates didn’t say it, the sentiment has nevertheless been stated and restated since 1907, when the observation was concocted, and probably had been shared widely for decades, and even centuries, before that. I use a form of it when I discuss the spoiled children of capitalism (e.g., here).
Is there something to it? No and yes.
No, because rebelliousness and disrespect for elders and old ways seem to be part of the natural processes of physical and mental maturation.
Not all adolescents and young adults are rebellious and disrespectful. But many rebellious and disrespectful adolescents and young adults carry their attitudes with them through life, even if less obviously than in youth, as they climb the ladders of various callings. The callings that seem to be most attractive to the rebellious are the arts (especially the written, visual, thespian, terpsichorean, musical, and cinematic ones), the professoriate, the punditocracy, journalism, and politics.
Which brings me to the yes answer, and to the spoiled children of capitalism. Rebelliousness, though in some persons never entirely outgrown or suppressed by maturity, will more often be outgrown or suppressed in economically tenuous conditions, the challenges of which almost fully occupy body and mind. (Opinionizers and sophists were accordingly much thinner on the ground in the parlous days of yore.)
However, as economic growth and concomitant technological advances have yielded abundance far beyond the necessities of life for most inhabitants of the Western world, the beneficiaries of that abundance have acquired yet another luxury: the luxury of learning about and believing in systems that, in the abstract, seem to offer vast improvements on current conditions. It is the old adage “Idle hands are the devil’s tools” brought up to date, with “minds” joining “hands” in the devilishness.
Among many bad things that result from such foolishness (e.g., the ascendancy of ideologies that crush liberty and, ironically, economic growth) is the loss of social cohesion. I was reminded of this by Noah Smith’s fatuous article, “The 1950s Are Greatly Overrated”.
Smith is an economist who blogs and writes an opinion column for Bloomberg News. My impression of him is that he is a younger version of Paul Krugman, the former economist who has become a left-wing whiner. The difference between them is that Krugman remembers the 1950s fondly, whereas Smith does not.
[The nostalgia] is probably rooted in golden memories of his childhood in a prosperous community, though he retrospectively supplies an economic justification. The 1950s were (according to him) an age of middle-class dominance before the return of the Robber Barons who had been vanquished by the New Deal. This is zero-sum economics and class warfare on steroids — standard Krugman fare.
Smith, a mere toddler relative to Krugman and a babe in arms relative to me, takes a dim view of the 1950s:
For all the rose-tinted sentimentality, standards of living were markedly lower in the ’50s than they are today, and the system was riddled with vast injustice and inequality.
Women and minorities are less likely to have a wistful view of the ’50s, and with good reason. Segregation was enshrined in law in much of the U.S., and de facto segregation was in force even in Northern cities. Black Americans, crowded into ghettos, were excluded from economic opportunity by pervasive racism, and suffered horrendously. Even at the end of the decade, more than half of black Americans lived below the poverty line:
Women, meanwhile, were forced into a narrow set of occupations, and few had the option of pursuing fulfilling careers. This did not mean, however, that a single male breadwinner was always able to provide for an entire family. About a third of women worked in the ’50s, showing that many families needed a second income even if it defied the gender roles of the day:
For women who didn’t work, keeping house was no picnic. Dishwashers were almost unheard of in the 1950s, few families had a clothes dryer, and fewer than half had a washing machine.
But even beyond the pervasive racism and sexism, the 1950s wasn’t a time of ease and plenty compared to the present day. For example, by the end of the decade, even after all of that robust 1950s growth, the white poverty rate was still 18.1%, more than double that of the mid-1970s:
Nor did those above the poverty line enjoy the material plenty of later decades. Much of the nation’s housing stock in the era was small and cramped. The average floor area of a new single-family home in 1950 was only 983 square feet, just a bit bigger than the average one-bedroom apartment today.
To make matters worse, households were considerably larger in the ’50s, meaning that big families often had to squeeze into those tight living spaces. Those houses also lacked many of the things that make modern homes comfortable and convenient — not just dishwashers and clothes dryers, but air conditioning, color TVs and in many cases washing machines.
And those who did work had to work significantly more hours per year. Those jobs were often difficult and dangerous. The Occupational Safety and Health Administration wasn’t created until 1971. As recently as 1970, the rate of workplace injury was several times higher than now, and that number was undoubtedly even higher in the ’50s. Pining for those good old factory jobs is common among those who have never had to stand next to a blast furnace or work on an unautomated assembly line for eight hours a day.
Outside of work, the environment was in much worse shape than today. There was no Environmental Protection Agency, no Clean Air Act or Clean Water Act, and pollution of both air and water was horrible. The smog in Pittsburgh in the 1950s blotted out the sun. In 1952 the Cuyahoga River in Cleveland caught fire. Life expectancy at the end of the ’50s was only 70 years, compared to more than 78 today.
So life in the 1950s, though much better than what came before, wasn’t comparable to what Americans enjoyed even two decades later. In that space of time, much changed because of regulations and policies that reduced or outlawed racial and gender discrimination, while a host of government programs lowered poverty rates and cleaned up the environment.
But on top of these policy changes, the nation benefited from rapid economic growth both in the 1950s and in the decades after. Improved production techniques and the invention of new consumer products meant that there was much more wealth to go around by the 1970s than in the 1950s. Strong unions and government programs helped spread that wealth, but growth is what created it.
So the 1950s don’t deserve much of the nostalgia they receive. Though the decade has some lessons for how to make the U.S. economy more equal today with stronger unions and better financial regulation, it wasn’t an era of great equality overall. And though it was a time of huge progress and hope, the point of progress and hope is that things get better later. And by most objective measures they are much better now than they were then.
See? A junior Krugman who sees the same decade as a glass half-empty instead of half-full.
In the end, Smith admits the irrelevance of his irreverence for the 1950s when he says that “the point of progress and hope is that things get better later.” In other words, if there is progress the past will always look inferior to the present. (And, by the same token, the present will always look inferior to the future when it becomes the present.)
I could quibble with some of Smith’s particulars (e.g., racism may be less overt than it was in the 1950s, but it still boils beneath the surface, and isn’t confined to white racism; stronger unions and stifling financial regulations hamper economic growth, which Smith prizes so dearly). But I will instead take issue with his assertion, which precedes the passages quoted above, that “few of those who long for a return to the 1950s would actually want to live in those times.”
It’s not that anyone yearns for a return to the 1950s as it was in all respects, but for a return to the 1950s as it was in some crucial ways:
There is … something to the idea that the years between the end of World War II and the early 1960s were something of a Golden Age…. But it was that way for reasons other than those offered by Krugman [and despite Smith’s demurrer].
Civil society still flourished through churches, clubs, civic associations, bowling leagues, softball teams and many other voluntary organizations that (a) bound people and (b) promulgated and enforced social norms.
Those norms proscribed behavior considered harmful — not just criminal, but harmful to the social fabric (e.g., divorce, unwed motherhood, public cursing and sexuality, overt homosexuality). The norms also prescribed behavior that signaled allegiance to the institutions of civil society (e.g., church attendance, veterans’ organizations), thereby helping to preserve them and the values that they fostered.
Yes, it was an age of “conformity”, as sneering sophisticates like to say, even as they insist on conformity to reigning leftist dogmas that are destructive of the social fabric. But it was also an age of widespread mutual trust, respect, and forbearance.
Those traits, as I have said many times (e.g., here) are the foundations of liberty, which is a modus vivendi, not a mystical essence. The modus vivendi that arises from the foundations is peaceful, willing coexistence and its concomitant: beneficially cooperative behavior — liberty, in other words.
The decade and a half after the end of World War II wasn’t an ideal world of utopian imagining. But it approached a realizable ideal. That ideal — for the nation as a whole — has been put beyond reach by the vast, left-wing conspiracy that has subverted almost every aspect of life in America.
What happened was the 1960s — and its long aftermath — which saw the rise of capitalism’s spoiled children (of all ages), who have spat on and shredded the very social norms that in the 1940s and 1950s made the United States of America as united as it would ever be. Actual enemies of the nation — communists — were vilified and ostracized, and that’s as it should have been. And people weren’t banned and condemned by “friends”, “followers”, Facebook, Twitter, etc. etc., for the views that they held. Not even on college campuses, on radio and TV shows, in the print media, or in Hollywood movies.
What do the spoiled children have to show for their rejection of social norms — other than economic progress that is actually far less robust than it would have been were it not for the interventions of their religion-substitute, the omnipotent central government? Well, omnipotent at home and impotent (or drastically weakened) abroad, thanks to rounds of defense cuts and perpetual hand-wringing about what the “world” might think or some militarily inferior opponents might do if the U.S. government were to defend Americans and protect their interests abroad.
The list of the spoiled children’s “accomplishments” is impossibly long to recite here, so I will simply offer a very small sample of things that come readily to mind:
California wildfires caused by misguided environmentalism.
The excremental wasteland that is San Francisco. (And Blue cities, generally.)
Flight from California wildfires, high taxes, excremental streets, and anti-business environment.
The killing of small businesses, especially restaurants, by imbecilic Blue-State minimum wage laws.
The killing of businesses, period, by oppressive Blue-State regulations.
The killing of jobs for people who need them the most, by ditto and ditto.
Bloated pension schemes for Blue-State (and city) employees, which are bankrupting those States (and cities) and penalizing their citizens who aren’t government employees.
The hysteria (and even punishment) that follows from drawing a gun or admitting gun ownership.
The idea that men can become women and should be allowed to compete with women in athletic competitions because the men in question have endured some surgery and taken some drugs.
The idea that it doesn’t and shouldn’t matter to anyone that a self-identified “woman” uses women’s rest-rooms where real women and girls become prey for prying eyes and worse.
Mass murder on a Hitlerian-Stalinist scale in the name of a “woman’s right to choose”, when she made that choice by (in almost every case) engaging in consensual sex.
Disrespect for the police and military personnel who keep them safe in their cosseted existences.
Applause for attacks on the same.
Applause for America’s enemies, which the delusional, spoiled children won’t recognize as their enemies until it’s too late.
Longing for impossible utopias (e.g., “true” socialism) because they promise what is actually impossible in the real world — and result in actual dystopias (e.g., the USSR, Cuba, Britain’s National Health Service).
Noah Smith is far too young to remember an America in which such things were almost unthinkable — rather than routine. People then didn’t have any idea how prosperous they would become, or how morally bankrupt and divided.
Every line of human endeavor reaches a peak, from which decline is sure to follow if the things that caused it to peak are mindlessly rejected for the sake of novelty (i.e., rejection of old norms just because they are old). This is nowhere more obvious than in the arts.
It should be equally obvious to anyone who takes an objective look at the present state of American society and is capable of comparing it with American society of the 1940s and 1950s. For all of its faults, it was a golden age. Unfortunately, most Americans now living (Noah Smith definitely included) are too young and too fixated on material things to understand what has been lost — irretrievably, I fear.
I was going to append a list of related posts, but the list would be so long that I can only refer you to “Favorite Posts” — especially those listed in the following sections:
I. The Academy, Intellectuals, and the Left
II. Affirmative Action, Race, and Immigration
IV. Conservatism and Other Political Philosophies
V. The Constitution and the Rule of Law
VI. Economics: Principles and Issues
VIII. Infamous Thinkers and Political Correctness
IX. Intelligence and Psychology
XI. Politics, Politicians, and the Consequences of Government
XII. Science, Religion, and Philosophy
XIII. Self-Ownership (abortion, euthanasia, marriage, and other aspects of the human condition)
XIV. War and Peace
In “The Subtle Authoritarianism of the ‘Liberal Order’”, I take on the “liberals” of all parties who presume to know what’s best for all of us, and are bent on making it so through the power of the state. I also had in mind, but didn’t discuss, the smug “liberals” who have long presided over U.S. foreign policy.
One of the smuggies whom I most despise for his conduct of foreign policy is the sainted George H.W. Bush. War hero or not, he failed to protect America and its interests on two notable occasions during his presidency.
The first occasion came during the Gulf War. I have this to say about it in “The Modern Presidency: From TR to DJT”:
The main event of Bush’s presidency was the Gulf War of 1990-1991. Iraq, whose ruler was Saddam Hussein, invaded the small neighboring country of Kuwait. Kuwait produces and exports a lot of oil. The occupation of Kuwait by Iraq meant that Saddam Hussein might have been able to control the amount of oil shipped to other countries, including Europe and the United States. If Hussein had been allowed to control Kuwait, he might have moved on to Saudi Arabia, which produces much more oil than Kuwait. President Bush asked Congress to approve military action against Iraq. Congress approved the action, although most Democrats voted against giving President Bush authority to defend Kuwait. The war ended in a quick defeat for Iraq’s armed forces. But President Bush decided not to allow U.S. forces to finish the job and end Saddam Hussein’s reign as ruler of Iraq.
And the rest is a long, sad history of what probably wouldn’t have happened in 2003 and the years since then.
What I didn’t appreciate when I wrote about Bush’s misadventure in Iraq was his utter fecklessness as the Soviet Union was collapsing. I learned about it from Vladimir Bukovsky‘s Judgment in Moscow: Soviet Crimes and Western Complicity. Bukovsky is the “I” in the following passages from chapter 6 of the book:
George Bush and his Secretary of State Jim Baker … outdid everyone [including Margaret Thatcher and Ronald Reagan], [in] opposing the inevitable disintegration of the USSR until the very last day.
“Yes, I think I can trust Gorbachev,”—said George Bush to Time magazine just when Gorbachev was beginning to lose control and was tangled hopelessly in his own lies—“I looked him in the eye, I appraised him. He was very determined. Yet there was a twinkle. He is a guy quite sure of what he is doing. He has got a political feel.” [Like father, like son.]
It is notable that this phrase is illogical: if your opponent “believes deeply in what he is doing” does not necessarily mean that he is trustworthy. After all, Hitler also “believed deeply in what he was doing.” But the thought that their aims were diametrically opposed did not enter George Bush’s head. It is not surprising that with such presidential perspicacity, their top-level meeting in Malta (2-3 December 1989) was strongly reminiscent of a second Yalta: in any case, after this the US Department of State invariably maintained that the growing Soviet pressure on the Baltics was “an internal USSR matter.” Even two months prior to the collapse of the Soviet Union Bush, on a visit to Kiev, exhorted Ukraine not to break away.
The extent to which Bush’s administration did not understand the Soviet games in Europe is clear from its position on the reunification of Germany. Secretary of State Baker, who hurried to Berlin immediately after the fall of the Wall, evaluated this event as a demonstration of Gorbachev’s “remarkable realism. To give President Gorbachev his due, he was the first Soviet leader to have the daring and foresight to allow the revocation of the policy of repressions in Eastern Europe.”
And possibly in gratitude for this, Baker’s main interest was to respect the “lawful concern” of his eastern partner by slowing down the process of reunification by all means [quoting Baker:]
In the interest of overall stability in Europe, the move toward reunification must be of a peaceful nature, it must be gradual, and part of a step-by-step process.
The plan he proposed was a total disaster, for it corresponded completely to the Soviet scheme of the creation of a “common European home”: it was envisaged at first to reinforce the European Community, the Helsinki process and promote the further integration of Europe. All this, naturally, without undue haste but “step by step” over the passage of years [again quoting Baker:]
As these changes proceed, as they overcome the division of Europe, so too will the divisions of Germany and Berlin be overcome in peace and freedom.
Furthermore, even without consulting Bonn, he rushed to embrace the Kremlin’s new puppets in Eastern Germany in order to signal “US intentions to try to improve the credibility of the East German political leadership and to forestall a power vacuum that could trigger a rush to unification.” And this was in January 1990, i.e. shortly before the elections in the GDR that actually solved the key question: would Germany reunite on Soviet conditions, or Western ones? Luckily the East Germans were less “patient” and smarter: knowing well what they were dealing with, they voted for immediate reunification, ignoring Baker and the pressure of the whole world.
Why, then, did the West and the USA with its seemingly conservative, even anti-communist administration, yearn for this “stabilization” or, to put it more simply, salvation of the Soviet regime?
Let us allow that Baker was ignorant, pompous and big-headed, dreaming of some kind of global structures “from Vancouver to Vladivostok”, of which he would be the architect (“the Baker doctrine”). I remember at one press-conference I even suggested introducing a unit of measurement for political brainlessness—one baker (the average man in the street would be measured in millibakers). At the very height of the bloody Soviet show in Bucharest at Christmas in 1989, he stated that “They are attempting to pull off the yoke of a very oppressive and repressive dictatorship. So I think that we would be inclined probably to follow the example of France, who today has said that if the Warsaw Pact felt it necessary to intervene on behalf of the opposition, that it would support that action.” The new pro-Soviet policy of the USA after the top-level meeting in Malta he explained by saying that “the Soviet Union has switched sides, from that of oppression and dictatorships to democracy and change.” This was said at the moment when the Soviet army was smashing the democratic opposition in Baku, killing several hundred people (which Baker also “treated with understanding”). But Baker was not alone, and this cannot be explained away by sheer stupidity. That is the tragedy, that such an idiotic position was shared by practically all Western governments, including the conservative ones.
Baker and Bush, what a team.
America’s enemies will do what they will do, whether our “leaders” are nice to them or confront them. And when they are confronted forcefully (and even forcibly), they are more likely to be deterred (and even prevented) from acting against America.
For most of the past century, U.S. foreign policy has been run by smug “liberals” who have projected their own feelings onto the likes of Hitler, Stalin, Mao, Ho, Putin, Saddam, and the ayatollahs. And where has it landed us? Scrambling from behind to win in World War II, on the defensive against Communist expansion, losing or failing to win wars against vastly inferior enemies, and giving our enemies time (and money) in which to arm themselves to attack our overseas interests and even our homeland. This tragic history has been abetted by hand-wringing from the usual suspects in the academy, the media, the foreign-policy tea-leaf-reading-signal-sending society, the left generally (though I am being redundant), and “liberals” of all political persuasions who are feckless to the core when it comes to dealing with domestic and foreign thugs.
Enough! I hope and believe that’s what President Trump just said, in effect, when he authorized the killing of Iran’s General Soleimani.
A Grand Strategy for the United States
The Folly of Pacifism
Transnationalism and National Defense
The Folly of Pacifism, Again
September 20, 2001: Hillary Clinton Signals the End of “Unity”
Patience as a Tool of Strategy
The War on Terror, As It Should Have Been Fought
The Cuban Missile Crisis, Revisited
Preemptive War and Iran
Some Thoughts and Questions about Preemptive War
Defense as an Investment in Liberty and Prosperity
The Barbarians Within and the State of the Union
The World Turned Upside Down
Utilitarianism and Torture
Defense Spending: One More Time
The President’s Power to Kill Enemy Combatants
My Defense of the A-Bomb
LBJ’s Dereliction of Duty
Terrorism Isn’t an Accident
The Ken Burns Apology Tour Continues
Planning for the Last War
A Rearview Look at the Invasion of Iraq and the War on Terror
Preemptive War Revisited
It’s a MAD, MAD, MAD, MAD World
The Folly of Pacifism (III)
“MAD, Again”: A Footnote
More MADness: Mistaking Bureaucratic Inertia for Strategy
World War II As an Aberration
Reflections on the “Feel Good” War
John O. McGinnis, in “The Waning Fortunes of Classical Liberalism”, bemoans the state of the ideology which was born in the Enlightenment, came to maturity in the writings of J.S. Mill, had its identity stolen by modern “liberalism”, and was reborn (in the U.S.) as the leading variety of libertarianism (i.e., minarchism). McGinnis says, for example, that
the greatest danger to classical liberalism is the sharp left turn of the Democratic Party. This has been the greatest ideological change of any party since at least the Goldwater revolution in the Republican Party more than a half a century ago….
It is certainly possible that such candidates [as Bernie Sanders, Elizabeth Warren, and Pete Buttigieg] will lose to Joe Biden or that they will not win against Trump. But they are transforming the Democratic Party just as Goldwater did the Republican Party. And the Democratic Party will win the presidency at some time in the future. Recessions and voter fatigue guarantee rotation of parties in office….
Old ideas of individual liberty are under threat in the culture as well. On the left, identity politics continues its relentless rise, particularly on university campuses. For instance, history departments, like that at my own university, hire almost exclusively those who promise to impose a gender, race, or colonial perspective on the past. The history that our students hear will be one focused on the West’s oppression of the rest rather than the reality that its creation of the institutions of free markets and free thought has brought billions of people out of poverty and tyranny that was their lot before….
And perhaps most worrying of all, both the political and cultural move to the left has come about when times are good. Previously, pressure on classical liberalism most often occurred when times were bad. The global trend to more centralized forms of government and indeed totalitarian ones in Europe occurred in the 1920s and 1930s in the midst of a global depression. The turbulent 1960s with its celebration of social disorder came during a period of hard economic times. Moreover, in the United States, young men feared they might be killed in faraway land for little purpose.
But today the economy is good, the best it has been in at least a decade. Unemployment is at a historical low. Wages are up along with the stock market. No Americans are dying in a major war. And yet both here and abroad parties that want to fundamentally shackle the market economy are gaining more adherents. If classical liberalism seems embattled now, its prospects are likely far worse in the next economic downturn or crisis of national security.
McGinnis is wrong about the 1960s being “a period of hard economic times” — in America, at least. The business cycle that began in 1960 and ended in 1970 produced the second-highest rate of growth in real GDP since the end of World War II. (The 1949-1954 cycle produced the highest rate of growth.)
But in being wrong about that non-trivial fact, McGinnis inadvertently points to the reason that “the political and cultural move to the left has come about when times are good”. The reason is symbolized by the main cause of social disorder in the 1960s (and into the early 1970s), namely, that “young men feared they might be killed in faraway land for little purpose”.
The craven behavior of supposedly responsible adults like LBJ, Walter Cronkite, Clark Kerr, and many other well-known political, media, educational, and cultural leaders — who allowed themselves to be bullied by essentially selfish protests against the Vietnam War — revealed the greatest failing of the so-called greatest generation: a widespread failure to inculcate personal responsibility in their children. The same craven behavior legitimated the now-dominant tool of political manipulation: massive, boisterous, emotion-laden appeals for this, that, and the other privilege du jour — appeals that left-wing politicians encourage and often lead; appeals that nominal conservatives often accede to rather than seem “mean”.
The “greatest” generation spawned the first generation of the spoiled children of capitalism:
The rot set after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know the similar hardships.
As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it. And, sadly, they passed their principles, if one may use the term so loosely, down the generations to the point where young people today are scarcely worth using for fertilizer.
In 1919, or 1929, or especially 1939, the adolescents of 1969 would have had neither the leisure nor the money to create the Woodstock Nation. But mommy and daddy shelled out because they didn’t want their little darlings to be caught short, and consequently their little darlings became the worthless whiners who voted for people like Bill Clinton and Barack Obama, with results as you see them. Now that history is catching up to them, a third generation of losers can think of nothing better to do than camp out on Wall Street in hopes that the Cargo will suddenly begin to arrive again.
Good luck with that.
[From “The Spoiled Children of Capitalism”, posted in October 2011 at Dyspepsia Generation but no longer available there.]
I have long shared that assessment of the Boomer generation, and subscribe to the view that the rot set in after World War II and became rampant after 1963, when the post-World War II children of the “greatest generation” came of age.
Which brings me to Bryan Caplan’s post, “Poverty, Conscientiousness, and Broken Families”. Caplan — who is all wet when it comes to pacifism and libertarianism — usually makes sense when he describes the world as it is rather than as he would like it to be. He writes:
[W]hen leftist social scientists actually talk to and observe the poor, they confirm the stereotypes of the harshest Victorian. Poverty isn’t about money; it’s a state of mind. That state of mind is low conscientiousness.
Case in point: Kathryn Edin and Maria Kefalas’s Promises I Can Keep: Why Poor Women Put Motherhood Before Marriage. The authors spent years interviewing poor single moms. Edin actually moved into their neighborhood to get closer to her subjects. One big conclusion:
Most social scientists who study poor families assume financial troubles are the cause of these breakups [between cohabitating parents]… Lack of money is certainly a contributing cause, as we will see, but rarely the only factor. It is usually the young father’s criminal behavior, the spells of incarceration that so often follow, a pattern of intimate violence, his chronic infidelity, and an inability to leave drugs and alcohol alone that cause relationships to falter and die.
Conflicts over money do not usually erupt simply because the man cannot find a job or because he doesn’t earn as much as someone with better skills or education. Money usually becomes an issue because he seems unwilling to keep at a job for any length of time, usually because of issues related to respect. Some of the jobs he can get don’t pay enough to give him the self-respect he feels he needs, and others require him to get along with unpleasant customers and coworkers, and to maintain a submissive attitude toward the boss.
These passages focus on low male conscientiousness, but the rest of the book shows it’s a two-way street. And even when Edin and Kefalas are talking about men, low female conscientiousness is implicit. After all, conscientious women wouldn’t associate with habitually unemployed men in the first place – not to mention alcoholics, addicts, or criminals.
Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs”. It will be the bane of the Gen Yers who do the same thing. But, as usual, “society” will be expected to pick up the tab, with food stamps, subsidized housing, drug rehab programs, Medicaid, and so on.
Before the onset of America’s welfare state in the 1930s, there were two ways to survive: work hard or accept whatever charity came your way. And there was only one way for most persons to thrive: work hard. That all changed after World War II, when power-lusting politicians sold an all-too-willing-to-believe electorate a false and dangerous bill of goods, namely, that government is the source of prosperity — secular salvation. It is not, and never has been.
McGinnis is certainly right about the decline of classical liberalism and probably right about the rise of leftism. But why is he right? Leftism will continue to ascend as long as the children of capitalism are spoiled. Classical liberalism will continue to wither because it has no moral center. There is no there there to combat the allure of “free stuff”.
Scott Yenor, writing in “The Problem with the ‘Simple Principle’ of Liberty”, makes a point about J.S. Mill’s harm principle — the heart of classical liberalism — that I have made many times. Yenor begins by quoting the principle:
The sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. . . . The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. . . . The only part of the conduct of anyone, for which he is amenable to society, is that which concerns others. In the part that merely concerns himself, his independence is, of right, absolute. Over himself, over his own body and mind, the individual is sovereign.
This is the foundational principle of classical liberalism (a.k.a. minarchistic libertarianism), and it is deeply flawed, as Yenor argues (successfully, in my view). He ends with this:
[T]he simple principle of [individual] liberty undermines community and compromises character by compromising the family. As common identity and the family are necessary for the survival of liberal society—or any society—I cannot believe that modes of thinking based on the “simple principle” alone suffice for a governing philosophy. The principle works when a country has a moral people, but it doesn’t make a moral people.
Conservatism, by contrast, is anchored in moral principles, which are reflected in deep-seated social norms, at the core of which are religious norms — a bulwark of liberty. But principled conservatism (as opposed to the attitudinal kind) isn’t a big seller in this age of noise:
I mean sound, light, and motion — usually in combination. There are pockets of serenity to be sure, but the amorphous majority wallows in noise: in homes with blaring TVs; in stores, bars, clubs, and restaurants with blaring music, TVs, and light displays; in movies (which seem to be dominated by explosive computer graphics), in sports arenas (from Olympic and major-league venues down to minor-league venues, universities, and schools); and on and on….
The prevalence of noise is telling evidence of the role of mass media in cultural change. Where culture is “thin” (the vestiges of the past have worn away) it is susceptible of outside influence…. Thus the ease with which huge swaths of the amorphous majority were seduced, not just by noise but by leftist propaganda. The seduction was aided greatly by the parallel, taxpayer-funded efforts of public-school “educators” and the professoriate….
Thus did the amorphous majority bifurcate. (I locate the beginning of the bifurcation in the 1960s.) Those who haven’t been seduced by leftist propaganda have instead become resistant to it. This resistance to nanny-statism — the real resistance in America — seems to be anchored by members of that rapidly dwindling lot: adherents and practitioners of religion, especially between the two Left Coasts.
That they are also adherents of traditional social norms (e.g., marriage can only be between a man and a woman), upholders of the Second Amendment, and (largely) “blue collar” makes them a target of sneering (e.g., Barack Obama called them “bitter clingers”; Hillary Clinton called them “deplorables”)….
[But as long] as a sizeable portion of the populace remains attached to traditional norms — mainly including religion — there will be a movement in search of and in need of a leader [after Trump]. But the movement will lose potency if such a leader fails to emerge.
Were that to happen, something like the old, amorphous society might re-form, but along lines that the remnant of the old, amorphous society wouldn’t recognize. In a reprise of the Third Reich, the freedoms of association, speech, and religion would have been bulldozed with such force that only the hardiest of souls would resist going over to the dark side. And their resistance would have to be covert.
Paradoxically, 1984 may lie in the not-too-distant future, not 36 years in the past. When the nation is ruled by one party (guess which one), foot-voting will no longer be possible and the nation will settle into a darker version of the Californian dystopia.
It is quite possible that the elections of 2020 or 2024 will bring about the end of the great experiment in liberty that began in 1776. And with that end, the traces of classical liberalism will all but vanish, along with liberty. Unless something catastrophic shakes the spoiled children of capitalism so hard that their belief in salvation through statism is destroyed. Not just destroyed, but replaced by a true sense of fellowship with other Americans (including “bitter clingers” and “deplorables”) — not the ersatz fellowship with convenient objects of condescension that elicits virtue-signaling political correctness.
This is the second installment of a long post. I may revise it as I post later parts. The whole will be published as a page, for ease of reference. If you haven’t read “Part I: What Is Economics About?”, you may benefit from doing so before you embark on this part.
What Drives Us
Humans are driven by the survival instinct and a host of psychological urges, which vary from person to person. Those urges include but are far from limited to self-aggrandizement (ego), the need for love and friendship, and the need to be in control (which includes the needs to possess things and to control others, both in widely varying degrees). Economic activity, as I have said, excludes matters of love and friendship (though not calculated relationships that may seem like friendship), but aside from those things — which influence personal economic activity (e.g., the need to provide for loved ones) — there are more motivations for economic activity than can be dreamt of by economists. Those motivations are shaped by genes and culture, which are so varied and malleable (in the case of culture) that specific knowledge about them is useful only to the purveyors of particular goods.
Therefore, economists long ago (and wisely) eschewed models of economic behavior that impute particular motivations to economic activity. Instead they said that individuals seek to maximize utility (something like happiness or satisfaction), whatever that might be for particular individuals. Similarly, they said that firms seek to maximize profits, which is easier to quantify because profit is measured in monetary units (dollars in America).
Further, economists used to say that individuals act rationally when they strive to maximize utility. Behavioral economists (e.g., Richard Thaler) have challenged the rationality hypothesis by showing that personal choices are often irrational (in the judgment of the behavioral economist). The case of “saving too little” for retirement is often invoked in support of interventions (including interventions by the state) to “nudge” individuals toward making the “right” choices (in the judgment of the behavioral economist). The behavioral economist would thus impose his own definition of rational behavior (e.g., wealth-maximization) on individuals. This is arrogance in the extreme. All that the early economists meant by rationality was that individuals strive to make choices that advance their particular preferences.
Wealth-maximization is one such preference, but far from the only one. A young worker, for example, may prefer buying a car (that enables him to get to work faster than he could by riding a bus) to saving for his retirement. There are many other objections to the imposition of behavioral economists’ views. The links at the end of “No Tears for Cass Sunstein” (Thaler’s co-conspirator) will lead you to some of them. That post and the posts linked to at the end of it also provide insights into the authoritarian motivations of Thaler, Sunstein, and their ilk.
The Rise of Corporate Irresponsibility
Turning to firms — the providers of goods that satisfy wants — I have to say that the profit-maximization motive has been eroded by the rise of huge firms that are led and managed by bureaucrats rather than inventors, innovators, and entrepreneurs. The ownership of large firms is, in most cases, widely distributed and indirect (i.e., huge blocks of stock are held in diversified mutual-fund portfolios). This makes it possible for top managers (enabled by compliant boards of directors) to adopt policies that harm shareholders’ financial interests for the sake of presenting a “socially responsible” (“woke”) image of the firm to … whom?
The firm’s existing customers aren’t the general public; they are specific segments of the general public, and some of those segments don’t take kindly to public-relations ploys that flout the values that they (the specific segments) hold dear. (Gillette and Dick’s Sporting Goods are recent cases in point.) The “whom” might therefore consist of segments of the public that the firms’ managers hope will buy the firm’s products because of the firm’s pandering, and — more likely — influential figures in business, politics, the arts, the media, etc., whom the managers are eager to impress.
“Social responsibility” fiascos are only part of the picture. Huge, bureaucratic firms are no more efficient in their use of resources to satisfy consumers’ wants than are huge, bureaucratic governments that (at best) provide essential services (defense and justice) but in fact provide services that politicians and bureaucrats deem “needed” in order to buy votes and make work for themselves.
The bottom line here is that the satisfaction of consumers’ wants has been compromised badly. And the combination of government interventions and corporate misfeasance has made the economy far less productive than it could be.
The Flip Side of Economics: Failure to Produce
Economics, therefore, is about the satisfaction of human wants through the production and exchange of goods, given available resources. It is also about the failure to maximize the satisfaction of wants, given available resources, because of government interventions and corporate misfeasance.
The gross underperformance of America’s economy illustrates an important but usually neglected principle of economics: Every decision has an opportunity cost. When you choose to buy a car, for example, you forgo the opportunity to buy something else for the same amount of money. That something else, presumably, would afford you less satisfaction (utility) than the car. Or so the theory goes. But whether it would or wouldn’t isn’t for a behavioral economist to say.
Individuals (and firms) often make choices that they later regret. It’s called learning from experience. But “nudging”, government interventions, and corporate sluggishness reduce the opportunity to learn from experience. (Government interventions and corporate sluggishness also prevent, as I have said, behaviors that are essential to economic vitality: invention, innovation, and entrepreneurship.)
Government interventions also incentivize economically and personally destructive behavior. There are many estimates of the costs of government interventions (e.g., this one and those documented quarterly in Regulation magazine) and a multitude of examples of the personally destructive behavior engendered by government interventions. It is impossible to say which intervention has been the most harmful to the citizenry, but if pressed I would choose the thing broadly called “welfare”, which disincentivizes work and is an important cause of the dissolution of black (welfare-dependent) families, with attendant (and dire) results (educational, occupational, criminal) that bleeding hearts prefer to attribute to “racism”. If not in second place, then high on my list, is the counterproductive response (by government at the prodding of bleeding hearts) to homelessness.
Thus we have yet another principle: the “law” of unintended consequences. Unintended consequences are the things that aren’t meant to happen — but which do happen — because an actor (be it governmental, corporate, or individual) doesn’t think about them (or chooses to minimize or ignore them) while focusing on a particular problem or desire to the exclusion of other problems or desires. Individuals can learn from unintended consequences; governments and, increasingly, corporations are too rule-bound and infested by special interests to do so.
None of what I have said about corporations should be taken as an endorsement of governmental interventions to make them somehow more efficient and responsible. (The law of unintended consequences applies in spades when it comes to government.) The only justification for state action with respect to firms is to keep them from doing things that are inimical to liberty and can’t be rectified by private action. In an extreme case, a business that specializes in murder for hire is (or should be) a target for closure and prosecution. A business that sells a potentially harmful product (e.g., guns, cigarettes) isn’t a valid target of state action because the harmful use of the product is the responsibility of the buyer, product-liability law to the contrary notwithstanding.
What about a business that collaborates (perhaps tacitly) with other businesses or special interests to prevent the expression of views that are otherwise protected by the First Amendment but which are opposed by the managers of the business and their political allies? There are good arguments for a hands-off approach, in that markets — if they are allowed to operate freely — will provide alternatives that allow the expression (and wide circulation) of “objectionable” views. If anti-trust actions against purveyors of oil and steel (to take two examples from the past) are inadvisable (as I have argued), aren’t anti-trust actions against purveyors of information and ideas equally inadvisable? But there is a qualitative difference between economic rapacity and what amounts to a war that is being waged by one segment of the nation against other segments of the nation. (See, for example, “The Subtle Authoritarianism of the ‘Liberal Order’“.) Government action to defend the besieged segments is therefore fitting and proper. (See “Preemptive (Cold) Civil War“.)
Economics and Liberty
This brings me to the gravest economic threat to liberty, which is state socialism and its variants: communism, fascism, and social democracy. All of them vest control of the economy in the state, if not through outright state ownership of the means of production, then through laws and regulations that dictate allowable types of economic output, the means and methods of its production, and its beneficiaries. The United States has long been burdened with what has been called a “mixed” economic system, which is in fact a social democracy — an economy that has many of the trappings of free-market capitalism but is in fact heavily managed by governments (federal, State, and local) in the service of “social justice” and various trendy causes.
The most recent of these is the puritanical, often hypocritical, and anti-scientific effort to rescue the planet from “climate change”. The opportunity cost of this futile undertaking, were it conducted according to the dictates of its most strident supporters, would be a vast share of the economic output of the Western world (inasmuch as Russia, China, India, and even Japan are disinclined to participate), thus demoting America and Western Europe to Third-World status and rendering them vulnerable to economic and military blackmail by Russia and China. (Old grudges die hard.) You can be sure, however, that even in their vastly diminished state, the Western “democracies” would find the resources with which to cosset the ruling class of politicians and their favorites.
Proponents of state action often defend it by adverting to the paradox of collective action, which is that individuals and firms, acting in what they perceive to be their own interests, can bring about a disaster that engulfs them. “Climate change” is the latest such so-called disaster. What the proponents of state action always omit to consider (or mention) is that state action itself can bring about a disaster that engulfs all of us. The attempt to control “climate change” is just such an action, and it is of the more dangerous kind because government programs, once started, are harder to turn around than the relatively modest and inexpensive projects of individuals and firms.
You may think that I have strayed a long way from the principles of economics. But I haven’t, if you’ve been following closely. What I have done — or tried to do — is put economic activity in perspective. Which is to say that I’ve tried to show that economic activity may be important and even crucial to our lives, but it is not the only important and crucial thing in our lives. Economic activity is shaped by government and culture. If the battle to contain government is successful, and if the battle to preserve a culture of personal responsibility and respect for traditional norms is successful, economic activity will thrive and be worth the striving.
Philosophical musings by a non-philosopher which are meant to be accessible to other non-philosophers.
I submit (with no claim to originality) that existence (what really is) is independent of knowledge (proposition A), but knowledge is impossible without existence (proposition B).
In proposition A, I include in existence those things that exist in the present, those things that have existed in the past, and the processes (happenings) by which past existences either end (e.g., death of an organism, collapse of a star) or become present existences (e.g., an older version of a living person, the formation of a new star). That which exists is real; existence is reality.
In proposition B, I mean knowledge as knowledge of that which exists, and not the kind of “knowledge” that arises from misperception, hallucination, erroneous deduction, lying, and so on. Much of what is called scientific knowledge is “knowledge” of the latter kind because, as scientists know (when they aren’t advocates), scientific knowledge is provisional. Proposition B implies that knowledge is something that human beings and other living organisms possess, to widely varying degrees of complexity. (A flower may “know” that the Sun is in a certain direction, but not in the same way that a human being knows it.) In what follows, I assume the perspective of human beings, including various compilations of knowledge resulting from human endeavors. (Aside: Knowledge is self-referential, in that it exists and is known to exist.)
An example of proposition A is the claim that there is a falling tree (it exists), even if no one sees, hears, or otherwise detects the tree falling. An example of proposition B is the converse of Cogito, ergo sum, I think, therefore I am; namely, I am, therefore I (a sentient being) am able to know that I am (exist).
Here’s a simple illustration of proposition A. You have a coin in your pocket, though I can’t see it. The coin is, and its existence in your pocket doesn’t depend on my act of observing it. You may not even know that there is a coin in your pocket. But it exists — it is — as you will discover later when you empty your pocket.
Here’s another one. Earth spins on its axis, even though the “average” person perceives it only indirectly in the daytime (by the apparent movement of the Sun) and has no easy way of perceiving it (without the aid of a Foucault pendulum) when it is dark or when asleep. Sunrise (or at least a diminution of darkness) is a simple bit of evidence for the reality of Earth spinning on its axis without our having perceived it.
Now for a somewhat more sophisticated illustration of proposition A. One interpretation of quantum mechanics is that a sub-atomic particle (really an electromagnetic phenomenon) exists in an indeterminate state until an observer measures it, at which time its state is determinate. There’s no question that the particle exists independently of observation (knowledge of the particle’s existence), but its specific characteristic (quantum state) is determined by the act of observation. Does this mean that existence of a specific kind depends on knowledge? No. It means that observation determines the state of the particle, which can then be known. Observation precedes knowledge, even if the gap is only infinitesimal. (A clear-cut case is the autopsy of a dead person to determine his cause of death. The autopsy didn’t cause the person’s death, but came after it as an act of observation.)
Regarding proposition B, there are known knowns, known unknowns, unknown unknowns, and unknown “knowns”. Examples:
Known knowns (real knowledge = true statements about existence) — The experiences of a conscious, sane, and honest person: I exist; I am eating; I had a dream last night; etc. (Recollections of details and events, however, are often mistaken, especially with the passage of time.)
Known unknowns (provisional statements of fact; things that must be or have been but which are not in evidence) — Scientific theories, hypotheses, data upon which these are based, and conclusions drawn from them. The immediate causes of the deaths of most persons who have died since the advent of Homo sapiens. The material process by which the universe came to be (i.e., what happened to cause the Big Bang, if there was a Big Bang).
Unknown unknowns (things that exist but are unknown to anyone) — Almost everything about the universe.
Unknown “knowns” (delusions and outright falsehoods accepted by some persons as facts) — Frauds, scientific and other. The apparent reality of a dream.
Regarding unknown “knowns”, one might dream of conversing with a dead person, for example. The conversation isn’t real, only the dream is. And it is real only to the dreamer. But it is real, nevertheless. And the brain activity that causes a dream is real even if the person in whom the activity occurs has no perception or memory of a dream. A dream is analogous to a movie about fictional characters. The movie is real but the fictional characters exist only in the script of the movie and the movie itself. The actors who play the fictional characters are themselves, not the fictional characters.
There is a fine line between known unknowns (provisional statements of fact) and unknown “knowns” (delusions and outright falsehoods). The former are statements about existence that are made in good faith. The latter are self-delusions of some kind (e.g., the apparent reality of a dream as it occurs), falsehoods that acquire the status of “truth” (e.g., George Washington’s false teeth were made of wood), or statements of “fact” that are made in bad faith (e.g., adjusting the historic temperature record to make the recent past seem warmer relative to the more distant past).
The moral of the story is that a doubting Thomas is a wise person.
Arnold Kling points to a paper by Horst W. J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning” (Policy Sciences, June 1973). As Kling says, the paper is “notable for the way in which it describes — in 1973 — the fallibility of experts relative to technocratic expectations”.
Among the authors’ many insights are these about government planning:
The kinds of problems that planners deal with — societal problems — are inherently different from the problems that scientists and perhaps some classes of engineers deal with. Planning problems are inherently wicked.
As distinguished from problems in the natural sciences, which are definable and separable and may have solutions that are findable, the problems of governmental planning — and especially those of social or policy planning — are ill-defined; and they rely upon elusive political judgment for resolution. (Not “solution.” Social problems are never solved. At best they are only re-solved — over and over again.) Permit us to draw a cartoon that will help clarify the distinction we intend.
The problems that scientists and engineers have usually focused upon are mostly “tame” or “benign” ones. As an example, consider a problem of mathematics, such as solving an equation; or the task of an organic chemist in analyzing the structure of some unknown compound; or that of the chessplayer attempting to accomplish checkmate in five moves. For each the mission is clear. It is clear, in turn, whether or not the problems have been solved.
Wicked problems, in contrast, have neither of these clarifying traits; and they include nearly all public policy issues — whether the question concerns the location of a freeway, the adjustment of a tax rate, the modification of school curricula, or the confrontation of crime….
In the sciences and in fields like mathematics, chess, puzzle-solving or mechanical engineering design, the problem-solver can try various runs without penalty. Whatever his outcome on these individual experimental runs, it doesn’t matter much to the subject-system or to the course of societal affairs. A lost chess game is seldom consequential for other chess games or for non-chess-players.
With wicked planning problems, however, every implemented solution is consequential. It leaves “traces” that cannot be undone. One cannot build a freeway to see how it works, and then easily correct it after unsatisfactory performance. Large public works are effectively irreversible, and the consequences they generate have long half-lives. Many people’s lives will have been irreversibly influenced, and large amounts of money will have been spent — another irreversible act. The same happens with most other large-scale public works and with virtually all public-service programs. The effects of an experimental curriculum will follow the pupils into their adult lives.
Rittel and Webber address a subject about which I know a lot, from first-hand experience — systems analysis. This is a loose discipline in which mathematical tools are applied to broad and seemingly intractable problems in an effort to arrive at “optimal” solutions to those problems. In fact, as Rittel and Webber say:
With arrogant confidence, the early systems analysts pronounced themselves ready to take on anyone’s perceived problem, diagnostically to discover its hidden character, and then, having exposed its true nature, skillfully to excise its root causes. Two decades of experience have worn the self-assurances thin. These analysts are coming to realize how valid their model really is, for they themselves have been caught by the very same diagnostic difficulties that troubled their clients.
Remember, that was written in 1973, a scant five years after Robert Strange McNamara — that supreme rationalist — left the Pentagon, having discovered that the Vietnam War wasn’t amenable to systems analysis. McNamara’s departure as secretary of defense also marked the demise of the power that had been wielded by his Systems Analysis Office (though the office lives on under a different name, having long since been pushed down the departmental hierarchy).
My own disillusionment with systems analysis came to a head at about the same time as Rittel and Webber published their paper. A paper that I wrote in 1981 (much to the consternation of my colleagues in the defense-analysis business) was an outgrowth of a memorandum that I had written in 1975 to the head of the defense think-tank where I worked. Here is the crux of the 1981 paper:
Aside from a natural urge for certainty, faith in quantitative models of warfare springs from the experience of World War II, when they seemed to lead to more effective tactics and equipment. But the foundation of this success was not the quantitative methods themselves. Rather, it was the fact that the methods were applied in wartime. Morse and Kimball put it well [in Methods of Operations Research (1946)]:
Operations research done separately from an administrator in charge of operations becomes an empty exercise. To be valuable it must be toughened by the repeated impact of hard operational facts and pressing day-by-day demands, and its scale of values must be repeatedly tested in the acid of use. Otherwise it may be philosophy, but it is hardly science. [p. 10]
Contrast this attitude with the attempts of analysts for the past twenty years to evaluate weapons, forces, and strategies with abstract models of combat. However elegant and internally consistent the models, they have remained as untested and untestable as the postulates of theology.
There is, of course, no valid test to apply to a warfare model. In peacetime, there is no enemy; in wartime, the enemy’s actions cannot be controlled….
Lacking pertinent data, an analyst is likely to resort to models of great complexity. Thus, if useful estimates of detection probabilities are unavailable, the detection process is modeled; if estimates of the outcomes of dogfights are unavailable, aerial combat is reduced to minutiae. Spurious accuracy replaces obvious inaccuracy; untestable hypotheses and unchecked calibrations multiply apace. Yet the analyst claims relative if not absolute accuracy, certifying that he has identified, measured, and properly linked, a priori, the parameters that differentiate weapons, forces, and strategies.
In the end, “reasonableness” is the only defense of warfare models of any stripe.
It is ironic that analysts must fall back upon the appeal to intuition that has been denied to military men — whose intuition at least flows from a life-or-death incentive to make good guesses when choosing weapons, forces, or strategies.
This generalizes to government planning of almost every kind, at every level, and certainly to the perpetually recurring — and badly mistaken — belief that an entire economy can be planned and its produce “equitably” distributed according to needs rather than abilities.
Thanks to someone (I don’t remember who it was), I found The Orthosphere, which I am now following. The first post that I read there is “Beware the Jaws of Ruthless Reason”, by Jonathan M. Smith. It is replete with statements that I fully endorse; for example:
I think we must grant that the Left is more slavishly addicted to Reason than the Right—or at least than the genuine Right. There are, needless to say, many spurious men of the Right who betray their spuriosity by boasting about their ruthless reasoning; but genuine men of the Right have always been chary of Reason because they see that Reason is ruthless.
And because Reason is ruthless, they see that it must be kept on a very stout chain.
When I say that Reason is ruthless, I mean that it respects nothing but itself, and that when it is let off its chain, it will therefore chew to pieces anything with which it disagrees. To see what this means, you have only to look at any specimen of modern architecture. Reason chewed away any ornament that did not answer the demands of Reason, and the naked box that remained was utterly inhuman….
A man of the Right does not deny that Reason is often a very good thing. But because it is not the only good thing, he knows it would be very bad to let it off of its chain to mutilate and maul everything else that is good. He finds that Reason turns up its nose at other things he approves, both in the world and in himself.
And that Reason will chew these things to pieces if he lets it….
Political theory is produced almost exclusively by the Left, for they have an idea that human felicity requires the discovery and universal application of a despotic principle. Equality is the despotic principle of the overt Left; Freedom is the despotic principle of the covert Left or spurious Right.
Now a genuine man of the Right does not deny that Equality and Freedom can often be very good things, but because they are not the only good things, he knows it would be very bad for them to become despotic principles that will mutilate and maul everything else that is good….
A genuine man of the Right will wish to conserve many principles. He sees that reason is good, but that despotic Reason will destroy loveliness and loyalty. He sees that equality is good, but that despotic Equality will destroy justice and love. He sees that freedom is good, but that despotic Freedom will destroy decency and solidarity.
This reminds me of one of my critiques of libertarianism, an offshoot of the Enlightenment and an ideology based on “reason”:
The Enlightenment included a range of ideas centered on reason as the primary source of knowledge….
Where reason is
the capacity of consciously making sense of things, establishing and verifying facts, applying logic, and adapting or justifying practices, institutions, and beliefs based on new or existing information.
But reason is in fact shaped by customs, instincts, erroneous beliefs, faulty logic, venal motivations, and unexamined prejudices. Objectivism, for example, is just another error-laden collection of “religious” dogmas, as discussed here, here, and here.
Sir Roger Scruton underscores the shallowness of reason in On Human Nature. Scruton’s point applies not only to libertarianism (i.e., classical liberalism) but also to its offshoot — modern “liberalism” — both of them rationalistic philosophies that bear no resemblance to conservatism, properly understood.
Here is the essential difference between conservatism and the varieties of liberalism, in Scruton’s words:
[W]e find near-universal agreement among American moral philosophers that individual autonomy and respect for rights are the root conceptions of moral order, with the state conceived either as an instrument for safeguarding autonomy or — if given a larger role — as an instrument for rectifying disadvantage in the name of “social justice.” The arguments given for these positions are invariably secular, egalitarian, and founded in an abstract idea of rational choice. And they are attractive arguments, since they justify both a public morality and a shared political order in ways that allow for the peaceful coexistence of people with different faiths, different commitments, and deep metaphysical disagreements. The picture of the moral life that I have presented is largely compatible with these arguments. But it also points to two important criticisms that might be made of them.
The first criticism is that the contractarian position fails to take our situation as organisms seriously. We are embodied beings, and our relations are mediated by our bodily presence. All of our most important emotions are bound up with this: erotic love, the love of children and parents, the attachment to home, the fear of death and suffering, the sympathy for others in their pain or fear — none of these things would make sense if it were not for our situation as organisms…. If we were disembodied rational agents — “noumenal selves”… — then our moral burdens would be lightly worn and would amount only to the side constraints required to reconcile the freedom of each of us with the equal freedom of our neighbors. But we are embodied beings, who are drawn to each other as such, trapped into erotic and familial emotions that create radical distinctions, unequal claims, fatal attachments, and territorial needs, and much of moral life is concerned with the negotiation of these dark regions of the psyche.
The second criticism is that our obligations are not and cannot be reduced to those that guarantee our mutual freedom. Noumenal selves come into a world unencumbered by ties and attachments for the very reason that they do not come into the world at all…. For us humans, who enter a world marked by the joys and sufferings of those who are making room for us, who enjoy protection in our early years and opportunities in our maturity, the field of obligation is wider than the field of choice. We are bound by ties that we never chose, and our world contains values and challenges that intrude from beyond the comfortable arena of our agreements. In the attempt to encompass these values and challenges, human beings have developed concepts that have little or no place in liberal theories of the social contract — concepts of the sacred and the sublime, of evil and redemption, that suggest a completely different orientation to the world than that assumed by modern moral philosophy.
You won’t find it at Wikipedia’s list of films with a 100-percent rating on Rotten Tomatoes, but you might find some movies that are worth watching. The list comprises only films with a critics’ consensus (staff-written summary) or at least 20 reviews. I went down the list and added to it the average rating given each film by users at the Internet Movie Database (IMDb). I also added my ratings of the films that I have seen and rated. There are several that I have seen but haven’t rated (indicated by *); they’re now too dim in my memory to rate retroactively.
To be precise, I worked my way down the list, which is organized chronologically, until I got well into the 2000s. I then saw that beginning in the late 1960s, the list became less and less about entertainment and more and more about propagandistic, leftist “documentaries”. I venture to say that every entry after 1999 is of that character. So the table below ends with the most recent movie in the Rotten Tomatoes list that I have seen and rated.
The table lists 198 films, beginning with The Cabinet of Dr. Caligari (1920) and ending with Toy Story 2 (1999). How good are the 198? Well, 195 of them have an IMDb rating of 7.0 or higher; I rarely consider watching a film with a rating below 7.0. Moreover, 71 of the 198 have an IMDb rating of 8.0 or higher; I consider a rating of 8.0 or higher to be a badge of excellence. (You will note that my ratings are generally close to those given by IMDb users, with some notable and glaring exceptions.)
Go here for the latest.
In “Ghosts of Thanksgiving Past” I recall family gatherings of long ago. “The Passing of Red Brick Schoolhouses and a Way of Life” laments the passing of the schoolhouses of my childhood, along with the innocence that was once a hallmark of non-urban America. In “‘Tis the Season for Nostalgia” I recall Christmases past.
I was reminded of those trips into the past by a post at The Federalist by Nathaniel Blake, “What Good Is Cheaper Stuff If It Comes At The Expense Of Community?”. It prompted me to recall other things that meant much to me (in hindsight): the long-vanished locally-owned stores that provided groceries, meat, sundries, haircuts, baked goods, hobby supplies, and more. The owners worked in their stores. They knew you, and you knew them. Many of them were neighbors. Their livelihoods depended not only on providing products and services at reasonable prices — prices that saved you a trip to the big city — but on their friendliness and reputation for honesty.
Of the many stores of that ilk that I remember from early childhood until I went to college, 60 to 75 years ago, only one is still in business. It’s even at the same location, though in a new building, and it doesn’t carry the range of hobby supplies (e.g., model kits and collectible stamps) that it did when I shopped there eons ago.
Here are the sites as they look now (or looked recently), arrayed roughly in the order in which I first saw them (* indicates original building):