Because Tucker didn’t state the point of the chart, I had to read his mind. He was probably trying to convey a message like this:
Defense spending was just “right” (i.e., close to zero) after demobilization from World War II.
Look at what has happened since then: Defense spending (in inflated dollars) has risen to a very large number, even though there hasn’t been a war on the scale of World War II.
The absence of a major war since World War II obviously means that the U.S. spends far too much on defense.
Defense spending, unlike domestic spending, is driven by the outside world, by what others could or would do to us, regardless of our delusions about their benignity. It is necessary to spend a lot on defense even when we are not at war, for two purposes: deterrence and preparedness.
With that thought in mind, let’s look at the indices in the following chart (government spending includes State and local as well as federal outlays):
Sources: Current-dollar values of government spending derived from Bureau of Economic Analysis, National Income and Product Accounts, Tables 3.1 and 3.95. Population statistics, constant-dollar GDP, and the GDP deflators applied to government spending derived from Measuring Worth (GDP – US data set).
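For readers who want to see the mechanics behind indices of this kind, here is a minimal sketch — mine, not the author’s actual worksheet — of the derivation implied by the sources above: convert current-dollar spending to constant dollars with the GDP deflator, then express every series (real spending, real GDP, population) as an index relative to a common base year. The column names, the base year, and the example numbers are illustrative assumptions, not data taken from the chart.

```python
# Illustrative sketch of building constant-dollar indices (base year = 100).
# All figures below are made up purely to show the arithmetic.

import pandas as pd

def to_index(series: pd.Series, years: pd.Series, base_year: int) -> pd.Series:
    """Rescale a series so that its value in base_year equals 100."""
    base = series[years == base_year].iloc[0]
    return 100 * series / base

df = pd.DataFrame({
    "year":        [1947, 1970, 2020],
    "defense":     [12.8, 81.7, 714.0],    # current-dollar outlays, $ billions (illustrative)
    "non_defense": [30.0, 200.0, 5000.0],  # current-dollar outlays, $ billions (illustrative)
    "deflator":    [0.125, 0.22, 1.13],    # GDP price deflator (illustrative)
    "real_gdp":    [2.0, 4.9, 18.4],       # constant-dollar GDP, $ trillions (illustrative)
    "population":  [144e6, 205e6, 331e6],  # resident population (illustrative)
})

base = 1947
indices = pd.DataFrame({
    "year": df["year"],
    "real defense":     to_index(df["defense"] / df["deflator"], df["year"], base),
    "real non-defense": to_index(df["non_defense"] / df["deflator"], df["year"], base),
    "real GDP":         to_index(df["real_gdp"], df["year"], base),
    "population":       to_index(df["population"], df["year"], base),
})
print(indices.round(0))
```

Putting every series on the same index scale is what allows the comparisons drawn below — for example, whether spending has outpaced GDP or population growth.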
What does the chart suggest? This:
The benchmark for “necessary” defense spending is World War II. Real defense spending has yet to return to that level. But, as a result of our foolish rush to demobilize after World War II, defense spending had to rise in response to Soviet- and Communist Chinese-backed aggression in Korea and the growing military power and aggressiveness of the Soviet Union. Subsequent “bumps” represent the Vietnam War; the Reagan defense buildup, which drove the USSR to its knees and thence to dissolution; and the squandered wars in Iraq and Afghanistan. The most recent rise in defense spending, due to Trump, was cut short by Biden, in keeping with his unarticulated but obvious policy of “accommodating” Russia and China.
The gorilla in the room is redistributive spending: transfer payments (Social Security, Medicare, Medicaid, and their expansion by Obamacare; food stamps; easier access to disability payments; more generous unemployment benefits; Covid-19 “stimmies”; etc., etc., etc.); subsidies (mainly for not growing crops and for wasteful “renewable energy” schemes); and interest on government debt — all of which rob Peter to pay Paul.
Non-defense spending hasn’t been ignored by any means.
All spending categories have outpaced GDP and (by a long shot) population. Defense spending should be driven by external threats, not population. Other government spending should be related to population, but it is obviously more strongly related to the greed of politicians (for votes) and of various interest groups (for other people’s money).
Non-defense spending (including transfer payments, etc.) is now almost five times as great as defense spending.
It is evident that defense spending is far too low. If it had risen sufficiently, Russia and China would have remained content to rebuild their economies and refrain from military adventurism. By the same token, the gargantua of non-defense spending (and the regulatory burden that goes with it) has decimated the U.S. economy (see this and this). The far more robust economy that would have resulted absent regulation and profligate spending on “social services” would have had ample room in it for voluntary charity to assist the truly needy (as opposed to the conveniently disabled and lazy).
I will end with this:
It is customary in democratic countries to deplore expenditures on armaments as conflicting with the requirements of the social services. There is a tendency to forget that the most important social service that a government can do for its people is to keep them alive and free. — Marshal of the Royal Air Force Sir John Slessor in Strategy for the West
For the last two years of Donald Trump’s presidency, he was all that stood between you and what you are now “enjoying” if you voted for Joe Biden: inflation, military weakness, high energy prices, and “wokeness”, to name the effects of some of “moderate” Biden’s policies. Wake up to the fact that your pearl-clutching (and a lot of Big Tech and media manipulation) helped to elect the most feckless president since Jimmy Carter, and helped to put Nancy Pelosi and Chuck Schumer in charge of Congress.
This is a good time to reproduce this post by Deroy Murdock at The American Spectator:
President Trump’s Policy Victories, From A to Z
Let no one say he didn’t deliver wins for conservatives.
A reader who calls himself BillyBob left a comment after my recent op-ed in The American Spectator on the Never Trumpers’ attack on the Republican U.S. Senate candidate in Georgia, Herschel Walker.
Here is what BillyBob wrote:
Liz [Cheney] is actually a true conservative. Trump has no value structure or political spine- all he cares about is “what more can I get for me”. Anyone who believes Trump is conservative is just daft. Kinda’ like Trump.
What utter nonsense! Whatever one thinks about President Donald J. Trump’s personality, his policies were the most conservative reforms that America has seen since President Ronald Wilson Reagan left office — and perhaps even before he arrived. Here is a partial list of what he got done in the White House in one term — even as he fought a totally unhinged Democrat Party, the ferocious news media, the vicious Deep State (including a virulently hostile FBI), and enough lies hurled against him to stretch Pinocchio’s nose to the size of a javelin.
a) The Tax Cuts and Jobs Act — a massive tax reduction that largely remains in place
b) A 21 percent top-corporate tax rate, down from 35 percent
c) Eight regulations erased for every new one imposed
d) School voucher program for Washington, D.C., reauthorized and funded
e) Three constitutionalist justices added to the U.S. Supreme Court
f) Some 200 conservative judges placed in lower federal courts
g) Keystone XL Pipeline approved
h) Oil drilling in a small, specific portion of the Arctic National Wildlife Refuge authorized
i) U.S. energy independence achieved
j) Critical race theory in federal programs dumped
k) Paris Climate Agreement abandoned
l) Iran Nuclear Deal ditched
m) Chinese Communist Party confronted and discredited
n) Islamic State group caliphate obliterated
o) U.S. embassy in Israel moved from Tel Aviv to Jerusalem
p) Four Middle Eastern peace deals with Israel and its neighbors signed
q) About 500 miles of southern-border wall constructed
r) Some 8,000 Opportunity Zones to revitalize low-income areas through market incentives activated
u) Executive order requiring that new federal buildings be designed in classical Greco-Roman style signed
v) North American Free Trade Agreement (NAFTA) replaced with the U.S.–Mexico–Canada Agreement — a new-and-improved free-trade pact
w) Free trade agreement with South Korea launched
x) Right to Try law allowing terminally ill patients access to experimental drugs signed
y) American military prowess restored and funded
z) Operation Warp Speed’s three COVID-19 vaccines delivered in record time via White House collaboration with the private sector
Donald J. Trump enjoyed enough victories to match the number of letters in the English alphabet. These 26 policies and achievements are all conservative. And there are many more (Space Force! Record-low black and Hispanic unemployment!).
Those who claim that President Trump is not a conservative and did not govern as one are either totally ignorant or wildly dishonest.
The saying goes: Luck is the residue of design. My version: The life one leads is — in the main, for most persons — the residue of choice.
There is a kind of person: one who drinks too much, who drives too fast, who spends money that he doesn’t have (or has little prospect of acquiring) on gadgets instead of useful things, who will not accept or hold onto a menial job because it is “beneath” him, who selects a mate for superficial reasons. Such a person is likely to lead a chaotic life — one filled with tension, frustration, and failure. Such a person is not deserving of charity because he is likely to squander it. And yet, the welfare state squanders tax-supported “charity” on such persons, thus encouraging their self-destructive behavior.
My home town once boasted fifteen schoolhouses that were built between the end of the Civil War and 1899. All but the high school were named for presidents of the United States: Adams, Buchanan, Fillmore, Harrison, Jackson, Jefferson, Madison, Monroe, Pierce, Polk, Taylor, Tyler, Van Buren, and Washington. Another of their ilk came along in the early 1900s; Lincoln was its name.
With the Adams School counting for two presidents — the second and sixth — there was a school for every president through Lincoln. Why Lincoln came late is a mystery to me. Lincoln was revered by us Northerners, and his picture was displayed proudly next to Washington’s in schools and municipal offices. We even celebrated Lincoln’s Birthday as a holiday distinct from Washington’s Birthday (a.k.a. Presidents’ Day).
More schools — some named for presidents — followed well into the 20th century, but only the fifteen that I’ve named were built in the style of the classic red-brick schoolhouse: two stories, a center hall, imposing staircase, tall windows, steep roof, and often a tower for the bell that the janitor rang to summon neighborhood children to school. (The Lincoln, as a latecomer, was L-shaped rather than boxy, but it was otherwise a classic red-brick schoolhouse, replete with a prominent bell tower.)
I attended three of the fifteen red-brick schoolhouses. My first was Polk School, where I began kindergarten two days after the formal surrender of Japan on September 2, 1945.
Here’s the Polk in its heyday:
Kindergarten convened in a ground-floor room at the back of the school, facing what seemed then like a large playground, with room for a softball field. The houses at the far end of the field would have been easy targets for adult players, but it would have been a rare feat for a student to hit one over the fence that separated the playground from the houses.
In those innocent days, school-children in our small city got to school and back home by walking. Here’s the route that I followed as a kindergartener:
A kindergartener walking several blocks between home and school, usually alone most of the way? Unheard of today, it seems. But in those days predators were almost unheard of. And, as a practical matter, most families had only one car, which the working (outside-the-home) parent (then known as the father and head-of-household) used on weekdays for travel to and from his job. Moreover, the exercise of walking as much as a mile each way was considered good for growing children — and it was.
The route between my home and Polk School was 0.6 mile in length, and it crossed one busy street. Along that street were designated crossing points, at which stood Safety Patrol Boys, usually 6th-graders, who ensured that students crossed only when it was safe to do so. They didn’t stand in the street and stop oncoming traffic; they simply judged when students could safely cross, and gave them the “green light” by blowing a whistle. In the several years of my elementary-school career, I never saw or heard of a close call, let alone an injury or a fatality.
I began at Polk School because the school closest to my home, Madison School, didn’t have kindergarten. I went to Madison for 1st grade. It was a gloomy pile:
Madison was shuttered after my year there, so I returned to Polk for 2nd and 3rd grades. Madison stood empty for a few years, and was razed in the late 1940s or early 1950s. Polk was shuttered sometime in the 1950s, and eventually was razed after being used for many years as a school-district warehouse.
The former site of Madison School now hosts “affordable housing”:
There’s a public playground where Polk School stood:
I spent two more years — 4th and 5th grades — in another red-brick schoolhouse: Tyler School. It’s still there, though it hasn’t been used as a school for many decades. It has served as an apartment house and a halfway house for addicts. It now looks like this:
The only other survivor among the fifteen red-brick schoolhouses is Monroe School, which now seems to serve as an apartment building:
Tyler and Monroe Schools are ghosts from America’s past — a past that’s irretrievable. It was a time of innocence, when America’s wars were fought to victory; when children could safely roam; when marriage was between man and woman, and usually for life; when deviant behavior was discouraged, not “celebrated”; when a high-school diploma and four-year degree meant something, and were worth something; when the state wasn’t the enemy of the church; when politics didn’t intrude into science; when people resorted to government in desperation, not out of habit; and when people had real friends, not Facebook “friends.”
It is with regret that I mention two grave errors in Abraham Lincoln’s Gettysburg Address. The speech begins with this: “Four score and seven years ago [in 1776] our fathers brought forth, upon this continent, a new nation”. Lincoln later characterizes the government as being “of the people, by the people, for the people”. Both statements are dead wrong.
Lincoln is far from the only influential person to have perpetrated myths about the founding of the United States and the character of the Constitution. These many myths help to sustain the unconscionably oppressive regime that now governs the United States.
It is the purpose of this essay to explode the myths, and to suggest ways to end the oppression.
I. THE CONSTITUTION OF 1787: A BLUEPRINT FOR A NEW NATION
As I will explain, the Constitution of the United States was a contract between the States that ratified it. The contract became binding not only on the States but also on their creature, the national government. (I use “national” throughout instead of “federal” because the Constitution created a new government of strictly limited but national power.)
This written Constitution — not the national government or any branch of it — was to be the supreme law of the land. As the supreme law, it was meant to be a bulwark against the expansion of the powers of the national government beyond those expressly granted to it by the Constitution.
There are many influential parties, justices of the Supreme Court included, who believe that the Constitution means what a majority of the Court says it means. But, as Randy Barnett puts it, the Supreme Court
does not have the power to change the written Constitution, which always remains there to be revived when there is a political and judicial will to do so. For example, after the Supreme Court gutted the Fourteenth Amendment during Reconstruction, it remained a part of the written Constitution for a future more enlightened Supreme Court to put to good use. By the same token, the current Supreme Court can still make serious mistakes about the Constitution. Because the Constitution is in writing, there is an external “there” there by which to assess its opinions.
The real meaning of the Constitution is fixed until it is amended through the process prescribed in the Constitution itself. It is not, unlike the British constitution, a do-it-yourself project. The American Constitution was designed by master architects, who meant it to be executed as it was written. It is a blueprint, not a Rorschach test. Liberty is still possible under the American Constitution because it is still intact, waiting to be read and enforced correctly. (See Michael Stokes Paulsen, “Originalism: A Logical Necessity“, National Review, September 13, 2018. See also William M. Treanor, “Framer’s Intent: Gouverneur Morris, the Committee of Style, and the Creation of the Federalist Constitution“, Georgetown University Law Center, May 19, 2019, for a discussion of the ways in which the Committee of Style seems to have altered the Constitution as adopted piecemeal by the delegates to the convention of 1787, and why — especially in the view of Justice Clarence Thomas — the changes made by the Committee are valid because it was the Committee’s revised text that the convention approved.)
II. THE CONSTITUTION AS A CONTRACT FOR A NEW NATION
James Madison, known as the Father of the Constitution, characterized it as a contract, though he used an older word, namely, compact:
The [third Virginia] resolution declares, first, that “it views the powers of the federal government as resulting from the compact to which the states are parties;” in other words, that the federal powers are derived from the Constitution; and that the Constitution is a compact to which the states are parties. [Report on the Virginia Resolutions to the Virginia House of Delegates, January 1800]
What else could it be? Romantic rhetoric to the contrary notwithstanding, the Constitution is not the equivalent of the Ten Commandments or the Bible, handed directly from God or inspired by Him. The Constitution represents a practical arrangement through which the States that ratified it agreed to establish a national government with some degree of power over the States, but power that was carefully limited by enumeration.
The main purposes for establishing the national government were to provide for the common defense of the States, to ensure the free flow of commerce among the States, and to present a single face to the rest of the world in matters of trade and foreign policy.
These new arrangements represented a drastic change from the Confederation of 1781-89, which was more like a debating club or the United Nations. The Articles of Confederation and Perpetual Union had many provisions resembling those of the Constitution, but their enforcement was relegated to the Congress of the Confederation, that is, to the representatives of the States, each of which had one vote. A committee of the States could act when Congress was not in session, but only on the authority of nine of the thirteen States. In fact, all significant acts, including the creation and maintenance of armed forces and declarations of war, required the approval of nine of the thirteen States.
The only way to revise this ponderous arrangement was to tear it up and start over. That was not the original aim of the delegates from twelve of the States who convened in Philadelphia in the summer of 1787 to amend the Articles. (Rhode Island boycotted the convention in Philadelphia and was the last State to ratify the Constitution.) But that is what they did.
The Congress of the Confederation agreed to submit the proposed Constitution to the States for ratification. And when eleven of the States had ratified it, the Congress declared the new Constitution operative. At that point, the Confederation ceased to exist. So much for “perpetual union”.
The Constitution created a new nation, in which membership was voluntary. Non-ratifying States would not have been members of the new nation.
If the Constitution had not been ratified by at least nine States, it would have gone into the trash bin of history. The Confederation might have stumbled along as an ineffectual conclave of thirteen doggedly independent States. Or it might have been abandoned altogether, leaving thirteen disunited States in its wake, some of which might have formed other nations, trading partnerships, or mutual-defense alliances.
III. A CONTRACT BETWEEN THE STATES OR “THE PEOPLE”?
A crucial and often misrepresented aspect of the Constitution is the role of the States in its adoption. There is a prevailing view that the Constitution was adopted by “the people”, not by the States. No State or States, therefore, may withdraw from the constitutional contract because it is not theirs to begin with.
This view is taken because it argues against secession. Anti-secessionism is a religion whose adherents either hold a mistaken, romantic view of the genesis of the nation or dislike the word “secession” because of its association with the slave-holding States that did in fact secede.
The idea that the Constitution is the creature of “the people” is balderdash. It is balderdash of a high order because it was lent credence by none other than John Marshall, Chief Justice of the Supreme Court from 1801 to 1835, whose many opinions shaped constitutional jurisprudence for better and for worse. Consider this passage from Marshall’s opinion in McCulloch v. Maryland (1819):
The Convention which framed the constitution was indeed elected by the State legislatures. But the instrument, when it came from their hands, was a mere proposal, without obligation, or pretensions to it. It was reported to the then existing Congress of the United States, with a request that it might “be submitted to a Convention of Delegates, chosen in each State by the people thereof, under the recommendation of its Legislature, for their assent and ratification.” This mode of proceeding was adopted; and by the Convention, by Congress, and by the State Legislatures, the instrument was submitted to the people. They acted upon it in the only manner in which they can act safely, effectively, and wisely, on such a subject, by assembling in Convention. It is true, they assembled in their several States–and where else should they have assembled? No political dreamer was ever wild enough to think of breaking down the lines which separate the States, and of compounding the American people into one common mass. Of consequence, when they act, they act in their States. But the measures they adopt do not, on that account, cease to be the measures of the people themselves, or become the measures of the State governments.
Marshall argues against a strawman of his own construction: the insinuation that the Constitution was somehow ratified by “the American people”. He does not come out and say that, but he implies that holding the ratifying conventions in the various States was necessary because of the impracticality of holding a national convention of “the people”. The fact is that the conventions in the States were of modest size. The table given here shows that the total number of delegates voting yea and nay in each State ranged from a low of 26 to a high of 355, for an average of 127 per State. This was hardly anything like “one common mass” of the American people. The 1,648 delegates who voted in the thirteen conventions represented about two-tenths of one percent of the free white males aged 16 and older at the time (and presumably far less than one-half of one percent of the free-white males considered eligible for a convention).
The fact is that the ratifying conventions were held in the States because it was left to each State whether to join the new union or remain independent. The conventions were conducted under the auspices of the State legislatures. They were, in effect, special committees with but one duty: to decide for each State whether that State would join the union.
This view is supported by Madison’s contemporaneous account of the ratification process:
[I]t appears, on one hand, that the Constitution is to be founded on the assent and ratification of the people of America, given by deputies elected for the special purpose; but, on the other, that this assent and ratification is to be given by the people, not as individuals composing one entire nation, but as composing the distinct and independent States to which they respectively belong. It is to be the assent and ratification of the several States, derived from the supreme authority in each State, the authority of the people themselves. The act, therefore, establishing the Constitution, will not be a national, but a federal act. [The Federalist No. 39, as published in the Independent Journal, January 16, 1788]
Marshall’s fiction is compounded by the familiar image of the first words of the Preamble of the Constitution:
“We the People” is a brilliant public-relations ploy, but it has nothing to do with the facts of the case. The ratification of the Constitution was not the “will of the people” of the entire nation. It was the will of a tiny fraction of the people of each State that ratified it, and which might well have chosen to reject it.
IV. THE ABROGATION OF THE CONSTITUTIONAL CONTRACT
From the moment of the creation of the national government in 1789, that government was bound to honor the constitutional contract from which it arose. The national government has breached the contract by exceeding the scope of power granted to it by the constitutional contract. Immense, illegitimate power has accrued to the national government through the generations because of myriad laws, regulations, and court rulings that violate the real Constitution and distort its meaning.
The constitutional contract provides for:
primacy of the federal Constitution and of constitutional laws over those of the States
collective obligations of the States, as the united States, and individual obligations of the States to each other
structure of the national government — the three branches, elections and appointments to their offices, and basic legislative procedures
powers of the three branches
division of powers between the States and the national government
rights and privileges of citizens
a process for amending the Constitution.
The principles embodied in the details of the contract are few and simple:
The Constitution and constitutional laws are the supreme law of the land, within the clearly delimited scope of the Constitution. As the ardent nationalist Alexander Hamilton explains in Federalist No. 33, the Constitution “expressly confines this supremacy to laws made pursuant to the Constitution”.
The national government has no powers other than those provided by the Constitution.
The rights of citizens include not only those rights specified in the Constitution but also any unspecified rights that do not conflict with powers expressly granted the national government or reserved by the States in the creation of the national government.
Moreover, the “checks and balances” in the Constitution are meant to limit the national government’s ability to act, even within its sphere of authority. In the legislative branch, neither the House of Representatives nor the Senate can pass a law unilaterally. The president, in his primary constitutional role as head of the executive branch, must sign acts of Congress before they can become law, and may veto acts of Congress — and Congress may, in turn, override his vetoes. From its position atop the judicial branch, the Supreme Court is supposed to decide cases “arising under” (within the scope of) the Constitution, not to change the Constitution without benefit of an amendment adopted as expressly provided in the Constitution.
The Constitution itself defines the sphere of authority of the national government and balances that authority against the authority of the States and the rights of citizens. Although the Constitution specifies certain powers of the executive and judiciary (e.g., commanding the armed forces and judging cases arising under the Constitution), the national government’s power rests squarely upon the legislative authority of Congress, as defined in Article I, Sections 8, 9, and 10.
Nevertheless, over the generations — and especially since the New Deal of Franklin D. Roosevelt — various acts of Congress, the executive branch, and the judicial branch have usurped the powers and rights of States and citizens. This has happened because of deliberate misreadings of the real Constitution; for example:
The phrase “promote the general Welfare” in the Preamble refers to a desired result of the adoption of the Constitution. It is not an edict to redistribute income and wealth.
The phrase “general Welfare” in Article I, Section 8, is meant to place a further limit on the specific powers granted to Congress in the same section of the Constitution. Congress is supposed to exercise those powers for the benefit of all citizens and not for the benefit of the citizens of specific States or regions.
The power of Congress to tax is granted in Article I, Section 8, to enable Congress to execute its specific powers. This limited power has been aggrandized into a general power of taxation for any purpose, constitutional or unconstitutional.
The power of Congress “to regulate Commerce … among the several States” — also granted in Article I, Section 8 — is meant to prevent the States from restricting or distorting the terms of trade across their borders, not to grant the national government the unlimited statutory and regulatory authority that it now has, thanks to the Supreme Court.
In Article I, Section 8, the authority of Congress “[t]o make all Laws which shall be necessary and proper for carrying into Execution the foregoing powers, and all other powers vested by this Constitution in the Government of the United States, or in any department or officer thereof” has been distorted out of all recognition. The words “necessary and proper” are meant to apply to the exercise of Congress’s specific powers. They are not a license to expand those powers on the pretext that the new powers have something to do with those actually granted by the Constitution.
The “equal protection” clause of Amendment XIV — “nor shall any State … deny to any person within its jurisdiction the equal protection of the laws” — is meant to secure the legal equality of those former slaves whose freedom had been secured by Amendment XIII. Amendment XIV has become, instead, an excuse for legislation, executive orders, and judicial decisions that grant special privileges to specific, “protected” groups by curtailing the liberty of those who cannot claim affiliation with one or another of the “protected” groups.
All of that and more is documented in a biting paper, “Our Perfect, Perfect Constitution“, by Michael Stokes Paulsen, Distinguished University Chair and Professor of Law, University of St. Thomas (Minnesota). I will not excerpt the paper because it is short and deserves to be read whole. Instead, I offer my shorter, unschooled version of the Constitution as it now stands:
Congress may pass any law about anything.
The president and the independent regulatory agencies created by Congress may do just about anything they want to do because of (a) delegations of power by Congress and (b) sheer willfulness on the part of the president and the regulatory agencies.
The Supreme Court may rewrite law at will, regardless of the written Constitution, especially for the purposes of (a) enabling Congress to obliterate social and economic liberty, and (b) disabling the ability of the defense and law-enforcement forces of the United States to defend the life, liberty, and property of Americans.
V. THE BASES OF ABROGATION
A. The Framers’ Fatal Error
The wise men who framed the Constitution saw that a legislature could act like a mob; thus:
[I]t may be concluded that a pure democracy, by which I mean a society consisting of a small number of citizens, who assemble and administer the government in person, can admit of no cure for the mischiefs of faction. A common passion or interest will, in almost every case, be felt by a majority of the whole; a communication and concert result from the form of government itself; and there is nothing to check the inducements to sacrifice the weaker party or an obnoxious individual.
Why has government been instituted at all? Because the passions of men will not conform to the dictates of reason and justice, without constraint. Has it been found that bodies of men act with more rectitude or greater disinterestedness than individuals? The contrary of this has been inferred by all accurate observers of the conduct of mankind; and the inference is founded upon obvious reasons. Regard to reputation has a less active influence, when the infamy of a bad action is to be divided among a number than when it is to fall singly upon one. A spirit of faction, which is apt to mingle its poison in the deliberations of all bodies of men, will often hurry the persons of whom they are composed into improprieties and excesses, for which they would blush in a private capacity.
[T]he more multitudinous a representative assembly may be rendered, the more it will partake of the infirmities incident to collective meetings of the people. Ignorance will be the dupe of cunning, and passion the slave of sophistry and declamation. The people can never err more than in supposing that by multiplying their representatives beyond a certain limit, they strengthen the barrier against the government of a few.
[T]here are particular moments in public affairs when the people, stimulated by some irregular passion, or some illicit advantage, or misled by the artful misrepresentations of interested men, may call for measures which they themselves will afterwards be the most ready to lament and condemn. In these critical moments, how salutary will be the interference of some temperate and respectable body of citizens, in order to check the misguided career, and to suspend the blow meditated by the people against themselves, until reason, justice, and truth can regain their authority over the public mind? What bitter anguish would not the people of Athens have often escaped if their government had contained so provident a safeguard against the tyranny of their own passions? Popular liberty might then have escaped the indelible reproach of decreeing to the same citizens the hemlock on one day and statues on the next.
The republican principle demands that the deliberate sense of the community should govern the conduct of those to whom they intrust the management of their affairs; but it does not require an unqualified complaisance to every sudden breeze of passion, or to every transient impulse which the people may receive from the arts of men, who flatter their prejudices to betray their interests. It is a just observation, that the people commonly intend the public good. This often applies to their very errors. But their good sense would despise the adulator who should pretend that they always reason right about the means of promoting it.
The primary inducement to conferring the power in question [the veto] upon the Executive is, to enable him to defend himself; the secondary one is to increase the chances in favor of the community against the passing of bad laws, through haste, inadvertence, or design. The oftener the measure is brought under examination, the greater the diversity in the situations of those who are to examine it, the less must be the danger of those errors which flow from want of due deliberation, or of those missteps which proceed from the contagion of some common passion or interest.
For all of their wisdom, however, the Framers were far too optimistic about the effectiveness of the checks and balances in their design. Consider this, from Hamilton:
It may … be observed that the supposed danger of judiciary encroachments on the legislative authority, which has been upon many occasions reiterated, is in reality a phantom. Particular misconstructions and contraventions of the will of the legislature may now and then happen; but they can never be so extensive as to amount to an inconvenience, or in any sensible degree to affect the order of the political system. This may be inferred with certainty, from the general nature of the judicial power, from the objects to which it relates, from the manner in which it is exercised, from its comparative weakness, and from its total incapacity to support its usurpations by force. And the inference is greatly fortified by the consideration of the important constitutional check which the power of instituting impeachments in one part of the legislative body, and of determining upon them in the other, would give to that body upon the members of the judicial department. This is alone a complete security. There never can be danger that the judges, by a series of deliberate usurpations on the authority of the legislature, would hazard the united resentment of the body intrusted with it, while this body was possessed of the means of punishing their presumption, by degrading them from their stations. While this ought to remove all apprehensions on the subject, it affords, at the same time, a cogent argument for constituting the Senate a court for the trial of impeachments. [Federalist No. 81]
Hamilton’s misplaced faith in the Constitution’s checks and balances is an example of what I call the Framers’ fatal error. The Framers underestimated the will to power that animates office-holders. The Constitution’s wonderful design — horizontal and vertical separation of powers — which worked rather well until the late 1800s, cracked under the strain of populism, as the central government began to impose national economic regulation at the behest of muckrakers and do-gooders. The Framers’ design then broke under the burden of the Great Depression, as the Supreme Court of the 1930s (and since) has enabled the national government to impose its will in matters far beyond its constitutional remit.
The Framers’ fundamental error can be found in Madison’s Federalist No. 51. Madison was correct in this:
It is of great importance in a republic not only to guard the society against the oppression of its rulers, but to guard one part of the society against the injustice of the other part. Different interests necessarily exist in different classes of citizens. If a majority be united by a common interest, the rights of the minority will be insecure.
But Madison then made the error of assuming that, under a central government, liberty is guarded by a diversity of interests:
[One method] of providing against this evil [is] … by comprehending in the society so many separate descriptions of citizens as will render an unjust combination of a majority of the whole very improbable, if not impracticable…. [This] method will be exemplified in the federal republic of the United States. Whilst all authority in it will be derived from and dependent on the society, the society itself will be broken into so many parts, interests, and classes of citizens, that the rights of individuals, or of the minority, will be in little danger from interested combinations of the majority.
In a free government the security for civil rights must be the same as that for religious rights. It consists in the one case in the multiplicity of interests, and in the other in the multiplicity of sects. The degree of security in both cases will depend on the number of interests and sects; and this may be presumed to depend on the extent of country and number of people comprehended under the same government. This view of the subject must particularly recommend a proper federal system to all the sincere and considerate friends of republican government, since it shows that in exact proportion as the territory of the Union may be formed into more circumscribed Confederacies, or States, oppressive combinations of a majority will be facilitated: the best security, under the republican forms, for the rights of every class of citizens, will be diminished: and consequently the stability and independence of some member of the government, the only other security, must be proportionately increased.
Madison then went on to contradict what he had said in Federalist No. 46 about the States being a bulwark of liberty:
It can be little doubted that if the State of Rhode Island was separated from the Confederacy and left to itself, the insecurity of rights under the popular form of government within such narrow limits would be displayed by such reiterated oppressions of factious majorities that some power altogether independent of the people would soon be called for by the voice of the very factions whose misrule had proved the necessity of it. In the extended republic of the United States, and among the great variety of interests, parties, and sects which it embraces, a coalition of a majority of the whole society could seldom take place on any other principles than those of justice and the general good; whilst there being thus less danger to a minor from the will of a major party, there must be less pretext, also, to provide for the security of the former, by introducing into the government a will not dependent on the latter, or, in other words, a will independent of the society itself. It is no less certain than it is important, notwithstanding the contrary opinions which have been entertained, that the larger the society, provided it lie within a practical sphere, the more duly capable it will be of self-government. And happily for the REPUBLICAN CAUSE, the practicable sphere may be carried to a very great extent, by a judicious modification and mixture of the FEDERAL PRINCIPLE.
Madison assumed (or asserted) that, in creating a new national government with powers greatly exceeding those of the Confederation, a majority of States would not tyrannize the minority and that a collection of minorities with overlapping interests would not concert to tyrannize the majority. Madison was so anxious to see the Constitution ratified that he oversold himself and the States’ ratifying conventions on the ability of the national government to hold itself in check. Thus the Constitution is lamentably silent on nullification and secession, which are real checks on power.
B. The New Deal and Beyond
Though the constitutional contract had not been strictly adhered to for some time, it began to unravel in earnest with the onset of the Great Depression, which led to the election of Franklin D. Roosevelt and the New Deal.
What went wrong? And how did it go wrong so quickly? Think back to 1928, when Americans were more prosperous than ever and the GOP had swept to its third consecutive lopsided victory in a presidential race. All it took to snatch disaster from the jaws of delirium was a stock-market crash in 1929 (fueled by the Fed) that turned into a recession that turned into a depression (also because of the Fed). The depression became the Great Depression, and it lasted until the eve of World War II, because of the activist policies of Herbert Hoover and Franklin Roosevelt, which suppressed recovery instead of encouraging it. There was even a recession (1937-38) within the depression, and the national unemployment rate was still 15 percent in 1940. It took the biggest war effort in the history of the United States to bring the unemployment rate back to its pre-depression level.
From that relatively brief but deeply dismal era sprang a new religion: faith in the national government to bring peace and prosperity to the land. Most Americans of the era — like most human beings of every era — did not and could not see that government is the problem, not the solution. Victory in World War II, which required central planning and a commandeered economy, helped to expunge the bitter taste of the Great Depression. And, coming as it did on the heels of the Great Depression, it reinforced the desperate belief — shared by too many Americans — that salvation is to be found in big government.
The beneficial workings of the invisible hand of competitive cooperation are just too subtle for most people to grasp. The promise of a quick fix by confident-sounding politicians is too alluring. FDR became a savior-figure because he talked a good game and was an inspiring war leader, though he succumbed to pro-Soviet advice.
progressivism is … broadly accepted by the American public, inculcated through generations of progressive dominance of education and the media (whether that media is journalism or entertainment). Certainly Democrats embrace it. Now the political success of Donald J. Trump has opened the eyes of the Right to the fact that Republicans largely accept it….
Republicans have occasionally succeeded in slowing the rate at which America has become more progressive. President Reagan was able to cut income tax rates and increase defense spending, but accepted tax increases to kick the can on entitlements and could not convince a Democratic Congress to reduce spending generally. Subsequent administrations generally have been worse. A Republican Congress pressured Bill Clinton into keeping his promise on welfare reform after two vetoes. He did so during a period when the end of the Cold War and the revenues from the tech bubble allowed Washington to balance budgets on the Pentagon’s back. Unsurprisingly, welfare reform has eroded in the ensuing decades.
Accordingly, the big picture remains largely unchanged. Entitlements are not reformed, let alone privatized. To the contrary, Medicare was expanded during a GOP administration, if less so than it would have been under a Democratic regime…. Programs are almost never eliminated, let alone departments.
The Right also loses most cultural battles, excepting abortion and gun rights. Notably, the inroads on abortion may be due as much to the invention and deployment of the sonogram as the steadfastness of the pro-life movement. Otherwise, political and cultural progressivism has been successful in their march through the institutions, including education, religion, and the family.
Curricula increasingly conform to the progressive fashions of the moment, producing generations of precious snowflakes unequipped even to engage in the critical thinking public schools claim to prioritize over an understanding of the ages of wisdom that made us a free and prosperous people. Church membership and attendance continues their long-term decline. A country that seriously debated school prayer 30 years ago now debates whether Christians must be forced to serve same-sex weddings.
Marriage rates continue their long-term decline. Divorce rates have declined from the highs reached during the generation following the sexual revolution, but has generally increased over the course of the century during which progressivism has taken hold (despite the declining marriage rate). Those advocating reform of the nation’s various no-fault divorce laws are few and generally considered fringe. [“Americans Are As Deluded As Our Elites“, The Federalist, June 26, 2016]
There’s more, but disregard Henry’s reification of America when he should write “most Americans”:
Meanwhile, America has voted for decade after decade of tax-and-spend, borrow-and-spend, or some hybrid of the two. If the white working class is now discontented with the government’s failure to redress their grievances, this is in no small part due to the ingrained American expectation that government will do so, based on the observation that government typically hungers to increase government dependency (not that the white working class would use these terms).…
In sum, while it is correct to note that elites are not doing their jobs well, it is more difficult to conclude that elites have not been responding to the political demands of the American public as much as they have driven them.…
The presidential nominees our two major parties have chosen are largely viewed as awful. But Hillary Clinton and Donald Trump offer two slightly different versions of the same delusion: that progressivism works, if only the elites were not so stupid. This delusion is what most Americans currently want to believe.
[g]overnment in the United States, especially at the federal level, has become more about transfer payments and less about provision of goods and services.…
[There has been an] overall upward rise [of transfer payments] in the last half-century from 5% of GDP back in the 1960s to about 15% of GDP in the last few years….
The political economy of such a shift is simple enough: programs that send money to lots of people tend to be popular. But I would hypothesize that this ongoing shift not only reflects voter preferences, but also affect how Americans tend to perceive the main purposes of the federal government. Many Americans have become more inclined to think of federal budget policy not in terms of goods or services or investments that it might perform, but in terms of programs that send out checks. [“The Transition to Transfer Payment Government“, Conversable Economist, July 1, 2016]
VI. VIABLE REMEDIES: SECESSION AND PARTITION
What lies ahead? Not everyone is addicted to government. There are millions of Americans who want less of it — a lot less — rather than more of it. Several options are discussed at length in “A National Divorce”. Here, I borrow from the portions of that post which address secession and partition — the best of the lot. The section on secession repeats some of the arguments made above.
A. Secession
In accordance with the doctrine of departmentalism, a State may be tempted to nullify an unconstitutional act of the national government. But there are probably many such acts that the State (or a preponderance of its citizens) would wish to nullify. Why do it piecemeal — and risk intervention by the national government for the sake of a single issue — when a sweeping solution is at hand? The sweeping solution, of course, is secession.
Secession is a legitimate constitutional act — a legal act, in other words — conventional wisdom to the contrary notwithstanding.
The best way to show that secession is legal is to construct a legal case for it, in the form of a resolution of secession:
In Convention, __________ 20__.
The Declaration of the representatives of the people of the State of _______________.
It has become necessary for the people of _______________ to dissolve the political bands which have connected them with the United States of America, and to assume the separate and equal status of an independent nation. A decent respect for the opinions of mankind requires that the people of _______________ should declare the causes which impel them to the separation, and explain its legality.
The Constitution is a contract — a compact in the language of the Framers. The parties to the compact are not only the States but also the national government created by the Constitution.
It was by the grace of nine States that the Constitution became effective in 1789. Those nine States voluntarily created a new nation and national government and, at the same time, voluntarily ceded to that government certain specified and limited powers. The States and their people were given to understand that, in return for the powers granted it, the central government would exercise those powers for the benefit of the States and their people. Every State subsequently admitted to the union has subscribed to the Constitution with the same understanding as the nine States whose ratification effected it.
Lest there be any question about the status of the Constitution as a compact, we turn to James Madison, who is often called the Father of the Constitution. Madison, in a letter to Daniel Webster dated March 15, 1833, addresses
the question whether the Constitution of the U.S. was formed by the people or by the States, now under a theoretic discussion by animated partizans.
Madison continues:
It is fortunate when disputed theories, can be decided by undisputed facts. And here the undisputed fact is, that the Constitution was made by the people, but as imbodied into the several states, who were parties to it and therefore made by the States in their highest authoritative capacity.
“That this Assembly doth explicitly and peremptorily declare, that it views the powers of the federal government, as resulting from the compact to which the states are parties, as limited by the plain sense and intention of the instrument constituting that compact–as no further valid than they are authorized by the grants enumerated in that compact; and that, in case of a deliberate, palpable, and dangerous exercise of other powers, not granted by the said compact, the states who are parties thereto have the right, and are in duty bound, to interpose, for arresting the progress of the evil and for maintaining, within their respective limits, the authorities, rights, and liberties, appertaining to them.”…
The resolution declares, first, that “it views the powers of the federal government as resulting from the compact to which the states are parties;” in other words, that the federal powers are derived from the Constitution; and that the Constitution is a compact to which the states are parties….
The other position involved in this branch of the resolution, namely, “that the states are parties to the Constitution,” or compact, is, in the judgment of the committee, equally free from objection…. [I]n that sense the Constitution was submitted to the “states;” in that sense the “states” ratified it; and in that sense of the term “states,” they are consequently parties to the compact from which the powers of the federal government result. . . .
. . . The Constitution of the United States was formed by the sanction of the states, given by each in its sovereign capacity.
Finally, in The Federalist No. 39, which informed the debates in the various States about ratification, Madison says that
the Constitution is to be founded on the assent and ratification of the people of America, given by deputies elected for the special purpose; but, on the other, that this assent and ratification is to be given by the people, not as individuals composing one entire nation, but as composing the distinct and independent States to which they respectively belong. It is to be the assent and ratification of the several States, derived from the supreme authority in each State, the authority of the people themselves. . . .
That it will be a federal and not a national act, as these terms are understood by the objectors; the act of the people, as forming so many independent States, not as forming one aggregate nation, is obvious from this single consideration, that it is to result neither from the decision of a majority of the people of the Union, nor from that of a majority of the States. It must result from the unanimous assent of the several States that are parties to it, differing no otherwise from their ordinary assent than in its being expressed, not by the legislative authority, but by that of the people themselves. Were the people regarded in this transaction as forming one nation, the will of the majority of the whole people of the United States would bind the minority, in the same manner as the majority in each State must bind the minority; and the will of the majority must be determined either by a comparison of the individual votes, or by considering the will of the majority of the States as evidence of the will of a majority of the people of the United States. Neither of these rules have been adopted. Each State, in ratifying the Constitution, is considered as a sovereign body, independent of all others, and only to be bound by its own voluntary act.
Madison leaves no doubt about the continued sovereignty of each State and its people. The remaining question is this: On what grounds, if any, may a State withdraw from the compact into which it entered voluntarily?
There is a judicial myth — articulated by a majority of the United States Supreme Court in Texas v. White (1869) — that States may not withdraw from the compact because the union of States is perpetual:
The Union of the States never was a purely artificial and arbitrary relation. It began among the Colonies, and grew out of common origin, mutual sympathies, kindred principles, similar interests, and geographical relations. It was confirmed and strengthened by the necessities of war, and received definite form and character and sanction from the Articles of Confederation. By these, the Union was solemnly declared to “be perpetual.” And when these Articles were found to be inadequate to the exigencies of the country, the Constitution was ordained “to form a more perfect Union.” It is difficult to convey the idea of indissoluble unity more clearly than by these words. What can be indissoluble if a perpetual Union, made more perfect, is not?
The Court’s reasoning is born of mysticism, not legality. Similar reasoning might have been used — and was used — to assert that the Colonies were inseparable from Great Britain. And yet, some of the people of the Colonies put an end to the union of the Colonies and Great Britain, on the moral principle that the Colonies were not obliged to remain in an abusive relationship. That moral principle is all the more compelling in the case of the union known as the United States, which — mysticism aside — is nothing more than the creature of the States.
In fact, the Constitution supplanted the Articles of Confederation and Perpetual Union, by the will of only nine of the thirteen States. Madison says this in Federalist No. 43 regarding that event:
On what principle the Confederation, which stands in the solemn form of a compact among the States, can be superseded without the unanimous consent of the parties to it? . . .
The . . . question is answered at once by recurring to the absolute necessity of the case; to the great principle of self-preservation; to the transcendent law of nature and of nature’s God, which declares that the safety and happiness of society are the objects at which all political institutions aim, and to which all such institutions must be sacrificed.
[a] rightful secession requires the consent of the others [other States], or an abuse of the compact, absolving the seceding party from the obligations imposed by it.
An abuse of the compact most assuredly legitimates withdrawal from it, on the principle of the preservation of liberty, especially if that abuse has been persistent and shows no signs of abating. The abuse, in this instance, has been and is being committed by the national government.
The national government is both a creature of the Constitution and a de facto party to it, as co-sovereign with the States and supreme in its realm of enumerated and limited powers. One of those powers enables the Supreme Court of the United States to decide “cases and controversies” arising under the Constitution, which is but one of the ways in which the Constitution makes the national government a party to the constitutional contract. More generally, the high officials of the national government acknowledge that government’s role as a party to the compact — and the limited powers vested in them — when they take oaths of office requiring them to uphold the Constitution.
Those high officials have nevertheless committed myriad abuses of the national government’s enumerated and limited powers. The abuses are far too numerous to list in their entirety. The following examples amply justify the withdrawal of the State of _______________ from the compact:
A decennial census is authorized in Article I, Section 2, for the purpose of enumerating the population of each State in order to apportion the membership of the House of Representatives among the States, and for none of the many intrusive purposes since sought by the executive branch and authorized by Congress.
Article I, Section 1, vests all legislative powers of the national government in the Congress, but Congress has authorized and allowed agencies of the executive branch to legislate, in the guise of regulation, on a broad and seemingly limitless range of matters affecting the liberty and property of Americans.
Further, in violation of Article III, which vests the judicial power of the national government in the judicial branch, Congress has authorized and allowed agencies of the executive branch to adjudicate matters about which they have legislated, thus creating conflicts of interest that have systematically deprived millions of Americans of due process of law.
Article I, Section 8, enumerates the specific powers of Congress, which exclude many things that Congress has authorized with the cooperation and acquiescence of the other branches; for example, establishing and operating national welfare and health-care programs; intervening in the education of Americans’ children in practically every village, town, and city in the land; intrusively regulating not only interstate commerce but also intrastate commerce, the minutiae of manufacturing, and private, non-commercial transactions having only a faint bearing, if any, on interstate commerce; making and guaranteeing loans, including loans by quasi-governmental institutions and other third parties; acquiring the stock and debt of business enterprises; establishing a central bank with the power to do more than issue money; requiring the States and their political subdivisions to adopt uniform laws on matters that lie outside the enumerated powers of Congress and beyond the previously agreed powers of the States and their subdivisions; and coercing the States and their political subdivisions into the operation of illegitimate national programs by providing, and threatening to withhold, so-called federal money, which is in fact taxpayers’ money. The view that the “general welfare” and/or “necessary and proper” clauses of Article I, Section 8, authorize such activities was refuted definitively in advance of the ratification of the Constitution by James Madison in Federalist No. 41, wherein the leading proponents of the Constitution stated their understanding of the Constitution’s meaning when they made the case for its ratification.
One of the provisions of Article I, Section 10, prohibits interference by the States in private contracts; moreover, the Constitution nowhere authorizes the national government to interfere in private contracts. Yet, directly and through the States, the national government has allowed, encouraged, and required interference in private contracts pertaining to employment, property, and financial transactions.
Contrary to the express words of Article II, which vests executive power in the president, Congress has vested executive power in agencies that are not under the control and supervision of the president.
The Supreme Court, in various holdings, has curtailed the president’s ability, as commander-in-chief, to defend Americans and their interests by circumscribing his discretionary authority over the capture, detention, and interrogation of enemy prisoners taken in the course of ongoing hostilities pursuant to a congressional declaration of war or authorization for the use of military force, and over the imposition of appropriate military punishment for their offenses against the law of war.
Amendment I of the Constitution provides that “Congress shall make no law . . . abridging the freedom of speech.” But Congress has nevertheless abridged the freedom of political speech by passing bills that have been signed into law by presidents of the United States and not entirely struck down by the Supreme Court of the United States.
Amendment IX of the Constitution provides that its “enumeration . . . of certain rights, shall not be construed to deny or disparage others retained by the people.” But Congress, in concert with various presidents and Supreme Court majorities, has enacted laws that circumscribe such time-honored rights as freedom of association, freedom of contract, and property rights. That such laws were enacted for the noble purpose of ending some outward manifestations of discrimination does not exempt them from the purview of Amendment IX. As Amendment XIII attests, freedom is for all Americans, not just those who happen to be in favor at the moment.
As outlined above, the national government has routinely and massively violated Amendment X, which states that “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.”
We, therefore, the representatives of the people of _______________ do solemnly publish and declare that this State ought to be free and independent; that it is absolved from all allegiance to the government of the United States; that all political connection between it and the government of the United States is and ought to be totally dissolved; and that as a free and independent State it has full power to levy war, conclude peace, contract alliances, establish commerce, and to do all other acts and things which independent States may of right do. And for the support of this Declaration, with a firm reliance on the protection of divine Providence, we mutually pledge to each other our lives, our fortunes and our sacred honor.
Dobbs is the case in which the Supreme Court overturned the anti-life rulings in Roe v. Wade and Planned Parenthood v. Casey. In West Virginia v. EPA the Court struck down the Environmental Protection Agency’s economically destructive overreach in the regulation of carbon dioxide emissions. And in Bruen the Court rejected a New York law restricting the right to bear arms.
U.S. Representative Maxine Waters (D – CA 43) captured the left’s reaction to these rulings when she said (in connection with Dobbs) “To hell with the Supreme Court. We will defy them.”
If the left succeeds in overturning or circumventing the Court’s decisions in these matters, conservatives — already outraged by leftist lunacy — will be livid.
If the left doesn’t succeed, its vilification of America and America’s political traditions will continue. Memes like “burn America to the goddam ground” will grow in popularity on social media and will spread to left-wing “news” outlets. The loss of the House in November 2022 and (very possibly) the Senate and White House in November 2024 will only intensify the left’s rage. Perhaps — like New England and the abolitionists of two centuries ago — Deep-Blue States will instigate a secession movement.
It would be wise, at that point, for those States with strong conservative governance to propose a national divorce. Leftists could have their own way in their part of the continent, and conservatives could be left in peace in their part of the continent. Let’s call these groupings Governmentland and Freedomland.
There would be some messy details to sort out. Foremost among them would be the question of defense. But it seems to me that if Governmentland shirks its share of the burden, Freedomland could easily afford a robust defense after having shed the many useless departments and agencies — and their policies — that burden taxpayers and the economy.
Further, a Freedomland foreign policy that is unfettered from the United Nations, and based on strength rather than diplomacy, would be a refreshing and fruitful departure from eight decades of feckless interventionism.
Because Freedomland would exist to foster the freedom and prosperity of its own citizens, it would have strict controls on entry. Visitors and temporary workers would be vetted and strictly monitored. Prospective immigrants (including those from Governmentland) would be kept out by physical and electronic barriers, and would be vetted before they enter the country. Citizenship would be granted only after an applicant has demonstrated his ability to support himself (and his family if he has one in country), perhaps with the help of churches and charitable organizations. Non-citizens would be ineligible to vote, of course, and new citizens would have to have held citizenship for 10 years before being allowed to vote. (By that time, one would hope, they would have been weaned from any allegiance to or dependence on a nanny state.)
What about trade between Governmentland and Freedomland? Self-sufficiency should be the watchword for Freedomland. It should not outsource energy, technology, or other products and services that are essential to defense. Some outsourcing may be necessary in the beginning, but there should be a deliberate movement toward self-sufficiency.
Freedomland’s constitution could be modeled on this one, though with some revisions to accommodate points made above.
Finally, why is a national divorce a matter of urgency? Complete victory for the enemies of liberty is only ever a few elections away. The squishy center of the American electorate — as is its wont — will eventually swing back toward the Democrat Party. With a competent Democrat in the White House, a Congress that is firmly controlled by Democrats, and a few party switches in the Supreme Court, the dogmas of the left will be stamped upon the land; for example:
Billions and trillions of additional dollars will be wasted on various “green” projects, including but far from limited to the complete replacement of fossil fuels by “renewables”, with the resulting impoverishment of most Americans, except for the comfortable elites who press such policies.
It will be illegal to criticize, even by implication, such things as abortion, illegal immigration, same-sex marriage, transgenderism, anthropogenic global warming, or the confiscation of firearms. These cherished beliefs will be mandated for school and college curricula, and enforced by huge fines and draconian prison sentences (sometimes in the guise of “re-education”).
Any hint of Christianity or Judaism will be barred from public discourse, and similarly punished. Other religions will be held up as models of unity and tolerance.
Reverse discrimination in favor of females, blacks, Hispanics, gender-confused persons, and other “protected” groups will become overt and legal. But “protections” will not apply to members of such groups who are suspected of harboring libertarian or conservative impulses.
Sexual misconduct will become a crime, and any male person may be found guilty of it on the uncorroborated testimony of any female who claims to have been the victim of an unwanted glance, touch (even if accidental), innuendo (as perceived by the victim), etc.
There will be parallel treatment of the “crimes” of racism, anti-immigrationism, anti-Islamism, nativism, and genderism.
All health care in the United States will be subject to review by a national, single-payer agency of the central government. Private care will be forbidden, though ready access to doctors, treatments, and medications will be provided for high officials and other favored persons. The resulting health-care catastrophe that befalls most of the populace (like that of the UK) will be shrugged off as a residual effect of “capitalist” health care.
The regulatory regime will rebound with a vengeance, contaminating every corner of American life and regimenting all businesses except those daring to operate in an underground economy. The quality and variety of products and services will decline as their real prices rise as a fraction of incomes.
The dire economic effects of single-payer health care and regulation will be compounded by massive increases in other kinds of government spending (defense excepted). The real rate of economic growth will approach zero.
The United States will maintain token armed forces, mainly for the purpose of suppressing domestic uprisings. Given its economically destructive independence from foreign oil and its depressed economy, it will become a simulacrum of the USSR and Mao’s China — and not a rival to the new superpowers, Russia and China, which will largely ignore it as long as it doesn’t interfere in their pillaging of their respective spheres of influence. A policy of non-interference (i.e., tacit collusion) will be the order of the era in Washington.
Though it would hardly be necessary to rig elections in favor of Democrats, given the flood of illegal immigrants who will pour into the country and enjoy voting rights, a way will be found to do just that. The most likely method will be election laws requiring candidates to pass ideological purity tests by swearing fealty to the “law of the land” (i.e., abortion, unfettered immigration, same-sex marriage, freedom of gender choice for children, etc., etc., etc.). Those who fail such a test will be barred from holding any kind of public office, no matter how insignificant.
Are my fears exaggerated? I doubt it. I have lived long enough and seen enough changes in the political and moral landscape of the United States to know that what I have sketched out can easily happen within a decade after Democrats seize total control of the national government. And that can happen given the fickleness of the electorate.
VII. A RADICAL REMEDY: SEEING THE CONSTITUTION FOR WHAT IT REALLY IS
All of the foregoing is predicated on the validity of the Constitution as the supreme law of the land. But there is a good case to be made that the Constitution is no more valid than a bankrupt gambler’s I.O.U. What follows draws and expands upon “Another Way to Declare Independence” and repeats some of the arguments made earlier in this post.
A. The Constitution’s Standing with “the People”
It has long been glaringly evident that a large and vocal fraction of U.S. citizens rejects the Constitution’s blueprint for liberty. The blueprint delineates an edifice with these essential features: the horizontal and vertical separation of powers; a central government of limited and enumerated powers; coequal branches of that government, each possessing the ability to restrain the others; and the reservation to the States and the people of the powers not expressly granted to the central government.
More important than the edifice, perhaps, is the foundation upon which it was built: the Judeo-Christian tradition, generally, and the predominantly British roots of the signatories, in particular. As one blogger puts it, “underlying cultural assumptions are just as important to the success of a republic as its political structures.”
Then, there are the Constitution’s underlying — and largely forgotten — purposes. Two of the main ones were to keep the tenuous union of 1776 from flying apart because of sectional differences, and to defend the union militarily without depending on the whims of the various State legislatures.
The Framers’ brilliant scheme worked well for about 140 years, despite the depredations of the Progressive Era, which encompassed the imperial presidencies of Theodore Roosevelt and Woodrow Wilson. Then came the New Deal and it has since been all downhill for the Constitution.
The cultural underpinnings of the Constitution didn’t begin to rot until the onset of America’s societal and cultural degeneration in the 1960s. It was then that political polarization began, and it has accelerated with the passage of time (despite the faux unity that followed 9/11).
The Constitution is positive law, that is, law constructed by formal institutions (e.g., Congress, the Supreme Court), as opposed to natural law, which arises from human coexistence — the Golden Rule, for example. Natural law has moral standing because it appeals to and flows from human nature. Positive law may, by chance, be derived from natural law (e.g., murder is a crime), but it is a contrivance that can just as easily contravene natural law (e.g., the murder of an unborn human being is not a crime).
The myriad statutes, ordinances, regulations, executive orders, and judicial judgments that proscribe the behavior of Americans are positive law. Most of this body of positive law is designed to benefit or satisfy special interests or political ideologies. It has little to do with how human beings would behave were they free to do so, and were mindful of how their behavior would affect others and the behavior of others toward themselves. A great deal of this positive law exists because it has been imposed in the name of the Constitution or some “emanation” from it.
But the Constitution had no moral claim on most of the Americans living at the time of its adoption. And it has no moral claim on any American now living.
On this point, I turn to Lysander Spooner, anarchist extraordinaire. He begins No Treason (1867) with this:
The Constitution has no inherent authority or obligation. It has no authority or obligation at all, unless as a contract between man and man. And it does not so much as even purport to be a contract between persons now existing. It purports, at most, to be only a contract between persons living eighty years ago. And it can be supposed to have been a contract then only between persons who had already come to years of discretion, so as to be competent to make reasonable and obligatory contracts. Furthermore, we know, historically, that only a small portion even of the people then existing were consulted on the subject, or asked, or permitted to express either their consent or dissent in any formal manner. Those persons, if any, who did give their consent formally, are all dead now. Most of them have been dead forty, fifty, sixty, or seventy years. And the constitution, so far as it was their contract, died with them. They had no natural power or right to make it obligatory upon their children. It is not only plainly impossible, in the nature of things, that they could bind their posterity, but they did not even attempt to bind them. That is to say, the instrument does not purport to be an agreement between any body but “the people” then existing; nor does it, either expressly or impliedly, assert any right, power, or disposition, on their part, to bind anybody but themselves. Let us see. Its language is:
We, the people of the United States (that is, the people then existing in the United States), in order to form a more perfect union, insure domestic tranquillity, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves and our posterity, do ordain and establish this Constitution for the United States of America.
It is plain, in the first place, that this language, as an agreement, purports to be only what it at most really was, viz., a contract between the people then existing; and, of necessity, binding, as a contract, only upon those then existing. In the second place, the language neither expresses nor implies that they had any right or power, to bind their “posterity” to live under it. It does not say that their “posterity” will, shall, or must live under it. It only says, in effect, that their hopes and motives in adopting it were that it might prove useful to their posterity, as well as to themselves, by promoting their union, safety, tranquillity, liberty, etc.
In sum, the Constitution is neither a compact between States (as sovereign entities) nor a law adopted by “the people”. It is a contract that was drawn up by a small fraction of the populace of twelve States, and put into effect by a small fraction of the populace of nine States. Its purpose, in good part, was to promote the interests of many of the Framers, who cloaked those interests in the glowing rhetoric of the Preamble (“We the People”, etc.). The other four of the original thirteen States could have remained beyond the reach of the Constitution, and would have done so but for the ratifying acts of small fractions of their populations. (With the exception of Texas, formerly a sovereign republic, States later admitted weren’t independent entities, but were carved out of territory controlled by the government of the United States. Figuratively, they were admitted to the union at the point of a gun.)
C. Spelling It Out
1. Despite their status as “representatives of the people”, the various fractions of the populace that drafted and ratified the Constitution had no moral authority to bind all of their peers, and certainly no moral authority to bind future generations.
2. The Constitution was and is binding only in the way that a debt to a gangster who demands “protection money” is binding. It was and is binding because state actors have the power to enforce it, as they see fit to interpret it. (One need look no further than the very early dispute between Hamilton and Madison about the meaning of the General Welfare Clause for a relevant and crucial example of interpretative differences.)
3. The Constitution contains provisions that can be and sometimes have been applied to advance liberty. But such applications have depended on the aims and whims of those then in positions of power.
4. It is convenient to appeal to the Constitution in the cause of liberty — and I have often done just that — but this does not change the fact that the Constitution was not and never will be a law enacted by “the people” of the United States or any State thereof.
5. Any person and any government in the United States may therefore, in principle, reject the statutes, executive orders, and judicial holdings of the United States government (or any government) as non-binding.
6. Secession is one legitimate form of rejection (though the preceding discussion clearly implies that secession by a State government is morally binding only on those who assent to the act of secession).
7. The ultimate and truly legitimate form of rejection is civil disobedience — the refusal of individual persons, or voluntary groupings of them (e.g., family, church, club, and other institutions of civil society), to abide by positive law when it infringes on natural law and liberty.
8. States and municipalities governed by leftists are engaging in institutional civil disobedience (e.g., encouragement of illegal immigration; spiteful adoption of aggressive policies to combat “climate change” and to circumvent the Second Amendment; an organized effort to undermine the Electoral College; a conspiracy by state actors to thwart the election of Trump, to oust him from the presidency, to prevent him from running again, and to discredit anyone who subscribes to his view of the “deep state” as an enemy of liberty).
9. The lesson for defenders of liberty is to do what the left is doing, and to do it aggressively. When the left regains control of the White House and Congress — as it will given the mindlessness of most American voters — conservatives must be prepared to resist the edicts emanating from Washington (unless a national divorce can be arranged). The best way to prepare is to emulate and expand on the examples mentioned above. The best defense is a good offense: Dare Washington to deploy its weaponry in the service of slavery.
10. Slavish obedience to the edicts of the central government is neither required by the dead Constitution nor in keeping with moral principles. Those principles put traditional morality and voluntarily evolved social norms above the paper promises of the Constitution. In fact, those paper promises are valid only insofar as they foster the survival of traditional morality and voluntarily evolved norms.
I used to correspond with a fellow whom I had known for 50 years. He’s a pleasant person with a good sense of humor and an easy-going personality. He’s also a chameleon.
By which I mean that he takes on the ideological coloration of his surroundings. He agrees with his companions of the moment. It’s therefore unsurprising that he proudly calls himself a “centrist”. Though he wouldn’t put it this way, his centrism involves compromises between good and evil — the necessary result of which is more evil.
“Centrist,” in his case, is just another word for collabo.
An exchange from six years ago will tell you all that you need to know about him. It began with an e-mail from a third party, in which this was quoted:
IF YOU HAD A HUNCH THE NEWS SYSTEM WAS SOMEWHAT RIGGED AND YOU COULDN’T PUT YOUR FINGER ON IT, THIS MIGHT HELP YOU SOLVE THE PUZZLE.
ABC News executive producer Ian Cameron is married to Susan Rice, National Security Adviser.
CBS President David Rhodes is the brother of Ben Rhodes, Obama’s Deputy National Security Adviser for Strategic Communications.
ABC News correspondent Claire Shipman is married to former White House Press Secretary Jay Carney.
ABC News and Univision reporter Matthew Jaffe is married to Katie Hogan, Obama’s Deputy Press Secretary.
ABC President Ben Sherwood is the brother of Obama’s Special Adviser Elizabeth Sherwood.
CNN President Virginia Moseley is married to former Hillary Clinton’s Deputy Secretary Tom Nides.
Ya think there might be a little bias in the news?
The chameleon’s comment:
I share your concern about MSM bias, but am not as troubled by it. (I stopped watching the Big 3s’ evening news 50 years ago because I couldn’t get a straight view on the Vietnam War.)
My comment on his comment:
You may have stopped watching, and I did too, but millions haven’t. And too many of them are swallowing it whole, which is a big reason for the leftward drift of the country over the past 50 years. (JFK could pass for a conservative today.) So I’m very troubled by it.
His reply to me:
But at my absolute center is a belief in universal suffrage. In a nation of 150m or so (potential) voters, tens of millions are going to be swayed by CBS or, egads, Fox. If it weren’t those sources, it would be something else like them.
I can’t fix that, and see trying as futile. That’s why I’m not troubled. (My lack of concern also stems from seeing the USA as fundamentally on the right track. The latest evidence for that is the rejection of Trump about to occur. And yes, we’ll get Hillary’s excesses in consequence — but Congress will put on the brakes. We survived the Carter presidency when I’d have preferred Ford.)
Let’s parse that.
But at my absolute center is a belief in universal suffrage. What’s sacred about universal suffrage? If suffrage should encompass everyone who’s looking for a free ride at the expense of others — which it does these days — it should certainly include children and barnyard animals. Why should suffrage of any kind be the vehicle for violating constitutional limits on the power of the central government? That’s what it has come to, inasmuch as voters since the days of TR (at least) have been enticed to elect presidents and members of Congress who have blatantly seized unconstitutional powers, with the aid of their appointed lackeys and the connivance of a supine Supreme Court.
In a nation of 150m or so (potential) voters, tens of millions are going to be swayed by CBS or, egads, Fox. If it weren’t those sources, it would be something else like them. True, and all the more reason to keep the power of the central government within constitutional limits.
I can’t fix that, and see trying as futile. That’s why I’m not troubled. You, and I, and every adult can strive to “fix it” in ways big and small. Voting is one way, though probably the least effective (as an individual act). Speaking and writing on the issues is another way. I blog in the hope that some of what I say will trickle into the public discourse.
My lack of concern also stems from seeing the USA as fundamentally on the right track. It’s on the right track only if you think that the decades-long, leftward movement toward a powerful, big-spending, paternalistic government is the right track. That may very well suit a lot of people, but it also doesn’t suit a lot of people. Even FDR never won more than 61 percent of the popular vote, and his numbers dwindled as time went on. But perhaps you’re a utilitarian who believes that the pleasure A obtains from poking B in the eye somehow offsets B’s pain. You may not believe that you believe it, but that’s the import of your worship of universal suffrage, which is nothing more than blind allegiance to the primitive kind of utilitarianism known as majority rule.
The latest evidence for that is the rejection of Trump about to occur. Trump hasn’t yet lost, and even if he does, that won’t be evidence of anything other than desperation on the part of the operatives of the regulatory-welfare state and their various constituencies. Rejection, in any case, would be far from unanimous, so rejection is the wrong word — unless you believe, as you seem to do, that there’s a master “social conscience” which encompasses all Americans.
And yes, we’ll get Hillary’s excesses in consequence — but Congress will put on the brakes. Not if the Dems gain control of the Senate (a tie will do it if HRC is elected), and the ensuing Supreme Court appointees continue to ratify unconstitutional governance.
We survived the Carter presidency when I’d have preferred Ford. There have been more disastrous presidencies than Carter’s; why not mention them? In any event, “survival” only means that the nation hasn’t yet crashed and burned. It doesn’t mean that there hasn’t been irreparable damage. Mere survival is a low hurdle (witness the Soviet Union, which survived for 74 years). Nor is mere survival an appropriate standard for a nation with as much potential as this one — potential that has been suppressed by the growth of the central government. So much loss of liberty, so much waste. That’s why I’m troubled, even if I can do little or nothing about it.
I didn’t send the reply because I’m too nice a guy. And because it would be pointless to challenge anyone who is so morally obtuse — as many subsequent exchanges confirmed.
But I did finally quit corresponding with him. Enough is enough. The time I wasted reading and responding to his missives is now better spent writing posts for this blog.
The original “progressives” started it, and it’s still going strong.
Can the economic cost of government be estimated? I have done it for the post-World War II era, in an analysis that takes account of the fraction of GDP absorbed by federal, State, and local governments; the rate at which the federal government issues new regulations; the constant-dollar value of business investment (which is influenced by government spending and regulation); and changes in the consumer price index.
But that analysis doesn’t tell the whole story. In fact, it leaves untold the big story: The United States has been in a mega-depression since 1907.
Consider the following graph, which is derived from estimates of constant-dollar GDP per capita that are available here:
There are four eras, as shown by the legend (1942-1946 omitted because of the vast economic distortions caused by World War II):
1866-1907 — annual growth of 2.0 percent — A robust economy, fueled by (mostly) laissez-faire policies and the concomitant rise of industry, mass production, technological innovation, and entrepreneurship.
1908-1941 — annual growth of 1.4 percent — A dispirited economy, shackled by the fruits of “progressivism”; for example, trust-busting; the onset of governance through regulation; the establishment of the income tax; the creation of the destabilizing Federal Reserve; and the New Deal, which prolonged the Great Depression.
1947-2007 — annual growth of 2.2 percent — A rejuvenated economy, buoyed by the end of the New Deal and the fruits of advances in technology and business management. The rebound in the rate of growth meant that the earlier decline wasn’t the result of an “aging” economy, which is an inapt metaphor for a living thing that is constantly replenished with new people, new capital, and new ideas.
2008-2021 — annual growth of 1.0 percent — An economy sagging under the cumulative weight of the fruits of “progressivism” (old and new); for example, the never-ending expansion of Medicare, Medicaid, and Social Security; and an ever-growing mountain of regulatory restrictions on business. (In a similar post, which I published in 2009, I wrote presciently that “[u]nless Obama’s megalomaniacal plans are aborted by a reversal of the Republican Party’s fortunes, the U.S. will enter a new phase of economic growth — something close to stagnation.”)
Had the economy of the U.S. not been deflected from the course that it was on from 1866 to 1907, per capita GDP would now be about 1.4 times its present level. Compare the position of the dashed green line in 2021 — $83,000 — with per capita GDP in that year — $58,000.
If that seems unbelievable to you, it shouldn’t. A growing economy is a kind of compound-interest machine; some of its output is invested in intellectual and physical capital that enables the same number of workers to produce more, better, and more varied products and services. (More workers, of course, will produce even more products and services.) As the experience of 1947-2007 attests, nothing other than government interventions (or a war far more devastating to the U.S. than World War II) could have kept the economy from growing along the path of 1866-1907. (I should add that economic growth in 1947-2007 would have been even greater than it was but for the ever-rising tide of government interventions.)
The sum of the annual gaps between what could have been (the dashed green line) and the reality after 1907 (omitting 1942-1946) is almost $700,000 — that’s per person in 2012 dollars. It’s $800,000 per person in 2021 dollars, and even more in 2022 dollars.
That cumulative gap represents our mega-depression.
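For concreteness, here is a minimal sketch of the arithmetic behind the dashed green line and the cumulative gap. It simply compounds the 1866-1907 trend rate forward from 1907 and sums the annual shortfalls. The function names, the 8,700 starting value, and the placeholder series are my own illustrative devices, not figures taken from the Measuring Worth data; the real inputs would come from that source.

# A sketch, not the actual spreadsheet behind the chart: compound the
# 1866-1907 trend (2.0 percent per year) forward from 1907 and compare
# it with the actual series of constant-dollar (2012) GDP per capita.

TREND_RATE = 0.02   # annual growth rate, 1866-1907
BASE_YEAR = 1907

def trend_value(base_value, year):
    """Per capita GDP had the 1866-1907 trend continued past 1907."""
    return base_value * (1 + TREND_RATE) ** (year - BASE_YEAR)

def cumulative_gap(actual, base_value):
    """Sum of annual gaps between the trend path and the actual path,
    omitting 1942-1946 because of wartime distortions."""
    return sum(trend_value(base_value, year) - value
               for year, value in actual.items()
               if not 1942 <= year <= 1946)

# Illustrative use (these numbers are placeholders, not the source data):
# actual = {1908: ..., 1909: ..., 2021: 58_000}   # from Measuring Worth
# print(trend_value(8_700, 2021))       # roughly the dashed green line in 2021
# print(cumulative_gap(actual, 8_700))  # the per-person gap in 2012 dollars

Compounding at 2.0 percent for the 114 years from 1907 to 2021 multiplies the starting value by roughly 9.5, which is the sense in which a growing economy is a compound-interest machine.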
As Frédéric Bastiat put it in “That Which Is Seen, and That Which Is Not Seen”:
Have you ever chanced to hear it said “There is no better investment than taxes. Only see what a number of families it maintains, and consider how it reacts on industry; it is an inexhaustible stream, it is life itself.” . . .
The advantages which officials advocate are those which are seen. The benefit which accrues to the providers is still that which is seen. This blinds all eyes.
But the disadvantages which the tax-payers have to get rid of are those which are not seen. And the injury which results from it to the providers, is still that which is not seen, although this ought to be self-evident.
When an official spends for his own profit an extra hundred sous, it implies that a tax-payer spends for his profit a hundred sous less. But the expense of the official is seen, because the act is performed, while that of the tax-payer is not seen, because, alas! he is prevented from performing it.
In the case of aggregate economic activity, what we see is what has been left to us by government. What we do not see is the extent to which the fruits of labor taken from us by government and the restrictions placed upon economic activity by government have deprived the economy of entrepreneurship, innovation, technology, and productive capacity. The cumulative effect of those deprivations — that which we do not see — dwarfs the Great Depression.
Rod Dreher, in the course of a premature paean to Barack Obama’s “diplomatic” approach to ideological strife, wrote this:
The source of our culture war is conflicting visions of what it means to be free and what it means to be an American – and even what it means to be fully human. More concretely, as Princeton’s Robert George has written, they have to do mainly “with sexuality, the transmitting and taking of human life, and the place of religion and religiously informed moral judgment in public life.” Because the cultural left and cultural right hold to irreconcilable orthodoxies on these questions, we find scant cultural consensus. That’s life in America. Unless we become a homogeneous country, we will continue to struggle to live together, staying true to our deepest beliefs while respecting the liberty of others to stay true to their own. But we do not live in a libertarian Utopia. We can’t have it all. If, for example, courts constitutionalized same-sex marriage, as gay activists seek, that would have a ground-shaking effect on religious liberty, public schooling and other aspects of American life. Without question, it would intensify the culture war, as partisans of the left and right fight for what each considers a sacred principle. What irritates conservatives is the liberals’ groundless conceit that they fight from a values-neutral position, while the right seeks to impose its norms on others. Nonsense. Marriage was a settled issue until liberals began using courts to impose their moral vision on (so far) an unwilling majority. Who fired the first shot there? [“Obama Won’t End the Culture Wars”, RealClearPolitics, February 16, 2009]
Dreher was wrong about the good faith in which the left pursues its agenda, but he was certainly right about the earth-shattering effects of the constitutionalization of same-sex “marriage”.
In any event, it doesn’t matter whether the unwilling upon whom the left’s agenda is imposed are a majority or a minority. Just about everyone is a loser in the war against morality and liberty. When social norms — long-established rules of behavior — are sundered willy-nilly, the result is a breakdown of the voluntary order known as civil society. The ability to live a peaceful, happy, and even prosperous life depends on the norms of civil society. That is so because it is impossible and — more importantly — undesirable for the state to police everyone’s behavior.
Liberty depends, therefore, on the institutions of civil society — family, church, club, and the like — through which individuals learn to treat one another with respect, through which individuals often come to the aid of one another, and through which instances of disrespect can be noted, publicized, and even punished (e.g., by criticism and ostracism). That is civil society, which the state ought to protect, but instead usurps and destroys.
Usurping the functions of civil society is one of the state’s primary (and illegitimate) objectives. The state establishes agencies (e.g., public schools, welfare), gives them primary and even sole jurisdiction in many matters, and funds them with tax money that could have gone to private institutions. Worse, however, is the way in which the state destroys the social norms that foster social harmony — mutual respect and trust — without which a people cannot flourish. As I observed some years ago, in connection with same-sex “marriage”:
Given the signals being sent by the state, the rate of formation of traditional, heterosexual marriages will continue to decline. (According to the Census Bureau, the percentage of adult males who are married dropped steadily from 71.1 percent in the 1960 census to 58.6 percent in the 2000 census; for females, the percentage dropped from 67.4 to 54.6. About half of each drop is explained by a rise in the percentage of adults who never marry, the other half by a rise in the percentage of divorced adults. Those statistics are what one should expect when the state signals — as it began to do increasingly after 1960 — that traditional marriage is no special thing by making it easier for couples to divorce, by subsidizing single mothers, and by encouraging women to work outside the home.)
“Thanks” to the signals sent by the state in the form of legislative, executive, and judicial dictates, we now have not just easy divorce, subsidized illegitimacy, and legions of non-mothering mothers, but also abortion, concerted (and deluded) efforts to defeminize females and to neuter or feminize males, forced association (with accompanying destruction of property and employment rights), suppression of religion, absolution of pornography, and the encouragement of “alternative lifestyles” that feature disease, promiscuity, and familial instability. The state, of course, doesn’t act of its own volition. It acts at the behest of special interests — interests with a “cultural” agenda. Dreher calls them liberals. I call them left-statists. They are bent on the eradication of civil society — nothing less — in favor of a state-directed Rousseauvian dystopia from which morality and liberty will have vanished, except in Orwellian doublespeak.
Liberty disappears slowly at first, then suddenly.
In “Social Norms and Liberty” I list 24 moral precepts that would prevail in a polity that could be fairly described as a regime of liberty: a regime of voluntary and mutually beneficial coexistence based on mutual trust, respect, and forbearance.
The list isn’t exhaustive, and some of its elements are far more complex than others, but it will do as a starting point for this exercise, which is to explain conceptually how a polity like the United States goes from liberty (or something far closer to it than is now the case) to tyranny (which impends).
If the general observance of 24 precepts creates the conditions for liberty, what happens when one of them is discarded? There are 23 left, and 23 is almost 24, so something like liberty may still obtain.
But in the real world of slippery slopes (which is the world that I have inhabited for more than 80 years), the discarding of one precept becomes the starting point for the discarding of more. In fact, the precept of self-reliance and — when necessary — reliance on the organs of civil society, to the exclusion of government, had been badly eroded by the New Deal before I was born. The further diminution of self-reliance and reliance on civil society has become almost complete because of the slippery slope that led from Social Security to Medicare, Medicaid, welfare as an entitlement, food stamps, etc., etc. etc.
How many of the 24 precepts remain generally observed? None of them, by my count. How many others that I didn’t list have also fallen by the wayside? Most of them, probably, based on the present state of American morals and mores.
What caused the precepts to fall by the wayside? A combination of these things did it:
Asymmetrical ideological warfare, which favors the opponents of traditional morality and the proponents of big government (statists), and which both parties use to characterize their opponents, ironically, as “nazis” and “fascists” — which is the height of psychological projection. (See the list of related posts at the bottom of this one.)
The onslaught of permissiveness — as promoted and approved by “educators” and pseudo-psychologists — given official status by the government-ordered abandonment of traditional moral codes (e.g., no-fault divorce; filthy speech as free speech; the legalization of abortion, sodomy, and same-sex “marriage”)
Growing reliance on government — whether instigated by government’s “beneficiaries”, by “activists”, or by politicians (whether power-hungry or truly compassionate), social engineers (economists, sociologists, etc.), or combinations of these — which fostered the abandonment of some moral codes in the first place and became (ineffective) substitutes for them in the second place (e.g., the practical elimination of the death penalty and its replacement by “life” sentences that aren’t life sentences, accompanied by greater leniency across the board in prosecutions and sentences).
Because the codes of traditional morality are interlocking and mutually reinforcing, they have (or had) a combined power that is (or was) greater than a mere summing of them. For the sake of illustration, the combined power of the 24 precepts could be thought of as the square of 24, which is 576. The removal of one precept therefore reduced the combined power of the remaining precepts to 529, which is not 23/24 of the original power but about 22/24 of it. The removal of half (12) of the precepts reduced the combined power of the remaining precepts to 144, which is only one-fourth the power of the original number. (Don’t take these numbers literally; I’m just illustrating the cumulative effect of abandoning precepts.)
So, the abandonment of a civilizing precept not only encourages the abandonment of others (the slippery slope), but the cumulative effect of the resulting lacunae is out-sized. To return to the mathematical analogy, if one precept of 24 were left standing, it would represent 1/576 of the power of the original 24; that is, it would have no practical effect on the behavior of the citizens of the polity.
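For readers who prefer to see the analogy in one place, here is a minimal sketch of the square-of-the-count arithmetic just described; the quadratic rule is purely illustrative, as noted above, not a measured quantity.

# A toy restatement of the analogy above: treat the "combined power" of n
# mutually reinforcing precepts as n squared, so dropping precepts erodes
# that power faster than a simple head count would suggest.

def combined_power(n_precepts):
    return n_precepts ** 2

full = combined_power(24)            # 576
print(combined_power(23) / full)     # ~0.92 -- about 22/24, not 23/24
print(combined_power(12) / full)     # 0.25 -- one-fourth of the original power
print(combined_power(1) / full)      # ~0.0017 -- 1/576: no practical effect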
Somewhere along the way, government — which plays a central role in the abandonment of civilizing precepts — intervenes to avert utter chaos. But government intervention merely exacerbates the unraveling of social comity because it results in greater reliance on government. To compensate for the loss of civilizing norms, which enable a people to coexist peacefully and cooperatively (with minimal government intervention), government must institute draconian rules, ensure (through surveillance) that they are obeyed, and exact punishments if they are not obeyed. But when the governors are not respectful of traditional morality, the rules are ones that further erode it (e.g., requiring all citizens to “respect” behavior that undermines civilizing norms). The enforcement of anti-social rules requires more government intervention, and on and on until “Big Brother” is watching everyone and brooks no deviation from its edicts.
The same governors — in the same spirit of pseudo-omniscient omnipotence — are emboldened to impose one-size-fits-all rules about economic relationships (e.g., the substitution of “renewables” for reliable and efficient sources of energy). The result, as with social relationships, is economic degradation to match social degradation.
Government, as is always the case under tyranny, casts a glowing light on the degradation and presents it as progress. Which is like putting lipstick on a pig, but — to change the metaphor — none will dare say with impunity that the dictator has no clothes.
I usually eschew “we” and “us” in writing about a collection of persons who happen to be thrown together in a category (e.g., Americans, economists, Republicans). “We” and “us” connote unanimity or collective action, both of which are rare if not impossible in groups that consist of widely varying backgrounds, interests, objectives, biases, etc.
There are nevertheless some things that are so close to being universal that it’s fair to refer to them as characteristics of “we” and “us”. The inadequacy of language is one of those things.
Why is that the case? Try to describe in words a person who is beautiful or handsome to you, and why. It’s hard to do, if not impossible. There’s something about the combination of that person’s features, coloring, expression, etc., that defies anything like a complete description. You may have an image of that person in your mind, and you may know that — to you — the person is beautiful or handsome. But you just can’t capture in words all of those attributes. Why? Because the person’s beauty or handsomeness is a whole thing. It’s everything taken together, including subtle things that nestle in your subconscious mind but don’t readily swim to the surface. One such thing could be the relative size of the person’s upper and lower lips in the context of that particular person’s face; whereas, the same lips on another face might convey plainness or ugliness.
Words are inadequate because they describe one thing at a time — the shape of a nose, the slant of a brow, the prominence of a cheekbone. And the sum of those words isn’t the same thing as your image of the beautiful or handsome person. In fact, the sum of those words may be meaningless to a third party, who can’t begin to translate your words into an image of the person you think of as beautiful or handsome.
Yes, there are (supposedly) general rules about beauty and handsomeness. One of them is the symmetry of a person’s features. But that leaves a lot of ground uncovered. And it focuses on one aspect of a person’s face, rather than all of its aspects, which are what you take into account when you judge a person beautiful or handsome.
And, of course, there are many disagreements about who is beautiful or handsome. It’s a matter of taste. Where does the taste come from? Who knows? I have a theory about why I prefer dark-haired women to women whose hair is blonde, red, or medium-to-light brown: My mother was dark-haired, and photographs of her show that she was beautiful (in my opinion) as a young woman. (Despite that, I never thought of her as beautiful because she was just Mom to me.) You can come up with your own theories — and I expect that no two of them will be the same.
What about facts? Isn’t it possible to put facts into words or into symbols that stand for words? Not really, and for much the same reason that it’s impossible to describe beauty, handsomeness, love, hate, or anything “subjective” or “emotional.” Facts, at bottom, are often subjective, and sometimes based on emotion.
Let’s take a “fact” at random: the color red. We can all agree as to whether something looks red, can’t we? Even putting aside people who are color-blind, the answer is: not necessarily. For one thing, red is defined as having a predominant light wavelength of roughly 620–740 nanometers. “Predominant” and “roughly” are weasel-words. Clearly, there’s no definite point on the visible spectrum where light changes from orange to red. If you think there is, just look at this chart and tell me where it happens. Red comes in shades, which various people describe variously: orange-red and reddish-orange, for example.
And the visible spectrum does not … contain all the colors that the human eyes and brain can distinguish. Unsaturated colors such as pink, or purple variations such as magenta, are absent, for example, because they can be made only by a mix of multiple wavelengths.
Thus we have magenta, fuchsia, blood-red, scarlet, crimson, vermilion, maroon, ruby, and even the many shades of pink — some are blends, some are represented by narrow segments of the light spectrum. Do all of those kinds of red have a clear definition, or are they defined by the beholder? Well, some may be easy to distinguish from others, but the distinctions between them remain arbitrary. Where does scarlet or magenta become vermilion?
In any event, how do you describe a color in words? Referring to its wavelength or composition in terms of other colors or its relation to other colors is no help. Wavelength really is meaningless unless you can show an image of the visible spectrum to someone who perceives colors exactly as you do, and point to red — or what you call red. In doing so, you will have pointed to a range of colors, not to red, because there is no red red and no definite boundary between orange and red (or yellow and orange, or green and yellow, etc.).
Further, you won’t have described red in words. And you can’t — without descending into tautologies — because red (as you visualize it) is what’s in your mind. It’s not an objective fact.
My point is that description isn’t the same as definition. You can define red (however vaguely) as a color which has a predominant light wavelength of roughly 620–740 nanometers. But you can’t describe it. Why? Because red is just a concept that resides in your brain.
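To make the definition-versus-description point concrete, here is a toy sketch that uses the rough 620–740 nanometer definition cited above; the function and its hard cutoff are my own illustration (not a claim about how color naming actually works), and the arbitrariness of that cutoff is exactly the point.

# A toy "definition" of red by wavelength, per the rough range cited above.
# The sharp 620 nm boundary is arbitrary: nothing in perception changes there.

def is_red(wavelength_nm):
    return 620 <= wavelength_nm <= 740

print(is_red(619.9), is_red(620.1))   # False True -- yet no eye could tell them apart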
A concept isn’t a real thing that you can see, hear, taste, touch, smell, eat, drink from, drive, etc. How do you describe a concept? You define it in terms of other concepts, which leads to more uncertainty and is the mental equivalent of tail-chasing.
All right, you say, it’s impossible to describe concepts, but surely it’s possible to describe things. People do it all the time. See that ugly, dark-haired, tall guy standing over there? I’ve already dealt with ugly, indirectly, in my discussion of beauty or handsomeness. Ugliness, like beauty, is just a concept, the idea of which differs from person to person. What about tall? It’s a relative term, isn’t it? You can measure a person’s height, but whether or not you consider him tall depends on where and when you live and the range of heights you’re used to encountering. A person who seems tall to you may not seem tall to your taller brother. Dark-haired will evoke different pictures in different minds — ranging from jet-black to dark brown and even auburn.
But if you point to the guy you call ugly, dark-haired, tall guy, I may agree with you that he’s ugly, dark-haired, and tall. Or I may disagree with you, but gain some understanding of what you mean by ugly, dark-haired, and tall.
And therein lies the secret of how people are able to communicate with each other, despite their inability to describe concepts or to define them without going in endless circles and following endless chains of definitions. First, human beings possess central nervous systems and sensory organs that are much alike, though within a wide range of variations (e.g., many people wear glasses and those glasses come in an almost-infinite variety of corrections; hearing aids are programmed to an almost-infinite variety of settings; sensitivity to touch varies widely; reaction times vary widely; etc.). Nevertheless, most people seem to perceive the same color when light with a wavelength of, say, 700 nanometers strikes the retina. The same goes for sounds, tastes, smells, etc., as various external stimuli are detected by various receptors. Those perceptions then acquire agreed definitions through acculturation. For example, an object that reflects light with a wavelength of 700 nanometers becomes known as red; a sound with a certain frequency becomes known as middle C; a certain taste is characterized as bitter, sweet, or sour.
Objects acquire names in the same way. For example, in American English, a fully grown person who was born female is a “woman”; a square piece of cloth that’s wrapped around a person’s head or neck is a “bandana”; and a longish, curved, yellow-skinned fruit with a soft interior is a “banana”. And so I can visualize a woman wearing a red bandana and eating a banana.
There is less agreement about “soft” concepts (e.g., beauty) because they’re based not just on “hard” facts (e.g., the wavelength of light, biological sex), but on judgments that vary from person to person. A face that’s cute to one person may be beautiful to another person, but there’s no rigorous division between cute and beautiful. Both convey a sense of physical attractiveness that many persons will agree upon, but which won’t yield a consistent image. A very large percentage of Caucasian males (of a certain age) would agree that Ingrid Bergman and Hedy Lamarr were beautiful, but there’s nothing like a consensus about Katharine Hepburn (perhaps striking but not beautiful) or Jean Arthur (perhaps cute but not beautiful).
Other concepts, like GDP, acquire seemingly rigorous definitions, but they’re based on strings of definitions, the underpinnings of which may be as squishy as the flesh of a banana (e.g., the omission from GDP of the value of housework and the effects of pollution). So if you’re familiar with the definitions of the definitions, you have a good grasp of the concepts. If you aren’t, you don’t. But if you have a good grasp of the numbers underlying the definitions of definitions, you know that the top-level concept is actually vague and hard to pin down. The numbers not only omit important things but are only estimates, and often are estimates of disparate things that are grouped because they’re judged to be enough alike (which is vagueness on stilts).
Acculturation in the form of education is a way of getting people to grasp concepts that have widely agreed definitions. Mathematics, for example, is nothing but concepts, all the way down (though it starts with something real: the counting of things). And to venture beyond arithmetic is to venture into a world of ideas that’s held together by definitions that rest upon definitions and end in nothing real. Unless you’re one of those people who insists that mathematics is the “real” stuff of which the universe is made, which is nothing more than a leap of faith. (Math, by the way, is nothing but words in shorthand.)
And so, human beings are able to communicate and (usually) understand each other (if they speak the same language) because of their physical and cultural similarities, which include education in various and sundry subjects. Those similarities also enable people of different cultures and languages to translate their concepts (and the words that define them) from one language to another.
Those similarities also enable people to “feel” what another person is feeling when he says that he’s happy, sad, drunk, or whatever. There’s the physical similarity — the physiological changes that usually occur when a person becomes what he thinks of as happy, etc. And there’s acculturation — the acquired knowledge that people feel happy (or whatever) for certain reasons (e.g., a marriage, the birth of a child) and display their happiness in certain ways (e.g., a broad smile, a “jump for joy”).
A good novelist, in my view, is one who knows how to use words that evoke vivid mental images of the thoughts, feelings, and actions of characters, and the settings in which the characters act out the plot of a novel. A novelist who can do that and also tell a good story — one with an engaging or suspenseful plot — is thereby a great novelist.
But good and great novelists are thin on the ground. That is to say, there are relatively few persons among us who are able to grasp and communicate effectively a broad range of the kinds of thoughts and feelings that lurk in the minds of human beings. And even those few have their blind spots. Most of them, it seems to me, are persons of the left, and are therefore unable to empathize with the thoughts and feelings of the working-class people who seethe with resentment about the fawning and favoritism toward blacks, illegal immigrants, gender-confused persons, and other so-called victims. In fact, those few otherwise perceptive and articulate writers make it a point to write off the working-class people as racists, bigots, and ignoramuses.
Which just underscores my point about the impossibility of objectively forming our thoughts and feelings about the world around us and the other people in it. And we’re practically tongue-tied when it comes to expressing those thoughts and feelings to others. And our feelings — such as our political preferences, which probably are based more on temperament than on facts — get in the way.
Love, to take a leading example, is a feeling that just is. The why and wherefore of it is beyond our ability to understand and explain. Some of the feelings attached to it can be expressed in prose, poetry, and song, but those are superficial expressions that don’t capture the depth of love and why it exists.
The world of science is of no real help. Even if feelings of love could be expressed in scientific terms — the action of hormone A on brain region X — that would be worse than useless. It would reduce love to chemistry, when we “know” that there’s more to it than that. Why, for example, is hormone A activated by the presence or thought of person M but not person N, even if they’re identical twins?
The world of science is of no real help about “getting to the bottom of things”. Science is an infinite regress. S is explained in terms of T, which is explained in terms of U, which is explained in terms of V, and on and on. For example, there was the “indivisible” atom, which turned out to consist of electrons, protons, and neutrons. But electrons have turned out to be more complicated than originally believed, and protons and neutrons have been found to be made of smaller particles with distinctive characteristics. So it’s reasonable to ask if all of the particles now considered elementary are really indivisible. Perhaps there are other, more-elementary particles yet to be hypothesized and discovered. And even if all of the truly elementary particles are discovered, scientists will still be unable to explain what those particles really “are.”
And even if scientists get to the bottom of elementary particles, so to speak, will they know when they have done so? And even if they do know, will they know how and why those elementary particles came into being? It is safe to say that they will not know when they have reached bottom, and they will have no idea of how and why elementary particles exist because they will be asking questions to which answers do not lie in the material universe that bounds their potential knowledge.
Some would describe taxation as a form of theft and conscription as a form of slavery — in fact some would prefer to describe taxation as slavery too, or at least as forced labor. Much might be said against these descriptions, but that is beside the point. For within proper limits, such practices when engaged in by governments are acceptable, whatever they are called. If someone with an income of $2000 a year trains a gun on someone with an income of $100000 a year and makes him hand over his wallet, that is robbery. If the federal government withholds a portion of the second person’s salary (enforcing the laws against tax evasion with threats of imprisonment under armed guard) and gives some of it to the first person in the form of welfare payments, food stamps, or free health care, that is taxation. In the first case it is (in my opinion) an impermissible use of coercive means to achieve a worthwhile end. In the second case the means are legitimate, because they are impersonally imposed by an institution designed to promote certain results. Such general methods of distribution are preferable to theft as a form of private initiative and also to individual charity. This is true not only for reasons of fairness and efficiency, but also because both theft and charity are disturbances of the relations (or lack of them) between individuals and involve their individual wills in a way that an automatic, officially imposed system of taxation does not. [Mortal Questions, “Ruthlessness in Public Life,” pp. 87-88]
How many logical and epistemic errors can a supposedly brilliant philosopher make in one (long) paragraph? Too many:
“For within proper limits” means that Nagel is about to beg the question by shaping an answer that fits his idea of proper limits.
Nagel then asserts that the use by government of coercive means to achieve the same end as robbery is “legitimate, because [those means] are impersonally imposed by an institution designed to promote certain results.” Balderdash! Nagel’s vision of government as some kind of omniscient, benevolent arbiter is completely at odds with reality. The “certain results” (redistribution of income) are achieved by functionaries, armed or backed with the force of arms, who themselves share in the spoils of coercive redistribution. Those functionaries act under the authority of bare majorities of elected representatives, who are chosen by bare majorities of voters. And those bare majorities are themselves coalitions of interested parties — hopeful beneficiaries of redistributionist policies, government employees, government contractors, and arrogant statists — who believe, without justification, that forced redistribution is a proper function of government.
On the last point, Nagel ignores the sordid history of the unconstitutional expansion of the powers of government. Without justification, he aligns himself with proponents of the “living Constitution.”
Nagel’s moral obtuseness is fully revealed when he equates coercive redistribution with “fairness and efficiency”, as if property rights and liberty were of no account.
The idea that coercive redistribution fosters efficiency is laughable. It does quite the opposite because it removes resources from productive uses — including job-creating investments. The poor are harmed by coercive redistribution because it drastically curtails economic growth, from which they would benefit as job-holders and (where necessary) recipients of private charity (the resources for which would be vastly greater in the absence of coercive redistribution).
Finally (though not exhaustively), Nagel’s characterization of private charity as a “disturbance of the relations … between individuals” is so wrong-headed that it leaves me dumbstruck. Private charity arises from real relations among individuals — from a sense of community and feelings of empathy. It is the “automatic, officially imposed system of taxation” that distorts and thwarts (“disturbs”) the social fabric.
In any event, the answer to the question posed in the title of this post is “yes”; taxation for the purpose of redistribution is slavery (see number 2 in the second set of definitions). Taxation amounts to the subjection of most taxpayers — those who are not deadbeats, do-gooders, demagogues, and government drones — to persons who are such things (even if they pay taxes).
If “slavery” is too strong a word, “theft” will do quite well.
Leftist “wokeism” and big government are destroying both.
Libertarianism, as it is usually explained and presented, lacks an essential ingredient: morality. Yes, libertarians espouse a superficially plausible version of morality — the harm principle:
[T]he sole end for which mankind are warranted, individually or collectively in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. [John Stuart Mill, On Liberty (1869), Chapter I, paragraph 9.]
This is empty rhetoric. Harm must be defined, and its definition must arise from social norms.
Liberty is not an abstraction; it is the scope of action that is allowed by socially agreed-upon rights. It is that restrained scope of action which enables a people to coexist willingly, peacefully, and cooperatively for their mutual benefit. Such coexistence depends greatly on mutual aid, trust, respect, and forbearance. Liberty is therefore necessarily degraded when courts, at the behest of “liberals” and so-called libertarians, sunder social restraints in the name of liberty.
Social norms have changed for the worse since the days of my Midwestern upbringing in the 1940s and 1950s. Many of those norms have gone to the great graveyard of quaint ideas; for example:
Behavior is shaped by social norms, like those listed here. The norms are rooted in the Ten Commandments and time-tested codes of behavior. The norms aren’t altered willy-nilly in accordance with the wishes of “activists”, as amplified through the megaphone of the mass media.
Rules of grammar serve the useful purpose of enabling people to understand each other easily. The flouting of grammatical rules in everyday conversation is a sign of ignorance and ill-breeding, not originality.
Dead, white, European males produced some of the greatest works of art, music, literature, philosophy, science, and political theory. Those dead, white, European males are to be celebrated for their accomplishments, not derided just because they are dead or were not black/brown/tan, female, of confused gender, or inhabitants of non-European places.
Marriage is a union of one man and one woman. Nothing else is marriage, despite legislative, executive, and judicial decrees that substitute brute force for the wisdom of the ages.
Marriage comes before children. This is not because people are pure at heart, but because it is the responsible way to start life together and to ensure that one’s children enjoy a stable, nurturing home life.
Marriage is until “death do us part”. Divorce is a recourse of last resort, not an easy way out of marital and familial responsibilities or the first recourse when one spouse disappoints or angers the other.
Children are disciplined — sometimes spanked — when they do wrong. They aren’t given long, boring, incomprehensible lectures about why they’re doing wrong. Why not? Because they usually know that they’re doing wrong and are just trying to see what they can get away with.
Gentlemen don’t swear in front of ladies, and ladies don’t swear in front of gentlemen; discourse is therefore more likely to be rational, and certainly more bearable to those within earshot.
A person’s “space” is respected, as long as the person is respectful of others. A person’s space is not invaded by a loud conversation of no interest to anyone but the conversants.
A person grows old gracefully and doesn’t subject others to the sight of flabby, wrinkled tattoos (unless he is an old sailor with one tattoo on one arm).
Drugs are taken for the treatment of actual illnesses, not for recreational purposes.
Income is earned, not “distributed”. Persons who earn a lot of money are to be respected (unless they are criminals or crony capitalists). If you envy them to the point of wanting to take their money, you’re a pinko-commie-socialist (no joke).
People should work, save, and pay for their own housing. The prospect of owning one’s own home, by dint of one’s own labor, is an incentive to work hard and to advance oneself through the acquisition of marketable skills.
Welfare is a gift that one accepts as a last resort; it is not a right or an entitlement, and it is not bestowed on persons with convenient disabilities.
Regarding work and welfare, a person who lacks work, is seriously ill, or is disabled should be helped by his family, friends, neighbors, co-religionists, and other organs of civil society with which he is affiliated or which come to know of his plight.
A man holds a door open for a woman out of courtesy, and he does the same for anyone who is obviously weak or laden with packages.
Sexism (though it wasn’t called that) is nothing more than the understanding — shared by men and women — that women are members of a different sex (the only different one); are usually weaker than men; are endowed with brain chemistry and physical skills different from men’s (still a fact); and enjoy discreet admiration (flirting) if they’re passably good-looking (or better). Women who reject those propositions — and who try to enforce modes of behavior that assume differently — are embittered and twisted.
A mother who devotes time and effort to the making of a good home and the proper rearing of her children is a pillar of civilized society. Her life is to be celebrated, not condemned as “a waste”.
Homosexuality is a rare, aberrant kind of behavior. (And this was before AIDS proved it to be aberrant.) It’s certainly not a “lifestyle” to be celebrated and shoved down the throats of all who object to it.
Privacy is a constrained right. It doesn’t trump moral obligations, among which are the obligations to refrain from spreading a deadly disease and to preserve innocent life.
Addiction isn’t a disease; it’s a surmountable failing.
Justice is for victims. Victims are persons to whom actual harm has been done by way of fraud, theft, bodily harm, murder, and suchlike. A person with a serious disease or handicap isn’t a victim, nor is a person with a drinking or drug problem.
Justice is a dish best served hot, so that would-be criminals can connect the dots between crime and punishment. Swift and sure punishment is the best deterrent of crime. Capital punishment is the ultimate deterrent because an executed killer can’t kill again.
Peace is the result of preparedness for war; lack of preparedness invites war.
The list isn’t exhaustive, but it’s certainly representative. The themes are few and simple: respect others, respect tradition, restrict government to the defense of society from predators foreign and domestic. The result is liberty: A regime of mutually beneficial coexistence based on trust. That’s all it takes — not big government bent on dictating how Americans live their lives.
Economic and social liberty are indivisible. The extent of liberty is inversely proportional to the power of government.
How … did America come to be run by a cabal of super-rich “oligarchs”, politicians, bureaucrats, academics, and “journalists” who sneer at the list [of traditional American values] and reject it, in deed if not in word?
It happened one step backward at a time. America’s old culture, along with much of its liberty and (less visibly) its prosperity, was lost step by step through a combination of chicanery (by the left) and compromise (by “centrists” and conservative dupes). The process — the culmination of which is “wokeness” — has a long history and deep roots. Those roots are not in Marxism, socialism, atheism, or any of the other left-wing “isms” (appalling and dangerous they may be). They are, as I explain here, in (classical) liberalism, the supposed bulwark of liberty and prosperity.
An “ism” is only as effective as its adherents. The adherents of (classical) liberalism are especially ineffective in the defense of liberty because they are blinded by their own rhetoric. Take Deirdre McCloskey, for example, whom Arnold Kling quotes approvingly in a piece that I eviscerated….
That drips with smugness and condescension. And it wildly mischaracterizes the wealthy “elites” who have taken charge in the West….
All that McCloskey has told us is that she (formerly he) views his/her way of life as superior to that of the unwashed masses, living and dead. Further, holding that view — which is typical of liberals classical and modern (i.e., statists) — he/she obviously believes that the superior way of life should be adopted by the unwashed — for their own good, of course. (If this isn’t to be accomplished by force, as statists would prefer, then by education and example. This would include, but not be limited to, choosing a new sexual identity if one is deluded enough to believe that he/she was “assigned” the wrong one at birth.)
It is hard to tell McCloskey’s attitude from that of a member of the “woke” elite, though he/she would undoubtedly deny being such a person. I am willing to bet, however, that most of McCloskey’s ilk (if not he/she him/herself) voted enthusiastically for “moderate” Joe Biden because rude, crude Donald Trump offended their tender sensibilities (and threatened their statist agenda). And they did so knowing that Biden, despite his self-proclaimed “moderation”, was and is allied with leftists whose statist ambitions for the United States are an affront to every tenet of classical liberalism, not the least of which is freedom of speech.
Thus the unrelenting attacks on Donald Trump — the leading symbol of Americanism, anti-”wokeism”, and anti-statism — which began before he became president, persisted throughout his presidency, and have intensified since he left the presidency. The attacks, it should be emphasized, have been and are being conducted by officials and agencies of the federal government.
If Trump is to be silenced, so are his followers. Bill Vallicella depicts the horror that is to come:
With their hands on the levers of power, the Democrats can keep the borders open, empty the prisons, defund the police except for the state police, confiscate the firearms of law-abiding citizens, do away with the filibuster, give felons the right to vote while in prison, outlaw home schooling, alter curricula to promote the ‘progressive’ worldview (by among other things injecting 1619 Project fabrications into said curricula), infiltrate and ultimately destroy the institutions of civil society, pass ‘hate speech’ laws to squelch dissent, suppress religion, and so on into the abyss of leftist nihilism.
To which I would add: overriding and penalizing objections to allowing transgendered “men” into girls’ and women’s bathrooms, locker rooms, and sports; suppressing parents who object to such things and to the teaching of critical race theory; penalizing small businesses that object to forced participation in the “celebration” of LGBTQ-ness; the raising and arming of a vast cadre of IRS auditors and enforcers; and on and on.
A leading example of the oppression is the elite and official response to Covid: mask mandates. Martin Gurri spells it out:
Three days after the mask mandate was struck down [by U.S. District Judge Kathryn Kimball Mizelle], on April 21 [2022], Barack Obama delivered the bad news about “disinformation” to a Stanford University forum on that subject. His unacknowledged theme, too, was the crisis of elite authority, which he explained with a history lesson. The twentieth century, Obama said, may have excluded “women and people of color,” but it was a time of information sanity, when the masses gathered in the great American family room to receive the news from Walter Cronkite…. Those were the days when a “shared culture” could operate on a “shared set of facts.”
The digital age has battered that peaceable kingdom to bits… [w]ho had the authority to make projections and recommendations[?]
Online, everyone did. People with opinions that the former president found toxic—nationalists, white supremacists, unhinged Republicans, Vladimir Putin and his gang of Russian hackers—could say anything they wished on the Web, no matter how irresponsible, including lies. A defenseless public, sunk in ignorance, could be deceived into voting against enlightened Democrats.
Total blindness to the other side of the story is a partisan affliction that Obama makes no attempt to overcome…. [H]e never mentioned the most effective disinformation campaign of recent times, conducted against Trump by the Hillary Clinton campaign, in which members of his own administration participated. He simply doesn’t believe that it works that way. Disinformation, for him, is a form of lèse-majesté—any insult to the progressive ruling class.
How are we to deal with this “tumultuous, dangerous moment in history”? Obama was clear about the answer: we must recover the power to exclude certain voices, this time through regulation. The government must assume control over disorderly online speech. First Amendment guarantees of freedom of speech don’t apply to private companies like Facebook and Twitter, he noted. At the same time, since these companies “play a unique role in how we . . . are consuming information,” the state must impose “accountability.” The examples he provided betray nostalgia for a lost era: the “meat inspector,” who would presumably check on how the algorithmic sausage is made; and the Fairness Doctrine, which somehow would be applied to an information universe virtually infinite in volume.
Obama views disinformation much as Fauci does Covid-19: as a lever of authority in the hands of the guardian class. Democracy, he tells us over and again, must be protected from “toxic content.” But by democracy, he means the rule of the righteous, a group that coincides exactly with his partisan inclinations. By toxic, he means anything that smacks of Trumpism. The former president’s speech was vague on details, but it left all options open. Who can say what pretext will be needed to expel the next rough beast from social media, tomorrow or the day after?…
Obama’s speech, in turn, took place four days before the apparent sale of Twitter to Elon Musk—at which point elite despair, always volatile, at last exploded in a fireball of rage and panic….
For a considerable number of agitated people, [Musk’s] goal of neutrality [on Twitter] was an abomination. Suddenly, “free speech” became a code for something dark and evil—racism, white nationalism, oligarchy, transphobia, “extremist rightwing Nazis”—all the phantoms and goblins that inhabit the nightmares of the progressive mind….
Following the Obama formula, the itch to control what Americans can say online was equated with the defense of freedom. Granting unfettered speech to the rabble, as Musk intended, would be “dangerous to our democracy,” Elizabeth Warren said. “For democracy to survive we need more content moderation, not less,” was how Max Boot, Washington Post columnist, put it. “We must pass laws to protect privacy and promote algorithmic justice for internet users,” was the bizarre formulation of Ed Markey, junior senator from Massachusetts. The Biden White House, never a hotbed of originality, recited the Obama refrain about holding the digital platforms “accountable” for the “harm” they inflict on us….
The second theme follows from the first. The elites are convinced that their control over American society is slipping away. They have conquered the presidency, both houses of Congress, and the entirety of our culture; yet their mood is one of panic and resentment….
[But] Starting with the onset of Covid-19 in the spring of 2020, elite fortunes took an almost magical turn. The pandemic frightened the public into docility. The Black Lives Matter riots enshrined racial doctrines that demanded constant state interference as not only legitimate but mandatory in every corner of American culture. The malevolent Trump went down to defeat, and the presidency passed to Biden, a hollow man easily led by the progressive zealots around him. The Senate flipped Democratic….
From the scientific establishment through the corporate boardroom all the way to Hollywood, elite keepers of our culture speak with a single, shrill voice—and the script always follows the dogmas of one particular war-band—the cult of identity—and the politics of one specific partisan flavor, that of progressive Democrats….
Are we on the cusp … of an anti-elite cultural revolution? I still wouldn’t bet on it. For obscure reasons of psychology, creative minds incline to radical politics. A kulturkampf directed from Tallahassee, Florida, or even Washington, D.C., won’t budge that reality much. The group portrait of American culture will continue to tilt left indefinitely.
But that’s not the question at hand. What terrifies elites is the loss of their cultural monopoly in the face of a foretold political disaster. They fear diversity of any kind, with good cause: to the extent that the public enjoys a variety of choices in cultural products, elite control will be proportionately diluted.
And the left, in its panic about the possible loss of control, is trying to tighten its grip on ideas and on its ability to make the “masses” do its bidding.
I am repeating the introduction for those readers who haven’t seen parts I, II, and III, which are here, here, and here.
This series is aimed at writers of non-fiction works, but writers of fiction may also find it helpful. There are four parts:
I. Some Writers to Heed and Emulate
A. The Essentials: Lucidity, Simplicity, Euphony
B. Writing Clearly about a Difficult Subject
C. Advice from an American Master
D. Also Worth a Look
II. Step by Step
A. The First Draft
1. Decide — before you begin to write — on your main point and your purpose for making it.
2. Avoid wandering from your main point and purpose; use an outline.
3. Start by writing an introductory paragraph that summarizes your “story line”.
4. Lay out a straight path for the reader.
5. Know your audience, and write for it.
6. Facts are your friends — unless you’re trying to sell a lie, of course.
7. Momentum is your best friend.
B. From First Draft to Final Version
1. Your first draft is only that — a draft.
2. Where to begin? Stand back and look at the big picture.
3. Nit-picking is important.
4. Critics are necessary, even if not mandatory.
5. Accept criticism gratefully and graciously.
6. What if you’re an independent writer and have no one to turn to?
7. How many times should you revise your work before it’s published?
III. Reference Works
A. The Elements of Style
B. Eats, Shoots & Leaves
C. Follett’s Modern American Usage
D. Garner’s Modern American Usage
E. A Manual of Style and More
IV. Notes about Grammar and Usage
A. Stasis, Progress, Regress, and Language
B. Illegitimi Non Carborundum Lingo
1. Eliminate filler words.
2. Don’t abuse words.
3. Punctuate properly.
4. Why ’s matters, or how to avoid ambiguity in possessives.
5. Stand fast against political correctness.
6. Don’t split infinitives.
7. It’s all right to begin a sentence with “And” or “But” — in moderation.
8. There’s no need to end a sentence with a preposition.
Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.
IV. NOTES ABOUT GRAMMAR AND USAGE
This part delivers some sermons about practices to follow if you wish to communicate effectively and be taken seriously, and if you wish not to be thought of as a semi-literate, self-indulgent, faddish dilettante. Section A is a defense of prescriptivism in language. Section B (the title of which is mock-Latin for “Don’t Let the Bastards Wear Down the Language”) counsels steadfastness in the face of political correctness and various sloppy usages.
A. Stasis, Progress, Regress, and Language
To every thing there is a season, and a time to every purpose under the heaven…. — Ecclesiastes 3:1 (King James Bible)
Nothing man-made is permanent; consider, for example, the list of empires here. In spite of the history of empires — and other institutions and artifacts of human endeavor — most people seem to believe that the future will be much like the present. And if the present embodies progress of some kind, most people seem to expect that progress to continue.
Things do not simply go on as they have been without the expenditure of requisite effort. Take the Constitution’s broken promises of liberty, about which I have written so much. Take the resurgence of Russia as a rival for international influence. This has been in the works for about 25 years, but didn’t register on most Americans until the Crimean crisis and the invasion of Ukraine. What did Americans expect? That the U.S. could remain the unchallenged superpower while reducing its armed forces to the point that they were strained by relatively small wars in Afghanistan and Iraq? That Vladimir Putin would be cowed by American presidents who had so blatantly advertised their hopey-changey attitudes toward Iran and Islam, while snubbing a traditional ally like Israel, failing to beef up America’s armed forces, and allowing America’s industrial capacity to wither in the name of “globalism” (or whatever the current catch-phrase may be)?
Turning to naïveté about progress, I offer Steven Pinker’s fatuous The Better Angels of Our Nature: Why Violence Has Declined. Pinker tries to show that human beings are becoming kinder and gentler. I have much to say in another post about Pinker’s thesis. One of my sources is Robert Epstein’s review of Pinker’s book. This passage is especially apt:
The biggest problem with the book … is its overreliance on history, which, like the light on a caboose, shows us only where we are not going. We live in a time when all the rules are being rewritten blindingly fast—when, for example, an increasingly smaller number of people can do increasingly greater damage. Yes, when you move from the Stone Age to modern times, some violence is left behind, but what happens when you put weapons of mass destruction into the hands of modern people who in many ways are still living primitively? What happens when the unprecedented occurs—when a country such as Iran, where women are still waiting for even the slightest glimpse of those better angels, obtains nuclear weapons? Pinker doesn’t say.
Less important in the grand scheme, but no less wrong-headed, is the idea of limitless progress in the arts. To quote myself:
In the early decades of the twentieth century, the visual, auditory, and verbal arts became an “inside game”. Painters, sculptors, composers (of “serious” music), choreographers, and writers of fiction began to create works not for the enjoyment of audiences but for the sake of exploring “new” forms. Given that the various arts had been perfected by the early 1900s, the only way to explore “new” forms was to regress toward primitive ones — toward a lack of structure…. Aside from its baneful influence on many true artists, the regression toward the primitive has enabled persons of inferior talent (and none) to call themselves “artists”. Thus modernism is banal when it is not ugly.
Painters, sculptors, etc., have been encouraged in their efforts to explore “new” forms by critics, by advocates of change and rebellion for its own sake (e.g., “liberals” and “bohemians”), and by undiscriminating patrons, anxious to be au courant. Critics have a special stake in modernism because they are needed to “explain” its incomprehensibility and ugliness to the unwashed.
The unwashed have nevertheless rebelled against modernism, and so its practitioners and defenders have responded with condescension, one form of which is the challenge to be “open minded” (i.e., to tolerate the second-rate and nonsensical). A good example of condescension is heard on Composers Datebook, a syndicated feature that runs on some NPR stations. Every Composers Datebook program closes by “reminding you that all music was once new.” As if to lump Arnold Schoenberg and John Cage with Johann Sebastian Bach and Ludwig van Beethoven.
All music, painting, sculpture, dance, and literature was once new, but not all of it is good. Much (most?) of what has been produced since 1900 is inferior, self-indulgent crap.
And most of the ticket-buying public knows it. Take opera, for example. An article by Christopher Ingraham purports to show that “Opera is dead, in one chart” (The Washington Post, October 31, 2014). Here’s the chart and Ingraham’s interpretation of it:
The chart shows that opera ceased to exist as a contemporary art form roughly around 1970. It’s from a blog post by composer and programmer Suby Raman, who scraped the Met’s public database of performances going back to the 19th century. As Raman notes, 50 years is an insanely low bar for measuring the “contemporary” – in pop music terms, it would be like considering The Beatles’ I Wanna Hold Your Hand as cutting-edge.
Back at the beginning of the 20th century, anywhere from 60 to 80 percent of Met performances were of operas composed in some time in the 50 years prior. But since 1980, the share of contemporary performances has surpassed 10 percent only once.
Opera, as a genre, is essentially frozen in amber – Raman found that the median year of composition of pieces performed at the Met has always been right around 1870. In other words, the Met is essentially performing the exact same pieces now that it was 100 years ago….
Contrary to Ingraham, opera isn’t dead; for example, there are more than 220 active opera companies in the U.S. It’s just that there’s little demand for operatic works written after the late 1800s. Why? Because most opera-lovers don’t want to hear the strident, discordant, unmelodic trash that came later. Giacomo Puccini, who wrote melodic crowd-pleasers until his death in 1924, is an exception that proves the rule.
Language is in the same parlous state as the arts. Written and spoken English improved steadily as Americans became more educated — and as long as that education included courses which prescribed rules of grammar and usage. By “improved” I mean that communication became easier and more effective; specifically:
A larger fraction of Americans followed the same rules in formal communications (e.g., speeches, business documents, newspapers, magazines, and books).
Movies and radio and TV shows also tended to follow those rules, thereby reaching vast numbers of Americans who did little or no serious reading.
There was a “trickle down” effect on Americans’ written and spoken discourse, especially where it involved mere acquaintances or strangers. Standard American English became a kind of lingua franca, which enabled the speaker or writer to be understood and taken seriously.
I call that progress.
There is, however, an (unfortunately) influential attitude toward language known as descriptivism. It is distinct from (and often opposed to) rule-setting (prescriptivism). Consider this passage from the first chapter of an online text:
Prescriptive grammar is based on the idea that there is a single right way to do things. When there is more than one way of saying something, prescriptive grammar is generally concerned with declaring one (and only one) of the variants to be correct. The favored variant is usually justified as being better (whether more logical, more euphonious, or more desirable on some other grounds) than the deprecated variant. In the same situation of linguistic variability, descriptive grammar is content simply to document the variants – without passing judgment on them.
This misrepresents the role of prescriptive grammar. It’s widely understood that there’s more than one way of expressing an idea and more than one way in which the idea can be made understandable to others. The rules of prescriptive grammar, when followed, improve understanding, in two ways. First, by avoiding utterances that would be incomprehensible or, at least, very hard to understand. Second, by ensuring that utterances aren’t simply ignored or rejected out of hand because their form indicates that the writer or speaker is either ill-educated or stupid.
What, then, is the role of descriptive grammar? The authors offer this:
[R]ules of descriptive grammar have the status of scientific observations, and they are intended as insightful generalizations about the way that speakers use language in fact, rather than about the way that they ought to use it. Descriptive rules are more general and more fundamental than prescriptive rules in the sense that all sentences of a language are formed in accordance with them, not just a more or less arbitrary subset of shibboleth sentences. A useful way to think about the descriptive rules of a language … is that they produce, or generate, all the sentences of a language. The prescriptive rules can then be thought of as filtering out some (relatively minute) portion of the entire output of the descriptive rules as socially unacceptable.
Let’s consider the assertion that descriptive rules produce all the sentences of a language. What does that mean? It seems to mean that the actual rules of a language can be inferred by examining sentences uttered or written by users of the language. But which users? Native users? Adults? Adults who have graduated from high school? Users with IQs of at least 100?
Pushing on, let’s take a closer look at descriptive rules and their utility. The authors say that
we adopt a resolutely descriptive perspective concerning language. In particular, when linguists say that a sentence is grammatical, we don’t mean that it is correct from a prescriptive point of view, but rather that it conforms to descriptive rules….
The descriptive rules amount to this: They conform to practices that speakers and writers actually use in an attempt to convey ideas, whether or not the practices state the ideas clearly and concisely. Thus the authors approve of these sentences because they’re of a type that might well occur in colloquial speech:
Over there is the guy who I went to the party with.
Over there is the guy with whom I went to the party.
(Both are clumsy ways of saying “I went to the party with that person.”)
Bill and me went to the store.
(Alternatively: “Bill and I went to the store.” “Bill went to the store with me.” “I went to the store with Bill.” Aha! Three ways to say it correctly, not just one way.)
But the authors label the following sentences as ungrammatical because they don’t comport with colloquial speech:
Over there is guy the who I went to party the with.
Over there is the who I went to the party with guy.
Bill and me the store to went.
In other words, the authors accept as grammatical anything that a speaker or writer is likely to say, according to the “rules” that can be inferred from colloquial speech and writing. It follows that whatever is is right, even “Bill and me to the store went” or “Went to the store Bill and me”, which aren’t far-fetched variations on “Bill and me went to the store.” (Yoda-isms they read like.) They’re understandable, but only with effort. And further evolution would obliterate their meaning.
The fact is that the authors of the online text — like descriptivists generally — don’t follow their own anarchistic prescription. Follett puts it this way in Modern American Usage:
It is … one of the striking features of the libertarian position [with respect to language] that it preaches an unbuttoned grammar in a prose style that is fashioned with the utmost grammatical rigor. H.L. Mencken’s two thousand pages on the vagaries of the American language are written in the fastidious syntax of a precisian. If we go by what these men do instead of by what they say, we conclude that they all believe in conventional grammar, practice it against their own preaching, and continue to cultivate the elegance they despise in theory….
[T]he artist and the user of language for practical ends share an obligation to preserve against confusion and dissipation the powers that over the centuries the mother tongue has acquired. It is a duty to maintain the continuity of speech that makes the thought of our ancestors easily understood, to conquer Babel every day against the illiterate and the heedless, and to resist the pernicious and lulling dogma that in language … whatever is is right and doing nothing is for the best. [Pp. 30-31]
Follett also states the true purpose of prescriptivism, which isn’t to prescribe rules for their own sake:
[This book] accept[s] the long-established conventions of prescriptive grammar … on the theory that freedom from confusion is more desirable than freedom from rule…. [P. 243]
E.B. White says it more colorfully in his introduction to The Elements of Style. Writing about William Strunk Jr., author of the original version of the book, White says:
All through The Elements of Style one finds evidence of the author’s deep sympathy for the reader. Will felt that the reader was in serious trouble most of the time, a man floundering in a swamp, and that it was the duty of anyone attempting to write English to drain this swamp quickly and get his man up on dry ground, or at least throw him a rope. In revising the text, I have tried to hold steadily in mind this belief of his, this concern for the bewildered reader. [P. xvi, third edition]
Descriptivists would let readers founder in the swamp of incomprehensibility. If descriptivists had their way — or what they claim to be their way — American English would, like the arts, recede into formless primitivism.
Eternal vigilance about language is the price of comprehensibility.
B. Illegitimi Non Carborundum Lingo
The vigilant are sorely tried these days. What follows are several restrained rants about some practices that should be resisted and repudiated.
1. Eliminate filler words.
When I was a child, most parents and all teachers told children to desist from saying “uh” between words. “Uh” was then the filler word favored by children, adolescents, and even adults. The resort to “uh” meant that the speaker was stalling because he had opened his mouth without having given enough thought to what he meant to say.
Next came “you know”. It has been displaced, in the main, by “like”, where it hasn’t been joined to “like” in the formation “like, you know”.
The need of a filler word (or phrase) seems ineradicable. Too many people insist on opening their mouths before thinking about what they’re about to say. Given that, I urge Americans in need of a filler word to use “uh” and eschew “like” and “like, you know”. “Uh” is far less distracting and irritating than the rat-a-tat of “like-like-like-like”.
Of course, it may be impossible to return to “uh”. Its brevity may not give the users of “like” enough time to organize their TV-smart-phone-video-game-addled brains and deliver coherent speech.
In any event, speech influences writing. Sloppy speech begets sloppy writing, as I know too well. I have spent the past 60 years of my life trying to undo habits of speech acquired in my childhood and adolescence — habits that still creep into my writing if I drop my guard.
2. Don’t abuse words.
How am I supposed to know what you mean if you abuse perfectly good words? Here I discuss four prominent examples of abuse.
Anniversary
Too many times in recent years I’ve heard or read something like this: “Sally and me are celebrating our one-year anniversary.” The “and me” is bad enough; “one-year anniversary” (or any variation of it) is truly egregious.
The word “anniversary” means “the annually recurring date of a past event”. To write or say “x-year anniversary” is redundant as well as graceless. Just write or say “first anniversary”, “two-hundred fiftieth anniversary”, etc., as befits the occasion.
To write or say “x-month anniversary” is nonsensical. Something that happened less than a year ago can’t have an anniversary. What is meant is that such-and-such happened “x” months ago. Just say it.
Data
A person who writes or says “data is” is at best an ignoramus and at worst a Philistine.
Language, above all else, should be used to make one’s thoughts clear to others. The pairing of a plural noun and a singular verb form is distracting, if not confusing. Even though datum is seldom used by Americans, it remains the singular form of data, which is the plural form. Data, therefore, never “is”; data always “are”.
H.W. Fowler says:
Latin plurals sometimes become singular English words (e.g., agenda, stamina) and data is often so treated in U.S.; in Britain this is still considered a solecism… [A Dictionary of Modern English Usage, p.119, second edition]
But Follett’s Modern American Usage is better on the subject:
Those who treat data as a singular doubtless think of it as a generic noun, comparable to knowledge or information [a generous interpretation]…. The rationale of agenda as a singular is its use to mean a collective program of action, rather than separate items to be acted on. But there is as yet no obligation to change the number of data under the influence of error mixed with innovation. [Pp. 130-131]
Hopefully
Consider the AP Style Guide’s decision to allow the use of hopefully as a sentence adverb, announced on Twitter at 6:22 a.m. on 17 April 2012:
Hopefully, you will appreciate this style update, announced at #aces2012. We now support the modern usage of hopefully: it’s hoped, we hope.
Liberman, who is a descriptivist, defends AP’s egregious decision. His defense consists mainly of citing noted writers who have used “hopefully” where they meant “it is to be hoped”. I suppose that if those same noted writers had chosen to endanger others by driving on the wrong side of the road, Liberman would praise them for their “enlightened” approach to driving.
Geoff Nunberg also defends “hopefully” in “The Word ‘Hopefully’ Is Here to Stay, Hopefully”, which appears at npr.org. Nunberg (or the headline writer) may be right in saying that “hopefully” is here to stay. But that does not excuse the widespread use of the word in ways that are imprecise and meaningless.
The crux of Nunberg’s defense is that “hopefully” conveys a nuance that “language snobs” (e.g., me) are unable to grasp:
Some critics object that [“hopefully” is] a free-floating modifier (a Flying Dutchman adverb, James Kirkpatrick called it) that isn’t attached to the verb of the sentence but rather describes the speaker’s attitude. But floating modifiers are mother’s milk to English grammar — nobody objects to using “sadly,” “mercifully,” “thankfully” or “frankly” in exactly the same way.
Or people complain that “hopefully” doesn’t specifically indicate who’s doing the hoping. But neither does “It is to be hoped that,” which is the phrase that critics like Wilson Follett offer as a “natural” substitute. That’s what usage fetishism can drive you to — you cross out an adverb and replace it with a six-word impersonal passive construction, and you tell yourself you’ve improved your writing.
But the real problem with these objections is their tone-deafness. People get so worked up about the word that they can’t hear what it’s really saying. The fact is that “I hope that” doesn’t mean the same thing that “hopefully” does. The first just expresses a desire; the second makes a hopeful prediction. I’m comfortable saying, “I hope I survive to 105″ — it isn’t likely, but hey, you never know. But it would be pushing my luck to say, “Hopefully, I’ll survive to 105,” since that suggests it might actually be in the cards.
Floating modifiers may be common in English, but that does not excuse them. Given Nunberg’s evident attachment to them, I am unsurprised by his assertion that “nobody objects to using ‘sadly,’ ‘mercifully,’ ‘thankfully’ or ‘frankly’ in exactly the same way.”
Nobody, Mr. Nunberg? Hardly. Anyone who cares about clarity and precision in the expression of ideas will object to such usages. A good editor would rewrite any sentence that begins with a free-floating modifier — no matter which one of them it is.
Nunberg’s defense against such rewriting is that Wilson Follett offers “It is to be hoped that” as a cumbersome, wordy substitute for “hopefully”. I assume that Nunberg refers to Follett’s discussion of “hopefully” in Modern American Usage. If so, Nunberg once again proves himself an adherent of imprecision, for this is what Follett actually says about “hopefully”:
The German language is blessed with an adverb, hoffentlich, that affirms the desirability of an occurrence that may or may not come to pass. It is generally to be translated by some such periphrasis as it is to be hoped that; but hack translators and persons more at home in German than in English persistently render it as hopefully. Now, hopefully and hopeful can indeed apply to either persons or affairs. A man in difficulty is hopeful of the outcome, or a situation looks hopeful; we face the future hopefully, or events develop hopefully. What hopefully refuses to convey in idiomatic English is the desirability of the hoped-for event. College, we read, is a place for the development of habits of inquiry, the acquisition of knowledge and, hopefully, the establishment of foundations of wisdom. Such a hopefully is un-English and eccentric; it is to be hoped is the natural way to express what is meant. The underlying mentality is the same—and, hopefully, the prescription for cure is the same (let us hope) / With its enlarged circulation–and hopefully also increased readership–[a periodical] will seek to … (we hope) / Party leaders had looked confidently to Senator L. to win . . . by a wide margin and thus, hopefully, to lead the way to victory for. . . the Presidential ticket (they hoped) / Unfortunately–or hopefully, as you prefer it–it is none too soon to formulate the problems as swiftly as we can foresee them. In the last example, hopefully needs replacing by one of the true antonyms of unfortunately–e.g. providentially.
The special badness of hopefully is not alone that it strains the sense of -ly to the breaking point, but that appeals to speakers and writers who do not think about what they are saying and pick up VOGUE WORDS [another entry in Modern American Usage] by reflex action. This peculiar charm of hopefully accounts for its tiresome frequency. How readily the rotten apple will corrupt the barrel is seen in the similar use of transferred meaning in other adverbs denoting an attitude of mind. For example: Sorrowfully (regrettably), the officials charged with wording such propositions for ballot presentation don’t say it that way / the “suicide needle” which–thankfully–he didn’t see fit to use (we are thankful to say). Adverbs so used lack point of view; they fail to tell us who does the hoping, the sorrowing, or the being thankful. Writers who feel the insistent need of an English equivalent for hoffentlich might try to popularize hopingly, but must attach it to a subject capable of hoping. [Op. cit., pp. 178-179]
Follett, contrary to Nunberg’s assertion, does not offer “It is to be hoped that” as a substitute for “hopefully”, which would “cross out an adverb and replace it with a six-word impersonal passive construction”. Follett gives “it is to be hoped that” as the sense of “hopefully”. But, as the preceding quotation attests, Follett is able to replace “hopefully” (where it is misused) with a few short words that take no longer to write or say than “hopefully”, and which convey the writer’s or speaker’s intended meaning more clearly. And if it does take a few extra words to say something clearly, why begrudge those words?
What about the other floating modifiers — such as “sadly”, “mercifully”, “thankfully”, and “frankly” — which Nunberg defends with much passion and no logic? Follett addresses those others in the third paragraph quoted above, but he does not dispose of them properly. For example, I would not simply substitute “regrettably” for “sorrowfully”; neither is adequate. What is wanted is something like this: “The officials who write propositions for ballots should not have said … , which is misleading (vague/ambiguous).” More words? Yes, but so what? (See above.)
In any event, a writer or speaker who is serious about expressing himself clearly to an audience will never say things like “Sadly (regrettably), the old man died” when he means either “I am (we are/they are/everyone who knew him is) saddened by (regrets) the old man’s dying”, or (less probably) “The old man grew sad as he died” or “The old man regretted dying.” I leave “mercifully”, “thankfully”, “frankly”, and the rest of the overused “-ly” words as an exercise for the reader.
The aims of a writer or speaker ought to be clarity and precision, not a stubborn, pseudo-logical insistence on using a word or phrase merely because it is in vogue or (more likely) because it irritates so-called language snobs. I doubt that even the pseudo-logical “language slobs” of Nunberg’s ilk condone “like” and “you know” as interjections. But by Nunberg’s “logic” those interjections should be condoned — nay, encouraged — because “everyone” knows what someone who uses them is “really saying”, namely, “I am too stupid or lazy to express myself clearly and precisely.”
Literally
Literally, of course, means something that is actually true: “Literally every pair of shoes I own was ruined when my apartment flooded.”
When we use words not in their normal literal meaning but in a way that makes a description more impressive or interesting, the correct word, of course, is “figuratively”.
But people increasingly use “literally” to give extreme emphasis to a statement that cannot be true, as in: “My head literally exploded when I read Merriam-Webster, among others, is now sanctioning the use of literally to mean just the opposite.”
Indeed, Ragan’s PR Daily reported last week that Webster, Macmillan Dictionary and Google have added this latter informal use of “literally” as part of the word’s official definition. The Cambridge Dictionary has also jumped on board….
Webster’s first definition of literally is, “in a literal sense or manner; actually”. Its second definition is, “in effect; virtually”. In addressing this seeming contradiction, its authors comment:
“Since some people take sense 2 to be the opposite of sense 1, it has been frequently criticized as a misuse. Instead, the use is pure hyperbole intended to gain emphasis, but it often appears in contexts where no additional emphasis is necessary.”…
The problem is that a lot of people use “literally” when they mean “figuratively” because they don’t know better. It’s literally* incomprehensible to me that the editors of dictionaries would suborn linguistic anarchy. Hopefully,** they’ll rethink their rashness.
_________
* “Literally” is used correctly, though it’s superfluous here.
** “Hopefully” is used incorrectly, but in the spirit of the times.
3. Punctuate properly.
I can’t compete with Lynne Truss’s Eats, Shoots & Leaves, so I won’t try. Just read it and heed it.
But I must address the use of the hyphen in compound adjectives, and the serial comma.
Regarding the hyphen, David Bernstein of The Volokh Conspiracy writes:
I frequently have disputes with law review editors over the use of dashes. Unlike co-conspirator Eugene, I’m not a grammatical expert, or even someone who has much of an interest in the subject.
But I do feel strongly that I shouldn’t use a dash between words that constitute a phrase, as in “hired gun problem”, “forensic science system”, or “toxic tort litigation.” Law review editors seem to generally want to change these to “hired-gun problem”, “forensic-science system”, and “toxic-tort litigation.” My view is that “hired” doesn’t modify “gun”; rather “hired gun” is a self-contained phrase. The same with “forensic science” and “toxic tort.”
Most of the commenters are right to advise Bernstein that the “dashes” — he means hyphens — are necessary. Why? To avoid confusion as to what is modifying the noun “problem”.
In “hired gun”, for example, “hired” (adjective) modifies “gun” (noun) to convey the idea of a bodyguard or paid assassin. But in “hired-gun problem”, “hired-gun” is a compound adjective which requires both of its parts to modify “problem”. It’s not a “hired problem” or a “gun problem”, it’s a “hired-gun problem”. The function of the hyphen is to indicate that “hired” and “gun”, taken separately, are meaningless as modifiers of “problem”, that is, to ensure that the meaning of the adjective-noun phrase is not misread.
A hyphen isn’t always necessary in such instances. But the consistent use of the hyphen in such instances avoids confusion and the possibility of misinterpretation.
The consistent use of the hyphen to form a compound adjective has a counterpart in the consistent use of the serial comma, which is the comma that precedes the last item in a list of three or more items (e.g., the red, white, and blue). Newspapers (among other sinners) eschew the serial comma for reasons too arcane to pursue here. Thoughtful counselors advise its use. (See, for example, Modern American Usage at pp. 422-423.) Why? Because the serial comma, like the hyphen in a compound adjective, averts ambiguity. It isn’t always necessary, but if it’s used consistently, ambiguity can be avoided. (Here’s a great example, from the Wikipedia article linked to in the first sentence of this paragraph: “To my parents, Ayn Rand and God”. The writer means, of course, “To my parents, Ayn Rand, and God”.)
A little punctuation goes a long way.
Addendum
I have reverted to the British style of punctuating in-line quotations, which I followed 45 years ago when I published a weekly newspaper. The British style is to enclose within quotation marks only (a) the punctuation that appears in quoted text or (b) the title of a work (e.g., a blog post) that is usually placed within quotation marks.
I have reverted because of the confusion and unsightliness caused by the American style. It calls for the placement of periods and commas within quotation marks, even if the periods and commas don’t occur in the quoted material or title. Also, if there is a question mark at the end of quoted material, it replaces the comma or period that might otherwise be placed there.
If I had continued to follow American style, I would have referred to a series of blog posts in this way:
What a hodge-podge. There’s no comma between the first two entries, and the sentence ends with an inappropriate question mark. With two titles ending in question marks, there was no way for me to avoid a series in which a comma is lacking. I could have avoided the sentence-ending question mark by recasting the list, but the items are listed chronologically, which is how they should be read.
I solved these problems easily by reverting to the British style:
This not only eliminates the hodge-podge, but is also more logical and accurate. All items are separated by commas, commas aren’t displaced by question marks, and the declarative sentence ends with a period instead of a question mark.
4. Why “‘s” matters, or how to avoid ambiguity in possessives.
Most newspapers and magazines follow the convention of forming the possessive of a word ending in “s” by putting an apostrophe after the “s”; for example:
Dallas’ (for Dallas’s)
Texas’ (for Texas’s)
Jesus’ (for Jesus’s)
This may work on a page or screen, but it can cause ambiguity if carried over into speech*. (If I were the kind of writer who condescends to his readers, I would at this point insert the following “trigger warning”: I am about to take liberties with the name of Jesus and the New Testament, about which I will write as if it were a contemporary document. Read no further if you are easily offended.)
What sounds like “Jesus walks on water” could mean just what it sounds like: a statement about a feat of which Jesus is capable or is performing. But if Jesus walks on the water more than once, it could refer to his plural perambulations: “Jesus’ walks on water”**, as it would appear in a newspaper.
The simplest and best way to avoid the ambiguity is to insist on “Jesus’s walks on water”** for the possessive case, and to inculcate the practice of saying it as it reads. How else can the ambiguity be avoided, in the likely event that the foregoing advice will be ignored?
If what is meant is “Jesus walks on water”, one could say “Jesus can [is able to] walk on water” or “Jesus is walking on water”, according to the situation.
If what is meant is that Jesus walks on water more than once, “Jesus’s walks on water” is unambiguous (assuming, of course, that one’s listeners have an inkling about the standard formation of a singular possessive). There’s no need to work around it, as there is in the non-possessive case. But if you insist on avoiding the “‘s” formation, you can write or say “the water-walks of Jesus”.
I now take it to the next level.
What if there’s more than one Jesus who walks on water? Well, if they all can walk on water and the idea is to say so, it’s “the Jesuses walk on water”. And if they all walk on water and the idea is to refer to those outings as the outings of them all, it’s “the water-walks of the Jesuses”.
Why? Because the standard formation of the plural possessive of Jesus is Jesuses’. Jesuses’s would be too hard to say or comprehend. But Jesuses’ sounds the same as Jesuses, and must therefore be avoided in speech, and in writing intended to be read aloud. Thus “the water-walks of the Jesuses” instead of “the Jesuses’ walks on water”, which is ambiguous to a listener.
__________
* A good writer will think about the effect of his writing if it is read aloud.
** “Jesus’ walks on water” and “Jesus’s walks on water” misuse the possessive case, though it’s a standard kind of misuse that is too deeply entrenched to be eradicated. Strictly speaking, Jesus doesn’t own walks on water, he does them. The alternative construction, “the water-walks of Jesus”, is better; “the water-walks by Jesus” is best.
5. Stand fast against political correctness.
As a result of political correctness, some words and phrases have gone out of favor, needlessly. Others are cluttering the language, needlessly. Political correctness manifests itself in euphemisms, verboten words, and what I call gender preciousness.
Euphemisms
These are much-favored by persons of the left, who seem to have an aversion to reality. Thus, for example:
“Crippled” became “handicapped”, which became “disabled” and then “differently-abled” or “mobility-challenged”.
“Stupid” and “slow” became “learning disabled”, which became “special needs” (a euphemistic category that houses more than the stupid).
“Poor” became “underprivileged”, which became “economically disadvantaged”, which became “entitled” (to other people’s money).
“Colored persons” became “Negroes”, who became “blacks”, “Blacks”, and “African-Americans”. They are now often called “persons of color”. That looks like a variant of “colored persons”, but it also refers to a segment of humanity that includes almost everyone but white persons who are descended solely from long-time inhabitants of the British Isles and continental Europe.
How these linguistic contortions have helped the crippled, stupid, poor, and colored is a mystery to me. Tact is admirable, but euphemisms aren’t tactful. They’re insulting because they’re condescending.
Verboten Words
The list of such words is long and growing longer. Words become verboten for the same reason that euphemisms arise: to avoid giving offense, even where offense wouldn’t or shouldn’t be taken.
David Bernstein, writing at TCS Daily several years ago, recounted some tales about political correctness (source no longer available online). This one struck close to home:
One especially merit-less [hostile work environment] claim that led to a six-figure verdict involved Allen Fruge, a white Department of Energy employee based in Texas. Fruge unwittingly spawned a harassment suit when he followed up a southeast Texas training session with a bit of self-deprecating humor. He sent several of his colleagues who had attended the session with him gag certificates anointing each of them as an honorary Coon Ass — usually spelled coonass — a mildly derogatory slang term for a Cajun. The certificate stated that [y]ou are to sing, dance, and tell jokes and eat boudin, cracklins, gumbo, crawfish etouffe and just about anything else. The joke stemmed from the fact that southeast Texas, the training session location, has a large Cajun population, including Fruge himself.
An African American recipient of the certificate, Sherry Reid, chief of the Nuclear and Fossil Branch of the DOE in Washington, D.C., apparently missed the joke and complained to her supervisors that Fruge had called her a coon. Fruge sent Reid a formal (and humble) letter of apology for the inadvertent offense, and explained what Coon Ass actually meant. Reid nevertheless remained convinced that Coon Ass was a racial pejorative, and demanded that Fruge be fired. DOE supervisors declined to fire Fruge, but they did send him to diversity training. They also reminded Reid that the certificate had been meant as a joke, that Fruge had meant no offense, that Coon Ass was slang for Cajun, and that Fruge sent the certificates to people of various races and ethnicities, so he clearly was not targeting African Americans. Reid nevertheless sued the DOE, claiming that she had been subjected to a racial epithet that had created a hostile environment, a situation made worse by the DOE’s failure to fire Fruge.
Reid’s case was seemingly frivolous. The linguistics expert her attorney hired was unable to present evidence that Coon Ass meant anything but Cajun, or that the phrase had racist origins, and Reid presented no evidence that Fruge had any discriminatory intent when he sent the certificate to her. Moreover, even if Coon Ass had been a racial epithet, a single instance of being given a joke certificate, even one containing a racial epithet, by a non-supervisory colleague who works 1,200 miles away does not seem to remotely satisfy the legal requirement that harassment must be severe and pervasive for it to create hostile environment liability. Nevertheless, a federal district court allowed the case to go to trial, and the jury awarded Reid $120,000, plus another $100,000 in attorneys’ fees. The DOE settled the case before its appeal could be heard for a sum very close to the jury award.
I had a similar though less costly experience some years ago, when I was chief financial and administrative officer of a defense think-tank. In the course of discussing the company’s budget during a meeting with employees from across the company, I uttered “niggardly” (meaning stingy or penny-pinching). The next day a fellow vice president informed me that some of the black employees from her division had been offended by “niggardly”. I suggested that she give her employees remedial training in English vocabulary. That should have been the verdict in the Reid case.
Gender Preciousness
It has become fashionable for academicians and pseudo-serious writers to use “she” where “he” long served as the generic (and sexless) reference to a singular third person. Here is an especially grating passage from an article by Oliver Cussen:
What is a historian of ideas to do? A pessimist would say she is faced with two options. She could continue to research the Enlightenment on its own terms, and wait for those who fight over its legacy—who are somehow confident in their definitions of what “it” was—to take notice. Or, as [Jonathan] Israel has done, she could pick a side, and mobilise an immense archive for the cause of liberal modernity or for the cause of its enemies. In other words, she could join Moses Herzog, with his letters that never get read and his questions that never get answered, or she could join Sandor Himmelstein and the loud, ignorant bastards. [“The Trouble with the Enlightenment”, Prospect, May 5, 2013]
I don’t know about you, but I’m distracted by the use of the generic “she”, especially by a male. First, it’s not the norm (or wasn’t the norm until the thought police made it so). Thus my first reaction to reading it in place of “he” is to wonder who this “she” is; whereas the function of “he” as a stand-in for anyone (regardless of gender) was always well understood. Second, the usage is so obviously meant to mark the writer as “sensitive” and “right thinking” that it calls into question his sincerity and objectivity. I offer this in evidence of my position:
The use of the traditional inclusive generic pronoun “he” is a decision of language, not of gender justice. There are only six alternatives. (1) We could use the grammatically misleading and numerically incorrect “they.” But when we say “one baby was healthier than the others because they didn’t drink that milk,” we do not know whether the antecedent of “they” is “one” or “others,” so we don’t know whether to give or take away the milk. Such language codes could be dangerous to baby’s health. (2) Another alternative is the politically intrusive “in-your-face” generic “she,” which I would probably use if I were an angry, politically intrusive, in-your-face woman, but I am not any of those things. (3) Changing “he” to “he or she” refutes itself in such comically clumsy and ugly revisions as the following: “What does it profit a man or woman if he or she gains the whole world but loses his or her own soul? Or what shall a man or woman give in exchange for his or her soul?” The answer is: he or she will give up his or her linguistic sanity. (4) We could also be both intrusive and clumsy by saying “she or he.” (5) Or we could use the neuter “it,” which is both dehumanizing and inaccurate. (6) Or we could combine all the linguistic garbage together and use “she or he or it,” which, abbreviated, would sound like “sh . . . it.” I believe in the equal intelligence and value of women, but not in the intelligence or value of “political correctness,” linguistic ugliness, grammatical inaccuracy, conceptual confusion, or dehumanizing pronouns. [Peter Kreeft, Socratic Logic, 3rd ed., p. 36, n. 1, as quoted by Bill Vallicella, Maverick Philosopher, May 9, 2015]
I could go on about the use of “he or she” in place of “he” or “she”. But it should be enough to call it what it is: verbal clutter. (As for “they”, “them”, and “their” in place of singular pronouns, see “Encounters with Pronouns” at Imlac’s Journal.)
Then there is “man”, which for ages was well understood (in the proper context) as referring to persons in general, not to male persons in particular. (“Mankind” merely adds a superfluous syllable.)
The short, serviceable “man” has been replaced, for the most part, by “humankind”. I am baffled by the need to replace one syllable with three. I am baffled further by the persistence of “man” — a “sexist” term — in the three-syllable substitute. But it gets worse when writers strain to avoid the solo use of “man” by resorting to “human beings” and the “human species”. These are longer than “humankind”, and both retain the accursed “man”.
6. Don’t split infinitives.
Just don’t do it, regardless of the pleadings of descriptivists. Even Follett counsels the splitting of infinitives, when the occasion demands it. I part ways with Follett in this matter, and stand ready to be rebuked for it.
Consider the case of Eugene Volokh, a known grammatical relativist, who scoffs at “to increase dramatically” — as if “to dramatically increase” would be better. The meaning of “to increase dramatically” is clear. The only reason to write “to dramatically increase” would be to avoid the appearance of stuffiness; that is, to pander to the least cultivated of one’s readers.
Seeming unstuffy (i.e., without standards) is neither a necessary nor a sufficient reason to split an infinitive. The rule about not splitting infinitives, like most other grammatical rules, serves the valid and useful purpose of preventing English from sliding further down the slippery slope of incomprehensibility than it already has.
If an unsplit infinitive makes a clause or sentence seem awkward, the clause or sentence should be recast to avoid the awkwardness. Better that than make an exception that leads to further exceptions — and thence to Babel.
Fowler’s Modern English Usage counsels splitting an infinitive where recasting doesn’t seem to work:
We admit that separation of to from its infinitive is not in itself desirable, and we shall not gratuitously say either ‘to mortally wound’ or ‘to mortally be wounded’…. We maintain, however, that a real [split infinitive], though not desirable in itself, is preferable to either of two things, to real ambiguity, and to patent artificiality…. We will split infinitives sooner than be ambiguous or artificial; more than that, we will freely admit that sufficient recasting will get rid of any [split infinitive] without involving either of those faults, and yet reserve to ourselves the right of deciding in each case whether recasting is worth while. Let us take an example: ‘In these circumstances, the Commission … has been feeling its way to modifications intended to better equip successful candidates for careers in India and at the same time to meet reasonable Indian demands.’… What then of recasting? ‘intended to make successful candidates fitter for’ is the best we can do if the exact sense is to be kept… [P. 581]
Good try, but not good enough. This would do: “In these circumstances, the Commission … has been considering modifications that would better equip successful candidates for careers in India and at the same time meet reasonable Indian demands.”
Enough said? I think so.
7. It’s all right to begin a sentence with “And” or “But” — in moderation.
It has been a very long time since a respected grammarian railed against the use of “And” or “But” at the start of a sentence. But if you have been warned against such usage, ignore the warning and heed Follett:
A prejudice lingers from the days of schoolmarmish rhetoric that a sentence should not begin with and. The supposed rule is without foundation in grammar, logic, or art. And can join separate sentences and their meanings just as well as but can both join sentences and disjoin meanings. The false rule used to apply to but equally; it is now happily forgotten. What has in fact happened is that the traditionally acceptable but after a semicolon has been replaced by the same but after a period. Let us do the same thing with and, taking care, of course, not to write long strings of sentences each headed by And or by But.
8. There’s no need to end a sentence with a preposition.
Garner says this:
The spurious rule about not ending sentences with prepositions is a remnant of Latin grammar, in which a preposition was the one word that a writer could not end a sentence with….
The idea that a preposition is ungrammatical at the end of a sentence is often attributed to 18th-century grammarians. But that idea is greatly overstated. Bishop Robert Lowth, the most prominent 18th-century grammarian, wrote that the final preposition “is an idiom, which our language is strongly inclined to: it prevails in common conversation, and suits very well with the familiar style in writing.”…
Perfectly natural-sounding sentences end with prepositions, particularly when a verb with a preposition-particle appears at the end (as in follow up or ask for). E.g.: “The act had no causal connection with the injury complained of.”
Garner goes on to warn against “such … constructions as of which, on which, and for which” that are sometimes used to avoid the use of a preposition at the end of a sentence. He argues that
“This is a point on which I must insist” becomes far more natural as “This is a point that I must insist on.”
Better yet: “I must insist on the point.”
Avoiding the sentence-ending preposition really isn’t difficult (as I just showed), unnatural, or “bad”. Benjamin Dreyer, in “Three Writing Rules to Disregard”, acknowledges as much:
Ending a sentence with a preposition (as, at, by, for, from, of, etc.) isn’t always such a hot idea, mostly because a sentence should, when it can, aim for a powerful finale and not simply dribble off like an old man’s unhappy micturition. A sentence that meanders its way to a prepositional finish is often, I find, weaker than it ought to or could be.
What did you do that for?
is passable, but
Why did you do that?
has some snap to it.
Exactly.
Dreyer tries to rescue the sentence-ending preposition by adding this:
But to tie a sentence into a strangling knot to avoid a prepositional conclusion is unhelpful and unnatural, and it’s something no good writer should attempt and no eager reader should have to contend with.
He should have followed his own advice, and written this:
But to tie a sentence into a strangling knot to avoid a prepositional conclusion is unhelpful and unnatural. It’s something that no good writer should attempt, nor foist upon the eager reader.
See? No preposition at the end, and a punchier paragraph (especially with the elimination of Dreyer’s run-on sentence).
I remain convinced that the dribbly, sentence-ending preposition is easily avoided. And, by avoiding it, the writer or speaker conveys his meaning more clearly and forcefully.
I am repeating the introduction for those readers who haven’t seen part I, which is here. Parts II and IV are here and here.
This series is aimed at writers of non-fiction works, but writers of fiction may also find it helpful. There are four parts:
I. Some Writers to Heed and Emulate
A. The Essentials: Lucidity, Simplicity, Euphony
B. Writing Clearly about a Difficult Subject
C. Advice from an American Master
D. Also Worth a Look
II. Step by Step
A. The First Draft
1. Decide — before you begin to write — on your main point and your purpose for making it.
2. Avoid wandering from your main point and purpose; use an outline.
3. Start by writing an introductory paragraph that summarizes your “story line”.
4. Lay out a straight path for the reader.
5. Know your audience, and write for it.
6. Facts are your friends — unless you’re trying to sell a lie, of course.
7. Momentum is your best friend.
B. From First Draft to Final Version
1. Your first draft is only that — a draft.
2. Where to begin? Stand back and look at the big picture.
3. Nit-picking is important.
4. Critics are necessary, even if not mandatory.
5. Accept criticism gratefully and graciously.
6. What if you’re an independent writer and have no one to turn to?
7. How many times should you revise your work before it’s published?
III. Reference Works
A. The Elements of Style
B. Eats, Shoots & Leaves
C. Follett’s Modern American Usage
D. Garner’s Modern American Usage
E. A Manual of Style and More
IV. Notes about Grammar and Usage
A. Stasis, Progress, Regress, and Language
B. Illegitimi Non Carborundum Lingo
1. Eliminate filler words.
2. Don’t abuse words.
3. Punctuate properly.
4. Why ‘s matters, or how to avoid ambiguity in possessives.
5. Stand fast against political correctness.
6. Don’t split infinitives.
7. It’s all right to begin a sentence with “And” or “But” — in moderation.
8. There’s no need to end a sentence with a preposition.
Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.
III. REFERENCE WORKS
A. The Elements of Style
If you could have only one book to help you write better, it would be The Elements of Style. Admittedly, Strunk & White, as the book is also known, has a vociferous critic, one Geoffrey K. Pullum. But Pullum documents only one substantive flaw: an apparent mischaracterization of what constitutes the passive voice. What Pullum doesn’t say is that the book correctly flays the kind of writing that it calls passive (correctly or not). Further, Pullum derides the book’s many banal headings, while ignoring what follows them: sound advice, backed by concrete examples. (There’s a nice rebuttal of Pullum here.) It’s evident that the book’s real sin — in Pullum’s view — is “bossiness” (prescriptivism), which is no sin at all, as I’ll explain in part IV.
There are so many good writing tips in Strunk & White that it was hard for me to choose a sample. I randomly chose “Omit Needless Words” (one of the headings derided by Pullum), which opens with a statement of principles:
Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all of his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell. [P. 23]
That would be empty rhetoric, were it not followed by further discussion and 17 specific examples. Here are a few:
the question as to whether should be replaced by whether or the question whether
the reason why is that should be replaced by because
I was unaware of the fact that should be replaced by I was unaware that or I did not know that
His brother, who is a member of the same firm should be replaced by His brother, a member of the same firm [P. 24]
There’s much more than that to Strunk & White, of course. (Go here to see the table of contents.) You’ll become a better writer — perhaps an excellent one — if you carefully read Strunk & White, re-read it occasionally, and apply the principles that it espouses and illustrates.
B. Eats, Shoots & Leaves
Who would have thought a book about punctuation could cause such a sensation? Certainly not its modest if indignant author, who began her surprise hit motivated by “horror” and “despair” at the current state of British usage: ungrammatical signs (“BOB,S PETS”), headlines (“DEAD SONS PHOTOS MAY BE RELEASED”) and band names (“Hear’Say”) drove journalist and novelist Truss absolutely batty. But this spirited and wittily instructional little volume, which was a U.K. #1 bestseller, is not a grammar book, Truss insists; like a self-help volume, it “gives you permission to love punctuation.” Her approach falls between the descriptive and prescriptive schools of grammar study, but is closer, perhaps, to the latter. (A self-professed “stickler,” Truss recommends that anyone putting an apostrophe in a possessive “its”-as in “the dog chewed it’s bone”-should be struck by lightning and chopped to bits.) Employing a chatty tone that ranges from pleasant rant to gentle lecture to bemused dismay, Truss dissects common errors that grammar mavens have long deplored (often, as she readily points out, in isolation) and makes elegant arguments for increased attention to punctuation correctness: “without it there is no reliable way of communicating meaning.” Interspersing her lessons with bits of history (the apostrophe dates from the 16th century; the first semicolon appeared in 1494) and plenty of wit, Truss serves up a delightful, unabashedly strict and sometimes snobby little book, with cheery Britishisms (“Lawks-a-mussy!”) dotting pages that express a more international righteous indignation.
C. Follett’s Modern American Usage
Next up is Wilson Follett’s Modern American Usage: A Guide. The link points to a newer edition than the one that I’ve relied on for about 50 years. Reviews of the newer edition, edited by one Erik Wensberg, are mixed but generally favorable. However, the newer edition seems to lack Follett’s “Introductory”, which is divided into “Usage, Purism, and Pedantry” and “The Need of an Orderly Mind”. If that is so, the newer edition is likely to be more compromising toward language relativists like Geoffrey Pullum. The following quotations from Follett’s “Introductory” (one from each section) will give you an idea of Follett’s stand on relativism:
[F]atalism about language cannot be the philosophy of those who care about language; it is the illogical philosophy of their opponents. Surely the notion that, because usage is ultimately what everybody does to words, nobody can or should do anything about them is self-contradictory. Somebody, by definition, does something, and this something is best done by those with convictions and a stake in the outcome, whether the stake of private pleasure or of professional duty or both does not matter. Resistance always begins with individuals. [Pp. 12-3]
* * *
A great deal of our language is so automatic that even the thoughtful never think about it, and this mere not-thinking is the gate through which solecisms or inferior locutions slip in. Some part, greater or smaller, of every thousand words is inevitably parroted, even by the least parrotlike. [P. 14]
(A reprint of the original edition is available here.)
D. Garner’s Modern American Usage
I also like Garner’s Modern American Usage, by Bryan A. Garner. Though Garner doesn’t write as elegantly as Follett, he is just as tenacious and convincing as Follett in defense of prescriptivism. And Garner’s book far surpasses Follett’s in scope and detail; it’s twice the length, and the larger pages are set in smaller type.
E. A Manual of Style and More
I have one more book to recommend: The Chicago Manual of Style. Though the book is a must-have for editors, serious writers should also own a copy and consult it often. If you’re unfamiliar with the book, you can get an idea of its vast range and depth of coverage by following the preceding link, clicking on “Look inside”, and perusing the table of contents, first pages, and index.
Here is an oft-quoted observation, spuriously attributed to Socrates, about youth:
The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize their teachers.
Even though Socrates didn’t say it, the sentiment has nevertheless been stated and restated since 1907, when the observation was concocted, and probably had been shared widely for decades, and even centuries, before that. I use a form of it when I discuss the spoiled children of capitalism (e.g., here).
Is there something to it? No and yes.
No, because rebelliousness and disrespect for elders and old ways seem to be part of the natural processes of physical and mental maturation.
Not all adolescents and young adults are rebellious and disrespectful. But many rebellious and disrespectful adolescents and young adults carry their attitudes with them through life, even if less obviously than in youth, as they climb the ladders of various callings. The callings that seem to be most attractive to the rebellious are the arts (especially the written, visual, thespian, terpsichorean, musical, and cinematic ones), the professoriate, the punditocracy, journalism, and politics.
Which brings me to the yes answer, and to the spoiled children of capitalism. Rebelliousness, though in some persons never entirely outgrown or suppressed by maturity, will more often be outgrown or suppressed in economically tenuous conditions, the challenges of which almost fully occupy body and mind. (Opinionizers and sophists were accordingly much thinner on the ground in the parlous days of yore.)
However, as economic growth and concomitant technological advances have yielded abundance far beyond the necessities of life for most inhabitants of the Western world, the beneficiaries of that abundance have acquired yet another luxury: the luxury of learning about and believing in systems that, in the abstract, seem to offer vast improvements on current conditions. It is the old adage “Idle hands are the devil’s tools” brought up to date, with “minds” joining “hands” in the devilishness.
Among many bad things that result from such foolishness (e.g., the ascendancy of ideologies that crush liberty and, ironically, economic growth) is the loss of social cohesion. I was reminded of this by Noah Smith’s fatuous article, “The 1950s Are Greatly Overrated”.
Smith is an economist who blogs and writes an opinion column for Bloomberg News. My impression of him is that he is a younger version of Paul Krugman, the former economist who has become a leftist whiner. The difference between them is that Krugman remembers the 1950s fondly, whereas Smith does not.
I once said this about Krugman’s nostalgia for the 1950s, a decade during which he was a mere child:
[The nostalgia] is probably rooted in golden memories of his childhood in a prosperous community, though he retrospectively supplies an economic justification. The 1950s were (according to him) an age of middle-class dominance before the return of the Robber Barons who had been vanquished by the New Deal. This is zero-sum economics and class warfare on steroids — standard Krugman fare.
Smith, a mere toddler relative to Krugman and a babe in arms relative to me, takes a dim view of the 1950s:
For all the rose-tinted sentimentality, standards of living were markedly lower in the ’50s than they are today, and the system was riddled with vast injustice and inequality.
Women and minorities are less likely to have a wistful view of the ’50s, and with good reason. Segregation was enshrined in law in much of the U.S., and de facto segregation was in force even in Northern cities. Black Americans, crowded into ghettos, were excluded from economic opportunity by pervasive racism, and suffered horrendously. Even at the end of the decade, more than half of black Americans lived below the poverty line:
Women, meanwhile, were forced into a narrow set of occupations, and few had the option of pursuing fulfilling careers. This did not mean, however, that a single male breadwinner was always able to provide for an entire family. About a third of women worked in the ’50s, showing that many families needed a second income even if it defied the gender roles of the day:
For women who didn’t work, keeping house was no picnic. Dishwashers were almost unheard of in the 1950s, few families had a clothes dryer, and fewer than half had a washing machine.
But even beyond the pervasive racism and sexism, the 1950s wasn’t a time of ease and plenty compared to the present day. For example, by the end of the decade, even after all of that robust 1950s growth, the white poverty rate was still 18.1%, more than double that of the mid-1970s:
Nor did those above the poverty line enjoy the material plenty of later decades. Much of the nation’s housing stock in the era was small and cramped. The average floor area of a new single-family home in 1950 was only 983 square feet, just a bit bigger than the average one-bedroom apartment today.
To make matters worse, households were considerably larger in the ’50s, meaning that big families often had to squeeze into those tight living spaces. Those houses also lacked many of the things that make modern homes comfortable and convenient — not just dishwashers and clothes dryers, but air conditioning, color TVs and in many cases washing machines.
And those who did work had to work significantly more hours per year. Those jobs were often difficult and dangerous. The Occupational Safety and Health Administration wasn’t created until 1971. As recently as 1970, the rate of workplace injury was several times higher than now, and that number was undoubtedly even higher in the ’50s. Pining for those good old factory jobs is common among those who have never had to stand next to a blast furnace or work on an unautomated assembly line for eight hours a day.
Outside of work, the environment was in much worse shape than today. There was no Environmental Protection Agency, no Clean Air Act or Clean Water Act, and pollution of both air and water was horrible. The smog in Pittsburgh in the 1950s blotted out the sun. In 1952 the Cuyahoga River in Cleveland caught fire. Life expectancy at the end of the ’50s was only 70 years, compared to more than 78 today.
So life in the 1950s, though much better than what came before, wasn’t comparable to what Americans enjoyed even two decades later. In that space of time, much changed because of regulations and policies that reduced or outlawed racial and gender discrimination, while a host of government programs lowered poverty rates and cleaned up the environment.
But on top of these policy changes, the nation benefited from rapid economic growth both in the 1950s and in the decades after. Improved production techniques and the invention of new consumer products meant that there was much more wealth to go around by the 1970s than in the 1950s. Strong unions and government programs helped spread that wealth, but growth is what created it.
So the 1950s don’t deserve much of the nostalgia they receive. Though the decade has some lessons for how to make the U.S. economy more equal today with stronger unions and better financial regulation, it wasn’t an era of great equality overall. And though it was a time of huge progress and hope, the point of progress and hope is that things get better later. And by most objective measures they are much better now than they were then.
See? A junior Krugman who sees the same decade as a glass half-empty instead of half-full.
In the end, Smith admits the irrelevance of his irreverence for the 1950s when he says that “the point of progress and hope is that things get better later.” In other words, if there is progress the past will always look inferior to the present. (And, by the same token, the present will always look inferior to the future when it becomes the present.)
I could quibble with some of Smith’s particulars (e.g., racism may be less overt than it was in the 1950s, but it still boils beneath the surface, and isn’t confined to white racism; stronger unions and stifling financial regulations hamper economic growth, which Smith prizes so dearly). But I will instead take issue with his assertion, which precedes the passages quoted above, that “few of those who long for a return to the 1950s would actually want to live in those times.”
It’s not that anyone yearns for a return to the 1950s as it was in all respects, but for a return to the 1950s as it was in some crucial ways:
There is … something to the idea that the years between the end of World War II and the early 1960s were something of a Golden Age…. But it was that way for reasons other than those offered by Krugman [and despite Smith’s demurrer].
Civil society still flourished through churches, clubs, civic associations, bowling leagues, softball teams and many other voluntary organizations that (a) bound people and (b) promulgated and enforced social norms.
Those norms proscribed behavior considered harmful — not just criminal, but harmful to the social fabric (e.g., divorce, unwed motherhood, public cursing and sexuality, overt homosexuality). The norms also prescribed behavior that signaled allegiance to the institutions of civil society (e.g., church attendance, veterans’ organizations), thereby helping to preserve them and the values that they fostered.
Yes, it was an age of “conformity”, as sneering sophisticates like to say, even as they insist on conformity to reigning leftist dogmas that are destructive of the social fabric. But it was also an age of widespread mutual trust, respect, and forbearance.
Those traits, as I have said many times (e.g., here), are the foundations of liberty, which is a modus vivendi, not a mystical essence. The modus vivendi that arises from the foundations is peaceful, willing coexistence and its concomitant: beneficially cooperative behavior — liberty, in other words.
The decade and a half after the end of World War II wasn’t an ideal world of utopian imagining. But it approached a realizable ideal. That ideal — for the nation as a whole — has been put beyond reach by the vast, left-wing conspiracy that has subverted almost every aspect of life in America.
What happened was the 1960s — and its long aftermath — which saw the rise of capitalism’s spoiled children (of all ages), who have spat on and shredded the very social norms that in the 1940s and 1950s made the United States of America as united as they ever would be. Actual enemies of the nation — communists — were vilified and ostracized, and that’s as it should have been. And people weren’t banned and condemned by “friends”, “followers”, Facebook, Twitter, etc. etc., for the views that they held. Not even on college campuses, on radio and TV shows, in the print media, or in Hollywood movies.
What do the spoiled children have to show for their rejection of social norms — other than economic progress that is actually far less robust than it would have been were it not for the interventions of their religion-substitute, the omnipotent central government? Well, omnipotent at home and impotent (or drastically weakened) abroad, thanks to rounds of defense cuts and perpetual hand-wringing about what the “world” might think, or about what some militarily inferior opponent might do, if the U.S. government were to defend Americans and protect their interests abroad.
The list of the spoiled children’s “accomplishments” is impossibly long to recite here, so I will simply offer a very small sample of things that come readily to mind:
California wildfires caused by misguided environmentalism.
The excremental wasteland that is San Francisco. (And Blue cities, generally.)
Flight from California wildfires, high taxes, excremental streets, and anti-business environment.
The killing of small businesses, especially restaurants, by imbecilic Blue-State minimum wage laws.
The killing of businesses, period, by oppressive Blue-State regulations.
The killing of jobs for people who need them the most, by ditto and ditto.
Bloated pension schemes for Blue-State (and city) employees, which are bankrupting those States (and cities) and penalizing their citizens who aren’t government employees.
The hysteria (and even punishment) that follows from drawing a gun or admitting gun ownership.
The idea that men can become women and should be allowed to compete with women in athletic competitions because the men in question have endured some surgery and taken some drugs.
The idea that it doesn’t and shouldn’t matter to anyone that a self-identified “woman” uses women’s rest-rooms where real women and girls become prey for prying eyes and worse.
Mass murder on a Hitlerian-Stalinist scale in the name of a “woman’s right to choose”, when she made that choice (in almost every case) by engaging in consensual sex.
Disrespect for the police and military personnel who keep them safe in their cosseted existences.
Applause for attacks on the same.
Applause for America’s enemies, which the delusional, spoiled children won’t recognize as their enemies until it’s too late.
Longing for impossible utopias (e.g., “true” socialism) because they promise what is actually impossible in the real world — and result in actual dystopias (e.g., the USSR, Cuba, Britain’s National Health Service).
Noah Smith is far too young to remember an America in which such things were almost unthinkable — rather than routine. People then didn’t have any idea how prosperous they would become, or how morally bankrupt and divided.
Every line of human endeavor reaches a peak, from which decline is sure to follow if the things that caused it to peak are mindlessly rejected for the sake of novelty (i.e., rejection of old norms just because they are old). This is nowhere more obvious than in the arts.
It should be equally obvious to anyone who takes an objective look at the present state of American society and is capable of comparing it with American society of the 1940s and 1950s. For all of its faults it was a golden age. Unfortunately, most Americans now living (Noah Smith definitely included) are too young and too fixated on material things to understand what has been lost — irretrievably, I fear.
[T]he greatest danger to classical liberalism is the sharp left turn of the Democratic Party. This has been the greatest ideological change of any party since at least the Goldwater revolution in the Republican Party more than half a century ago….
It is certainly possible that such candidates [as Bernie Sanders, Elizabeth Warren, and Pete Buttigieg] will lose to Joe Biden or that they will not win against Trump. But they are transforming the Democratic Party just as Goldwater did the Republican Party. And the Democratic Party will win the presidency at some time in the future. Recessions and voter fatigue guarantee rotation of parties in office….
Old ideas of individual liberty are under threat in the culture as well. On the left, identity politics continues its relentless rise, particularly on university campuses. For instance, history departments, like that at my own university, hire almost exclusively those who promise to impose a gender, race, or colonial perspective on the past. The history that our students hear will be one focused on the West’s oppression of the rest rather than the reality that its creation of the institutions of free markets and free thought has brought billions of people out of poverty and tyranny that was their lot before….
And perhaps most worrying of all, both the political and cultural move to the left has come about when times are good. Previously, pressure on classical liberalism most often occurred when times were bad. The global trend to more centralized forms of government and indeed totalitarian ones in Europe occurred in the 1920s and 1930s in the midst of a global depression. The turbulent 1960s with its celebration of social disorder came during a period of hard economic times. Moreover, in the United States, young men feared they might be killed in faraway land for little purpose.
But today the economy is good, the best it has been in at least a decade. Unemployment is at a historical low. Wages are up along with the stock market. No Americans are dying in a major war. And yet both here and abroad parties that want to fundamentally shackle the market economy are gaining more adherents. If classical liberalism seems embattled now, its prospects are likely far worse in the next economic downturn or crisis of national security.
McGinnis is wrong about the 1960s being “a period of hard economic times” — in America, at least. The business cycle that began in 1960 and ended in 1970 produced the second-highest rate of growth in real GDP since the end of World War II. (The 1949-1954 cycle produced the highest rate of growth.)
But in being wrong about that non-trivial fact, McGinnis inadvertently points to the reason that “the political and cultural move to the left has come about when times are good”. The reason is symbolized by main cause of social disorder in the 1960s (and into the early 1970s), namely, that “young men feared they might be killed in faraway land for little purpose”.
The craven behavior of supposedly responsible adults like LBJ, Walter Cronkite, Clark Kerr, and many other well-known political, media, educational, and cultural leaders — who allowed themselves to be bullied by essentially selfish protests against the Vietnam War — revealed the greatest failing of the so-called greatest generation: a widespread failure to inculcate personal responsibility in their children. The same craven behavior legitimated the now-dominant tool of political manipulation: massive, boisterous, emotion-laden appeals for this, that, and the other privilege du jour — appeals that left-wing politicians encourage and often lead; appeals that nominal conservatives often accede to rather than seem “mean”.
The rot set in after World War II. The Taylorist techniques of industrial production put in place to win the war generated, after it was won, an explosion of prosperity that provided every literate American the opportunity for a good-paying job and entry into the middle class. Young couples who had grown up during the Depression, suddenly flush (compared to their parents), were determined that their kids would never know similar hardships.
As a result, the Baby Boomers turned into a bunch of spoiled slackers, no longer turned out to earn a living at 16, no longer satisfied with just a high school education, and ready to sell their votes to a political class who had access to a cornucopia of tax dollars and no doubt at all about how they wanted to spend it. And, sadly, they passed their principles, if one may use the term so loosely, down the generations to the point where young people today are scarcely worth using for fertilizer.
In 1919, or 1929, or especially 1939, the adolescents of 1969 would have had neither the leisure nor the money to create the Woodstock Nation. But mommy and daddy shelled out because they didn’t want their little darlings to be caught short, and consequently their little darlings became the worthless whiners who voted for people like Bill Clinton and Barack Obama [and who include Bill Clinton and Barack Obama], with results as you see them. Now that history is catching up to them, a third generation of losers can think of nothing better to do than camp out on Wall Street in hopes that the Cargo will suddenly begin to arrive again.
Good luck with that.
[From “The Spoiled Children of Capitalism”, posted in October 2011 at Dyspepsia Generation but no longer available there.]
I have long shared that assessment of the Boomer generation, and subscribe to the view that the rot set in after World War II, and became rampant after 1963, when the post-World War II children of the “greatest generation” came of age.
[W]hen leftist social scientists actually talk to and observe the poor, they confirm the stereotypes of the harshest Victorian. Poverty isn’t about money; it’s a state of mind. That state of mind is low conscientiousness.
Most social scientists who study poor families assume financial troubles are the cause of these breakups [between cohabitating parents]… Lack of money is certainly a contributing cause, as we will see, but rarely the only factor. It is usually the young father’s criminal behavior, the spells of incarceration that so often follow, a pattern of intimate violence, his chronic infidelity, and an inability to leave drugs and alcohol alone that cause relationships to falter and die.
Furthermore:
Conflicts over money do not usually erupt simply because the man cannot find a job or because he doesn’t earn as much as someone with better skills or education. Money usually becomes an issue because he seems unwilling to keep at a job for any length of time, usually because of issues related to respect. Some of the jobs he can get don’t pay enough to give him the self-respect he feels he needs, and others require him to get along with unpleasant customers and coworkers, and to maintain a submissive attitude toward the boss.
These passages focus on low male conscientiousness, but the rest of the book shows it’s a two-way street. And even when Edin and Kefalas are talking about men, low female conscientiousness is implicit. After all, conscientious women wouldn’t associate with habitually unemployed men in the first place – not to mention alcoholics, addicts, or criminals.
Low conscientiousness was the bane of those Boomers who, in the 1960s and 1970s, chose to “drop out” and “do drugs”. It will be the bane of the Gen Yers and Zers who do the same thing. But, as usual, “society” will be expected to pick up the tab, with food stamps, subsidized housing, drug rehab programs, Medicaid, and so on.
Before the onset of America’s welfare state in the 1930s, there were two ways to survive: work hard or accept whatever charity came your way. And there was only one way for most persons to thrive: work hard. That all changed after World War II, when power-lusting politicians sold an all-too-willing-to-believe electorate a false and dangerous bill of goods, namely, that government is the source of prosperity — secular salvation. It is not, and never has been.
McGinnis is certainly right about the decline of classical liberalism and probably right about the rise of leftism. But why is he right? Leftism will continue to ascend as long as the children of capitalism are spoiled. Classical liberalism will continue to wither because it has no moral center. There is no there there to combat the allure of “free stuff”.
Scott Yenor, writing in “The Problem with the ‘Simple Principle’ of Liberty”, makes a point about J.S. Mill’s harm principle — the heart of classical liberalism — that I have made many times. Yenor begins by quoting the principle:
The sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. . . . The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. . . .The only part of the conduct of anyone, for which he is amenable to society, is that which concerns others. In the part that merely concerns himself, his independence is, of right, absolute. Over himself, over his own body and mind, the individual is sovereign.
This is the foundational principle of classical liberalism, and it is deeply flawed, as Yenor argues (successfully, in my view). He ends with this:
[T]he simple principle of [individual] liberty undermines community and compromises character by compromising the family. As common identity and the family are necessary for the survival of liberal society—or any society—I cannot believe that modes of thinking based on the “simple principle” alone suffice for a governing philosophy. The principle works when a country has a moral people, but it doesn’t make a moral people.
Conservatism, by contrast, is anchored in moral principles, which are reflected in deep-seated social norms, at the core of which are religious norms — a bulwark of liberty. But principled conservatism (as opposed to the attitudinal kind) isn’t a big seller in this age of noise:
I mean sound, light, and motion — usually in combination. There are pockets of serenity to be sure, but the amorphous majority wallows in noise: in homes with blaring TVs; in stores, bars, clubs, and restaurants with blaring music, TVs, and light displays; in movies (which seem to be dominated by explosive computer graphics), in sports arenas (from Olympic and major-league venues down to minor-league venues, universities, and schools); and on and on….
The prevalence of noise is telling evidence of the role of mass media in cultural change. Where culture is “thin” (the vestiges of the past have worn away) it is susceptible of outside influence…. Thus the ease with which huge swaths of the amorphous majority were seduced, not just by noise but by leftist propaganda. The seduction was aided greatly by the parallel, taxpayer-funded efforts of public-school “educators” and the professoriate….
Thus did the amorphous majority bifurcate. (I locate the beginning of the bifurcation in the 1960s.) Those who haven’t been seduced by leftist propaganda have instead become resistant to it. This resistance to nanny-statism — the real resistance in America — seems to be anchored by members of that rapidly dwindling lot: adherents and practitioners of religion, especially between the two Left Coasts.
That they are also adherents of traditional social norms (e.g., marriage can only be between a man and a woman), upholders of the Second Amendment, and (largely) “blue collar” makes them a target of sneering (e.g., Barack Obama called them “bitter clingers”; Hillary Clinton called them “deplorables”)….
[But as long] as a sizeable portion of the populace remains attached to traditional norms — mainly including religion — there will be a movement in search of and in need of a leader [after Trump]. But the movement will lose potency if such a leader fails to emerge.
Were that to happen, something like the old, amorphous society might re-form, but along lines that the remnant of the old, amorphous society wouldn’t recognize. In a reprise of the Third Reich, the freedoms of association, speech, and religion would have been bulldozed with such force that only the hardiest of souls would resist going over to the dark side. And their resistance would have to be covert.
Paradoxically, 1984 may lie in the not-too-distant future, not 36 years in the past. When the nation is ruled by one party (guess which one), foot-voting will no longer be possible and the nation will settle into a darker version of the Californian dystopia.
It is quite possible that — despite current disenchantment with Democrats and given the fickleness of the electorate’s “squishy center” — the election of 2024 will bring about the end of the great experiment in liberty that began in 1776. And with that end, the traces of classical liberalism will all but vanish, along with liberty. Unless something catastrophic shakes the spoiled children of capitalism so hard that their belief in salvation through statism is destroyed. Not just destroyed, but replaced by a true sense of fellowship with other Americans (including “bitter clingers” and “deplorables”) — not the ersatz fellowship with convenient objects of condescension that elicits virtue-signaling political correctness.
I am repeating the introduction for those readers who may not have seen part I, which is here. Parts III and IV are here and here.
This series is aimed at writers of non-fiction works, but writers of fiction may also find it helpful. There are four parts:
I. Some Writers to Heed and Emulate
A. The Essentials: Lucidity, Simplicity, Euphony
B. Writing Clearly about a Difficult Subject
C. Advice from an American Master
D. Also Worth a Look
II. Step by Step
A. The First Draft
1. Decide — before you begin to write — on your main point and your purpose for making it.
2. Avoid wandering from your main point and purpose; use an outline.
3. Start by writing an introductory paragraph that summarizes your “story line”.
4. Lay out a straight path for the reader.
5. Know your audience, and write for it.
6. Facts are your friends — unless you’re trying to sell a lie, of course.
7. Momentum is your best friend.
B. From First Draft to Final Version
1. Your first draft is only that — a draft.
2. Where to begin? Stand back and look at the big picture.
3. Nit-picking is important.
4. Critics are necessary, even if not mandatory.
5. Accept criticism gratefully and graciously.
6. What if you’re an independent writer and have no one to turn to?
7. How many times should you revise your work before it’s published?
III. Reference Works
A. The Elements of Style
B. Eats, Shoots & Leaves
C. Follett’s Modern American Usage
D. Garner’s Modern American Usage
E. A Manual of Style and More
IV. Notes about Grammar and Usage
A. Stasis, Progress, Regress, and Language
B. Illegitimi Non Carborundum Lingo
1. Eliminate filler words.
2. Don’t abuse words.
3. Punctuate properly.
4. Why ‘s matters, or how to avoid ambiguity in possessives.
5. Stand fast against political correctness.
6. Don’t split infinitives.
7. It’s all right to begin a sentence with “And” or “But” — in moderation.
8. There’s no need to end a sentence with a preposition.
Some readers may conclude that I prefer stodginess to liveliness. That’s not true, as any discerning reader of this blog will know. I love new words and new ways of using words, and I try to engage readers while informing and persuading them. But I do those things within the expansive boundaries of prescriptive grammar and usage. Those boundaries will change with time, as they have in the past. But they should change only when change serves understanding, not when it serves the whims of illiterates and language anarchists.
II. STEP BY STEP
A. The First Draft
1. Decide — before you begin to write — on your main point and your purpose for making it.
Can you state your main point in a sentence? If you can’t, you’re not ready to write about whatever it is that’s on your mind.
Your purpose for writing about a particular subject may be descriptive, explanatory, or persuasive. An economist may, for example, begin an article by describing the state of the economy, as measured by Gross Domestic Product (GDP). He may then explain that the rate of growth in GDP has receded since the end of World War II, because of greater government spending and the cumulative effect of regulatory activity. He is then poised to make a case for less spending and for the cancellation of regulations that impede economic growth.
2. Avoid wandering from your main point and purpose; use an outline.
You can get by with a bare outline, unless you’re writing a book, a manual, or a long article. Fill the outline as you go. Change the outline if you see that you’ve omitted a step or put some steps in the wrong order. But always work to an outline, however sketchy and malleable it may be. (The outline may be a mental one if you are deeply knowledgeable about the material you’re working with.)
3. Start by writing an introductory paragraph that summarizes your “story line”.
The introductory paragraph in a news story is known as “the lead” or “the lede” (a spelling that’s meant to convey the correct pronunciation). A classic lead gives the reader the who, what, why, when, where, and how of the story. As noted in Wikipedia, leads aren’t just for journalists:
Leads in essays summarize the outline of the argument and conclusion that follows in the main body of the essay. Encyclopedia leads tend to define the subject matter as well as emphasize the interesting points of the article. Features and general articles in magazines tend to be somewhere between journalistic and encyclopedian in style and often lack a distinct lead paragraph entirely. Leads or introductions in books vary enormously in length, intent and content.
Think of the lead as a target toward which you aim your writing. You should begin your first draft with a lead, even if you later decide to eliminate, prune, or expand it.
4. Lay out a straight path for the reader.
You needn’t fill your outline sequentially, but the outline should trace a linear progression from statement of purpose to conclusion or call for action. Flashbacks and detours can be effective literary devices in the hands of a skilled writer of fiction. But you’re not writing fiction, let alone mystery fiction. So just proceed in a straight line, from beginning to end.
Quips, asides, and anecdotes should be used sparingly, and only if they reinforce your message and don’t distract the reader’s attention from it.
5. Know your audience, and write for it.
I aim at readers who can grasp complex concepts and detailed arguments. But if you’re writing something like a policy manual for employees at all levels of your company, you’ll want to keep it simple and well-marked: short words, short sentences, short paragraphs, numbered sections and sub-sections, and so on.
6. Facts are your friends — unless you’re trying to sell a lie, of course.
Unsupported generalities will defeat your purpose, unless you’re writing for a gullible, uneducated audience. Give concrete examples and cite authoritative references. If your work is technical, show your data and calculations, even if you must put the details in footnotes or appendices to avoid interrupting the flow of your argument. Supplement your words with tables and graphs, if possible, but make them as simple as you can without distorting the underlying facts.
7. Momentum is your best friend.
Write a first draft quickly, even if you must leave holes to be filled later. I’ve always found it easier to polish a rough draft that spans the entire outline than to work from a well-honed but unaccompanied introductory section.
B. From First Draft to Final Version
1. Your first draft is only that — a draft.
Unless you’re a prodigy, you’ll have to do some polishing (probably a lot) before you have something that a reader can follow with ease.
2. Where to begin? Stand back and look at the big picture.
Is your “story line” clear? Are your points logically connected? Have you omitted key steps or important facts? If you find problems, fix them before you start nit-picking your grammar, syntax, and usage.
3. Nit-picking is important.
Errors of grammar, syntax, and usage can (and probably will) undermine your credibility. Thus, for example, subject and verb must agree (“he says” not “he say”); number must be handled correctly (“there are two” not “there is two”); tense must make sense (“the shirt shrank” not “the shirt shrunk”); usage must be correct (“its” is the possessive pronoun, “it’s” is the contraction for “it is”).
4. Critics are necessary, even if not mandatory.
Unless you’re a skilled writer and objective self-critic, you should ask someone to review your work before you publish it or submit it for publication. If your work must be reviewed by a boss or an editor, count yourself lucky. Your boss is responsible for the quality of your work; he therefore has a good reason to make it better (unless he’s a jerk or psychopath). If your editor isn’t qualified to do substantive editing, he can at least correct your syntax, grammar, and usage.
5. Accept criticism gratefully and graciously.
Bad writers don’t, which is why they remain bad writers. Yes, you should reject (or fight against) changes and suggestions if they are clearly wrong, and if you can show that they’re wrong. But if your critic tells you that your logic is muddled, your facts are inapt, and your writing stinks (in so many words), chances are that your critic is right. And you’ll know that your critic is dead right if your defense (perhaps unvoiced) is “That’s just my style of writing.”
6. What if you’re an independent writer and have no one to turn to?
Be your own worst critic. If you have the time, let your first draft sit for a day or two before you return to it. Then look at it as if you’d never seen it before, as if someone else had written it. Ask yourself if it makes sense, if every key point is well supported, and if key points are missing. Look for glaring errors in syntax, grammar, and usage. (I’ll list and discuss some useful reference works in part III.) If you can’t find any problems, or find only trivial ones, you shouldn’t be your own critic — and you’re probably a terrible writer. If you make extensive revisions, you’re on the way to becoming an excellent writer.
7. How many times should you revise your work before it’s published?
That depends, of course, on the presence or absence of a deadline. The deadline may be a formal one, geared to a production schedule. Or it may be an informal but real one, driven by current events (e.g., the need to assess a new economics text while it’s in the news). But even without a deadline, two revisions of a rough draft should be enough. A piece that’s rewritten several times can lose its (possessive pronoun) edge. And unless you’re an amateur with time to spare (e.g., a blogger like me), every rewrite represents a forgone opportunity to begin a new work.
If you act on this advice you’ll become a better writer. But be patient with yourself. Improvement takes time, and perfection never arrives.
I’ll start with a look at the state of the economy before turning to the labor market.
The Bureau of Economic Analysis (BEA) issues a quarterly estimate of constant-dollar (year 2009) GDP, from 1947 to the present. BEA’s numbers yield several insights about the course of economic growth in the U.S.
I begin with this graph:
FIGURE 1
The exponential trend line indicates a constant-dollar (real) growth rate for the entire period of 0.77 percent quarterly, or 3.1 percent annually. The actual beginning-to-end annual growth rate is 3.1 percent.
The red bands parallel to the trend line delineate the 95-percent (1.96 sigma) confidence interval around the trend. GDP has been below the confidence interval since the recession of 2020, which has been succeeded by the incipient recession of 2022.
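The conversion from a quarterly to an annual growth rate is simple compounding: (1 + 0.0077)^4 - 1 ≈ 0.031, or about 3.1 percent. For readers who want to reproduce the trend and the band, here is a minimal sketch of the kind of calculation involved. The variable names and the use of NumPy are illustrative assumptions; this is not the code behind figure 1.

```python
# A minimal sketch (not the code behind figure 1) of fitting an exponential
# trend to quarterly real GDP and drawing a 1.96-sigma band around it.
# "gdp" is assumed to be a NumPy array of constant-dollar GDP, one value
# per quarter, oldest first.
import numpy as np

def exponential_trend(gdp):
    t = np.arange(len(gdp))
    log_gdp = np.log(gdp)
    slope, intercept = np.polyfit(t, log_gdp, 1)        # least-squares fit of log(GDP) on time
    trend = np.exp(intercept + slope * t)               # fitted exponential trend
    sigma = np.std(log_gdp - (intercept + slope * t))   # spread of the residuals around the trend
    lower = trend * np.exp(-1.96 * sigma)               # bottom of the 95-percent band
    upper = trend * np.exp(+1.96 * sigma)               # top of the 95-percent band
    quarterly_rate = np.exp(slope) - 1                  # e.g., roughly 0.0077
    annual_rate = (1 + quarterly_rate) ** 4 - 1         # compounds to roughly 0.031
    return trend, lower, upper, quarterly_rate, annual_rate
```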
Recessions are represented by the vertical gray bars in figure 1. Here’s my definition of a recession: two or more quarters in which real GDP (annualized) is below real GDP (annualized) for an earlier quarter.
Recessions as I define them don’t correspond exactly to recessions as defined by the National Bureau of Economic Research (NBER). NBER, for example, dates the Great Recession from December 2007 to June 2009 — 18 months in all; whereas I date it from the first quarter of 2008 through the second quarter of 2011 — 42 months in all. The longer span seems right to me, and probably to most people who bore the brunt of the Great Recession (i.e., prolonged joblessness, loss of savings, foreclosure on one’s home, bankruptcy).
My method of identifying recessions is more objective and consistent than the NBER’s method, which one economist describes as “The NBER will know it when it sees it.” Moreover, unlike the NBER, I would not presume to pinpoint the first and last months of a recession, given the volatility of GDP estimates.
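Here is a short sketch of one way to apply the rule mechanically. It reflects one reading of the definition (a quarter is “down” if real GDP is below real GDP in some earlier quarter, and a recession is a run of two or more consecutive down quarters); the function and variable names are illustrative only.

```python
# A sketch of one reading of the recession rule stated above: a quarter is
# "down" if annualized real GDP is below real GDP in some earlier quarter,
# and a recession is a run of two or more consecutive down quarters.
# "gdp" is assumed to be a list of annualized real-GDP values, one per quarter.
def find_recessions(gdp):
    down = [False] + [gdp[t] < max(gdp[:t]) for t in range(1, len(gdp))]
    recessions, start = [], None
    for t, is_down in enumerate(down):
        if is_down and start is None:
            start = t                                   # a possible recession begins
        elif not is_down and start is not None:
            if t - start >= 2:                          # keep only runs of two or more quarters
                recessions.append((start, t - 1))       # (first quarter, last quarter) indices
            start = None
    if start is not None and len(down) - start >= 2:    # a run that lasts to the end of the data
        recessions.append((start, len(down) - 1))
    return recessions
```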
The following graph illustrates that volatility, and something much worse — the downward drift of the rate of real economic growth:
FIGURE 2
It’s not a pretty picture. The dead hand of “tax, spend, and regulate” government lies heavy on the economy. (See “The Bad News about Economic Growth”.)
Here’s another ugly picture:
FIGURE 3
Rates of growth (depicted by the exponential regression lines) clearly are lower in later cycles than in earlier ones, and lowest of all in the 2009-2020 cycle (the most recent completed cycle).
In tabular form:
There is a statistically significant, negative relationship between the length of a cycle and the robustness of a recovery. But the 2009-2020 cycle (represented by the data point at 2 months and 3.0% growth) stands out as an exception:
FIGURE 4
Note: The first, and brief, post-World War II cycle is omitted.
By now, it should not surprise you to learn that the 2009-2020 cycle was the weakest of all post-war cycles (though the previous one took a dive when it ended in the Great Recession):
FIGURE 5
Which brings me to the labor market. How can it be “red hot” when the economy is obviously so weak? In a phrase, it isn’t.
The real unemployment rate is several percentage points above the nominal rate. Officially, the unemployment rate stood at 3.5 percent as of July 2022. Unofficially — but in reality — the unemployment rate was 10.8 percent.
How can I say that the real unemployment rate was 10.8 percent, even though the official rate was 3.5 percent? Easily. Just follow this trail of definitions, provided by the official purveyor of unemployment statistics, the Bureau of Labor Statistics:
Unemployed persons (Current Population Survey) Persons aged 16 years and older who had no employment during the reference week, were available for work, except for temporary illness, and had made specific efforts to find employment sometime during the 4-week period ending with the reference week. Persons who were waiting to be recalled to a job from which they had been laid off need not have been looking for work to be classified as unemployed.
Unemployment rate The unemployment rate represents the number unemployed as a percent of the labor force.
Labor force (Current Population Survey) The labor force includes all persons classified as employed or unemployed in accordance with the definitions contained in this glossary.
Labor force participation rate The labor force as a percent of the civilian noninstitutional population.
Civilian noninstitutional population (Current Population Survey) Included are persons 16 years of age and older residing in the 50 States and the District of Columbia who are not inmates of institutions (for example, penal and mental facilities, homes for the aged), and who are not on active duty in the Armed Forces.
In short, if you are 16 years of age and older, not confined to an institution or on active duty in the armed forces, but have not recently made specific efforts to find employment, you are not (officially) a member of the labor force. And if you are not (officially) a member of the labor force because you have given up looking for work, you are not (officially) unemployed — according to the BLS. Of course, you are really unemployed, but your unemployment is well disguised by the BLS’s contorted definition of unemployment.
What has happened is this: The labor-force participation rate peaked at 67.3 percent in the first four months of 2000. It then declined to 62.3 percent in 2015 before rising to 63.4 percent just before the pandemic wreaked havoc on the economy. The participation rate dropped to 60.2 percent during the COVID recession before recovering to a post-recession peak of 62.4 percent and then slipping to 62.1 percent in July 2022 (during what may prove to be another recession). The post-recession recovery still leaves the participation rate well below its (depressed but recovering) pre-recession level:
FIGURE 6
Source: See figure 7.
The decline that began in 2000 came to a halt in 2005, but resumed in late 2008. The economic slowdown in 2001 (which followed the bursting of the dot-com bubble) can account for the decline through 2005, as workers chose to withdraw from the labor force when faced with dimmer employment prospects. But what about the sharper decline that began near the end of Bush’s second term?
There we see not only the demoralizing effects of the Great Recession but also the growing allure of incentives to refrain from work, namely, disability payments, extended unemployment benefits, the relaxation of welfare rules, the aggressive distribution of food stamps, and “free” healthcare for an expanded Medicaid enrollment base and 20-somethings who live in their parents’ basements*. That’s on the supply side. On the demand side, there are the phony and even negative effects of “stimulus” spending; the chilling effects of regime uncertainty, which persisted beyond the official end of the Great Recession; and the expansion of government spending and regulation (e.g., Dodd-Frank), as discussed in Part III.
More recently, COVID caused many workers to withdraw from the labor force out of an abundance of caution, because they couldn’t work from home, and because of the resulting recession. As noted, the recovery has stalled, resulting in a low but phony unemployment rate.
I constructed the actual unemployment rate by adjusting the nominal rate for the change in the labor-force participation rate. The disparity between the actual and nominal unemployment rates is evident in this graph:
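One way to make such an adjustment, sketched below, is to treat the decline in participation from its 2000 peak as hidden unemployment: recompute the unemployment rate as if the labor force were still 67.3 percent of the civilian noninstitutional population. This is a back-of-the-envelope sketch, not necessarily the exact series of steps behind figure 7, but for July 2022 it lands close to the 10.8 percent figure cited above.

```python
# A back-of-the-envelope sketch (not necessarily the exact calculation behind
# figure 7) that treats the drop in labor-force participation since its 2000
# peak as hidden unemployment.
def adjusted_unemployment_rate(official_rate, participation_rate,
                               peak_participation=0.673):
    employed_share = participation_rate * (1 - official_rate)  # employed as a share of the population
    potential_labor_force = peak_participation                 # labor force as a share of the population at the 2000 peak
    return 1 - employed_share / potential_labor_force

# July 2022: official unemployment 3.5 percent, participation 62.1 percent.
print(adjusted_unemployment_rate(0.035, 0.621))                # roughly 0.11, near the 10.8 percent cited above
```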
FIGURE 7
Derived from Series LNS12000000, Seasonally Adjusted Employment Level; Series LNS11000000, Seasonally Adjusted Civilian Labor Force Level; Series LNS11300000, Seasonally Adjusted Civilian Labor Force Participation Rate; and Series LNS12500000, Employed, Usually Work Full Time. All are available at BLS, Labor Force Statistics from the Current Population Survey.
So, forget the Biden administration’s hoopla about the “red hot” labor market. The real unemployment rate has actually risen recently and is about where it was at the beginning of the Trump administration.
_________
* Contrary to some speculation, the labor-force participation rate is not declining because older workers are retiring earlier. The participation rate among workers 55 and older rose between 2002 and 2012. The decline is concentrated among workers under the age of 55, and especially workers in the 16-24 age bracket. (See this table at BLS.gov.) Why? My conjecture: The Great Recession caused a shakeout of marginal (low-skill) workers, many of whom simply dropped out of the labor market. And it became easier for them to drop out because, under Obamacare, many of them became eligible for Medicaid and many others enjoy prolonged coverage (until age 26) under their parents’ health plans. For more on this point, see Salim Furth’s “In the Obama Economy, a Decline in Teen Workers” (The Daily Signal, April 11, 2015), and Stephen Moore’s “Why Are So Many Employers Unable to Fill Jobs?” (The Daily Signal, April 6, 2015). On the general issue of declining participation among males aged 25-54, see Timothy Taylor’s “Why Are Men Detaching from the Labor Force?” (The Conversable Economist, January 16, 2020), and follow the links therein. See also Scott Winship’s “Declining Prime-Age Male Labor Force Participation” (The Bridge, Mercatus Center, September 26, 2017). More recently, there have been rounds of “stimmies” issued by the federal government and some State governments in response to the COVID crisis (inflicted by the governments). The “stimmies” were topped off by extended, expanded, and downright outlandish unemployment benefits (e.g., these).