Americans are suffering from a bad case of loneliness. The number of people in the United States living alone has gone through the studio-apartment roof. A study released by the insurance company Cigna last spring made headlines with its announcement: “Only around half of Americans say they have meaningful, daily face-to-face social interactions.” Loneliness, public-health experts tell us, is killing as many people as obesity and smoking. It’s not much comfort that Americans are not, well, alone in this. Germans are lonely, the bon vivant French are lonely, and even the Scandinavians—the happiest people in the world, according to the UN’s World Happiness Report—are lonely, too. British prime minister Theresa May recently appointed a “Minister of Loneliness.” …
Still, the loneliness thesis taps into a widespread intuition of something true and real and grave. Foundering social trust, collapsing heartland communities, an opioid epidemic, and rising numbers of “deaths of despair” suggest a profound, collective discontent. It’s worth mapping out one major cause that is simultaneously so obvious and so uncomfortable that loneliness observers tend to mention it only in passing. I’m talking, of course, about family breakdown. At this point, the consequences of family volatility are an evergreen topic when it comes to children; this remains the subject of countless papers and conferences. …
The [20th century social/demographic] transition helped shape a social ecology that would worsen some of our most vexing social problems, including growing inequality. Throughout the Western world, wealthier, more educated parents tend more often to be married before they have children, and to stay married, than do their less advantaged fellow citizens. Their children benefit not just from their parents’ financial advantages, with all the computer camps and dance lessons that a flush checking account can buy, but from the familiar routines and predictable households that seem to help the young figure out the complex world they’ll be entering. The children of lower-income, less educated parents, by contrast, are more likely to see their married parents divorce or their cohabiting parents separate, and then to have to readjust to the strangers—stepparents, boyfriends or girlfriends, step- or half-siblings—who come into their lives. Some children will be introduced to a succession of newcomers as their parents divorce or separate a second or even third time.
Why, after the transition, did the rich continue to have reasonably stable and predictable domestic lives while the working class and poor stumbled onto what family scholar Andrew Cherlin calls the “marriage-go-round”? Observers typically point to deindustrialization and the loss of stable, decent-paying low-skilled jobs for men. True enough. A jobless man, especially one without a high school diploma, is no one’s idea of a good catch. But there’s more to the marriage gap than that. While the loosening of traditional rules gave women freedom to leave violent or cruel husbands, it also changed the cultural environment for couples trying to weather less dangerous stresses and disappointments, including a pink slip. Lower-income men and women are bound to have more financial anxieties, more work accidents, and more broken-down cars and evictions, and they lack the funds for Disneyland vacations, massages, and psychotherapists that might take some of the edge off a struggling marriage. And they see few, if any, long-term married couples who could offer a successful model. With single parenthood and cohabitation both on the lifestyle menu, what they see instead is an easy out.
When so many marriages melt into thin air, lower-income kin networks, a source of job connections, child care, and family meals, attenuate as well. Your mother’s sister’s husband—your uncle by marriage—might give you a tip about a job opening at a local machine shop; an uncle separated from your aunt and living with a girlfriend with her own kids in the next town over, maybe not. Communities flush with fatherless households tend to be troubled. In his landmark study of county-level social mobility, economist Raj Chetty discovered that places thick with married-couple families created more opportunity for kids, regardless of whether they were living in a married or single-parent household; places with large numbers of single-parent homes, on the other hand, pulled kids down—including those living with married parents. It’s hard to imagine more concrete evidence of the truth of the old cliché that family is the building block of society.
W. James Antle writes on Rep. Justin Amash’s leaving the Republicans:
What came after in the form of the Tea Party brought together fiscal and social conservatives in defense of the Constitution… At its peak, this new movement helped elect two important skeptics of military interventionism, Rand Paul and Justin Amash. With fellow traveler Mike Lee and such later additions as Thomas Massie, they outnumbered more hawkish newcomers like Marco Rubio, even if they remained a minority among congressional Republicans overall.
It looked like a free market populism could take hold of the GOP. Instead populism without the modifier took over via Donald Trump and Amash is now out of the party, declaring his own independence on the Fourth of July. While Amash’s frustration with partisan politics had been growing for years, it was his break with Trump that made this move seem inevitable.
To some extent, we’re witnessing a fight between those who want conservative leaders to be good and those who want conservatism itself to be less individualistic and more oriented toward the common good. …
The federal government keeps getting bigger no matter which party holds the purse strings. There’s a case to be made that fusionism as practiced by the GOP and mainstream conservative movement shortchanged both libertarians and social conservatives.
But tax cuts and deregulation happen more frequently than any real progress on social issues, even though evangelicals and conservative Catholics supply most of the votes for Republican candidates. The most electorally viable economic conservatism is really a form of social conservatism, a secularized version of the Protestant work ethic. Yet even making tax cuts more family-friendly, whether through child tax credits or incentives for parental leave, inspires considerable pushback.
Moreover, atomistic individualism, if not real libertarianism, has played a role in social conservative setbacks on abortion and marriage, among other issues, without producing similar gains for religious liberty. This has led many traditionalists to question at a more fundamental level the concepts of personal autonomy at least partially fueling trends they dislike.
All this has occurred amid shrinking libertarian influence over Republican voters in general. A Hill/Harris poll conducted in June found Republicans resistant to cutting federal spending in all 19 categories tested. This includes not just traditional GOP priorities like law enforcement or defense, but also education, infrastructure, health care, and unemployment insurance.
Many libertarians have doubled down in the face of this resistance. It would be better to abolish the welfare state than to regulate immigration, they say, without identifying a political constituency for such plans.
This phrase could describe an incredible number of advocacy groups and lobbyists in Washington: “…they say, without identifying a political constituency for such plans.”
Happy Independence Day! I was in Washington last July 4th, when I shared the Oneida Indian Nation’s narrative of their role as America’s first ally, and John Paul the Great’s reflection on America from his 1995 visit in Baltimore. This year we’re at the National Mall for President Trump’s “Salute to America” address.
The main draftsman of the document was Thomas Jefferson, probably a Deist, but it blended the thinking of Deists, Puritans, Anglicans, and a Catholic, all of whom shared a belief in natural law and in traditional English liberty.
This day, however, did not arrive suddenly or in a vacuum.
Englishmen, after a few unsuccessful attempts, founded a permanent colony in Virginia in 1607, and in Massachusetts in 1620. For about a hundred years the inhabitants of the English colonies thought of themselves as Englishmen, Scots, Welshmen, and Irish. In the early 1700s, they all began to think of themselves also as Britons. Indeed, Georgia, the last of the colonies, was created as a British, not an English, colony.
Americans began to think of themselves as American, not British. In 1753, the French in Canada invaded what is now Ohio, leading to the French and Indian War of 1754 to 1763. Americans played a major role in the victory: George Washington won one of the major battles, and the colonies sent large militias to war; Massachusetts alone sent eight regiments and two generals. The war, however, left Americans with a bad taste in their mouths about the British Army, which refused to recognize the ranks of American generals, colonels, and majors, treating them as mere captains. The conduct of the British soldiers was a scandal to the pious American militiamen.
The British government wanted to keep its troops in America after the war but wanted the colonies to pay for them. In 1765 it passed the Stamp Act, which repudiated the long-established practice of having American taxes determined by colonial legislative bodies and replaced it with taxation by the British Parliament. The Stamp Act proved so unenforceable across much of America that it was repealed, later replaced by laws giving the East India Company a monopoly on the tea trade in America and taxing the importation of tea.
This led to a number of hostile acts on both sides. Some Bostonians threw a shipload of tea into the harbor and burned a ship. The British cancelled the Charter of Massachusetts, blockaded Massachusetts, and fired on and killed several people on the streets of Boston. The Americans convened a Continental Congress to provide some America-wide policy. At that time, it decided not to declare independence or to elect an American Parliament.
Open war broke out in 1775. By the summer of 1776 it was clear that America and Great Britain should go their separate ways. The Continental Congress was reconvened. Most of its members favored independence, but America’s leaders wanted unanimity, and with some difficulty it was achieved. A number of Americans would remain loyal to Britain for the rest of their lives; some went to Canada, while others found a way to get along with an independent America.
There was probably a pro-British majority in Georgia, but Georgia decided to send the only Georgian who was familiar with the question of independence to the Continental Congress, thereby achieving unanimity of the states. In the end three Georgians signed the Declaration.
On July 4, the Declaration of Independence, mostly the draftsmanship of Thomas Jefferson of Virginia, was adopted. Virginia, New York, and New England provided most of the main spokesmen for independence. Charles Carroll of Maryland, the most prominent Catholic layman in America, signed, probably one of the reasons that religious liberty would grow so quickly after independence.
The war would continue until 1783, when Great Britain finally decided that the cost of continuing it was too great. It would take more than another five years for us to get a Constitution and Bill of Rights. It would take another war (1812-1815) with Britain before Great Britain decided to leave us alone.
The Declaration of Independence was a revolutionary idea, but it was also a carefully written justification of American independence under both natural law and English common law. It is more than 240 years old, but it has aged well and deserves careful study not only by students but by all Americans.
I remember heading to the polls in Philadelphia on Election Day in November 2016 and being surprised by the fact that there was simply no line at the Center City polling station. I had been there in 2012 when Barack Obama beat Mitt Romney, and there was a line out of the door. If there wasn’t high turnout for Hillary Clinton in Philadelphia, I thought, she might not have a certain victory in Pennsylvania.
What I witnessed that Election Day was a case of a depressed and unmotivated Democratic voter base handing the vote to its opponent in a critical state. And that happened in every state it needed to happen in for Donald Trump to win the presidency. And so it has always been doubtful that he could keep office in the face of a highly energized opposition:
A university election model that predicted the blue wave in the House in 2018 almost to the seat is predicting a big loss by President Trump next year due to an explosion of bitter partisanship and Trump hate.
An election forecast model designed by Rachel Bitecofer, assistant director of the Wason Center for Public Policy at Christopher Newport University, predicted that Trump will lose the Electoral College 297-197, with 270 of 538 needed to win.
Three key states that helped push Trump over Hillary Rodham Clinton in 2016 despite her winning the popular vote (Pennsylvania, Michigan, and Wisconsin) will turn back to the Democrats, she said.
“Trump’s 2016 path to the White House was the political equivalent of getting dealt a Royal Flush in poker,” said Bitecofer. “It’s probably not replicable in 2020 with an agitated Democratic electorate.”
That partisanship, added to the spark in anti-Trump protests by liberals and even left-leaning independents, is likely to overwhelm the increase in GOP voters, she said.
“The country’s hyperpartisan and polarized environment has largely set the conditions of the 2020 election in stone,” Bitecofer said in a release. “The complacent electorate of 2016, who were convinced Trump would never be president, has been replaced with the terrified electorate of 2020. Under my model, that distinction is not only important, it is everything,” she added.
Her model in 2018 predicted a 42-seat House Democratic pickup, and the Democrats won 40. Most models did not predict such a big victory.
Whether you think this is a good thing or a bad thing, what remains true is that your own life, your own family, and your own community all matter a thousand times more. It’s worth staying focused on what matters most over the 18 months to come, and as much as possible mentally bracketing the noise of the campaigns.
The question of why the Bladensburg Cross should stand is inseparable from the question of whether it violates the Establishment Clause, which prohibits the federal government from establishing an official religion or from favoring one religion over others. …
Justice Alito remarked in the first section that a Christian symbol can accrue additional symbolic meanings which are not in themselves religious. Alito writes:
“The fact that the cross is undoubtedly a Christian symbol should not blind one to everything else that the Bladensburg Cross has come to represent: a symbolic resting place for ancestors who never returned home, a place for the community to gather and honor all veterans and their sacrifices for this Nation, and a historical landmark. For many, destroying or defacing the Cross would not be neutral and would not further the ideals of respect and tolerance embodied in the First Amendment.” …
Justice Gorsuch takes the judgment further, joined by Justice Thomas, in a concurring opinion. He writes that the “offended observer” theory, which the American Humanist Association based their case on in part — so deeply does the cross offend them as they drive by — “has no basis in law.” It’s not enough to be offended. There has to be injury that is “concrete and particularized,” and no one is injured by seeing a cross. …
“Although the plurality does not say it in as many words, the message of today’s decision for the lower courts must be this: whether a monument, symbol, or practice is old or new, apply Town of Greece v. Galloway, 572 U. S. 565, not Lemon, because what matters when it comes to assessing a monument, symbol, or practice is not its age but its compliance with ageless principles. Pp. 6–9.” [Emphasis not in original]
Now ageless principles can well be philosophical or theological, and they can be arrived at by reason unaided by the act of faith, or by divine revelation. But what Gorsuch does here strikes me as important because he recognizes a non-positivist standard. I don’t know exactly how Gorsuch would develop this standard, but he is right that law must not become relativistic. The Establishment Clause was made to protect religion — indeed, Christian religion — from excessive government interference. It recognized the substantive good of religion as something which is more than just “history and tradition” but as something which orients us to what is permanently true. In this sense, Gorsuch recognizes that the more modest test should not privilege “secular purpose” but respect transcendent principles.
An important case for religious liberty, and interesting to see both the “history and tradition” and “ageless principles” standards being articulated as a means for restraining the government from destroying public religious symbols.
Sen. Josh Hawley writes that American culture has become dominated by a false philosophy of liberty:
For decades now our politics and culture have been dominated by a particular philosophy of freedom. It is a philosophy of liberation from family and tradition, of escape from God and community, a philosophy of self-creation and unrestricted, unfettered free choice.
It is a philosophy that has defined our age, though it is far from new. In fact, its most influential proponent lived 1,700 years ago: a British monk who eventually settled in Rome named Pelagius. So thoroughly have his teachings informed our recent past and precipitated our present crisis that we might refer to this era as the Age of Pelagius.
But here is the irony. Though the Pelagian vision celebrates the individual, it leads to hierarchy. Though it preaches merit, it produces elitism. Though it proclaims liberty, it destroys the life that makes liberty possible. …
Pelagius was born sometime between A.D. 350 and 360 in Britain, possibly Wales. Highly educated, unusually gifted, a scholar of both Latin and Greek, he made his way to Italy and then to Rome. There he became famous for his teaching on Paul’s letters.
Pelagius held that the individual possessed a powerful capacity for achievement. In fact, Pelagius believed individuals could achieve their own salvation. It was just a matter of them living up to the perfection of which they were inherently capable. As Pelagius himself put it, “Since perfection is possible for man, it is obligatory.” The key was will and effort. If individuals worked hard enough and deployed their talents wisely enough, they could indeed be perfect.
This idea famously drew the ire of Augustine of Hippo, better known as Saint Augustine, who responded that we humans are not achievement machines. We are fragile. We are fallible. We suffer weakness and need. And we all stand in need of God’s grace.
But Pelagius was not satisfied. He took his stand on an idea of human freedom. He responded that God gave individuals free choice. And he insisted that this free choice was more powerful than any limitation Augustine identified.
Augustine said that human nature was a permanent thing, but Pelagius didn’t think so. Pelagius said that individuals could use their free choice to adopt their own purposes, to fix their own destinies—to create themselves, if you like.
That’s why a disciple of Pelagius named Julian of Eclanum said freedom of choice is that by which man is “emancipated from God.”
Now as you might expect with followers who say things like that, Pelagius was condemned as a heretic by the Council of Ephesus in 431.
But his philosophy lived on in late-20th-century America. And if you listen closely today, you can hear it almost everywhere—in our fiction and our film, in our school curricula and self-help books. …
Perhaps the most eloquent contemporary statement of Pelagian freedom appears in an opinion from the United States Supreme Court, in a passage written by former Justice Anthony Kennedy. In 1992, in a case called Casey v. Planned Parenthood of Southeastern Pennsylvania, he wrote this: “At the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life.”
It’s the Pelagian vision. Liberty is the right to choose your own meaning, define your own values, emancipate yourself from God by creating your own self. Indeed, this notion of freedom says you can emancipate yourself not just from God but from society, family, and tradition.
The Pelagian view says the individual is most free when he is most alone…
Hawley concludes by connecting the problem of American Pelagianism with the opportunity to recover not only an American sense of grace, but also one of solidarity, too.
I recently listened to Joe Rogan’s conversation with Naval Ravikant, in which Naval shared a line from Nassim Taleb:
“With my family, I’m a communist. With my close friends, I’m a socialist. At the state level of politics, I’m a Democrat. At higher levels, I’m a Republican, and at the federal levels, I’m a Libertarian.”
Nassim’s point, Naval explains, is that “the larger the group of people you have together, the less trust there is and the more cheating takes place [and] the more you gear towards capitalism, [but] the smaller the group you’re in—then by all means be a socialist.”
This is provocative and perhaps it is helpful in provoking thought, but thinking it through is tough in light of the incredible cultural/political baggage that all of these words carry into the attempt to think clearly. Better to start fresh by returning to first principles.
A way to return to first principles, and to avoid stale political blind alleys in our thought, is to look to Catholic social teaching and particularly to subsidiarity as an organizing principle. I think Naval and Nassim are grasping for the principle of subsidiarity, which the Catechism of the Catholic Church describes in this way:
“Excessive intervention by the state can threaten personal freedom and initiative. The teaching of the Church has elaborated the principle of subsidiarity, according to which ‘a community of a higher order should not interfere in the internal life of a community of a lower order, depriving the latter of its functions, but rather should support it in case of need and help to coordinate its activity with the rest of society, always with a view to the common good.’
“God … entrusts to every creature the functions it is capable of performing, according to the capacities of its own nature. This mode of government ought to be followed in social life. The way God acts in governing the world, which bears witness to such great regard for human freedom, should inspire the wisdom of those who govern human communities. They should behave as ministers of divine providence.
“Subsidiarity is opposed to all forms of collectivism. It sets limits for state intervention. It aims at harmonizing the relationships between individuals and societies. It tends toward the establishment of true international order.” (CCC 1883-1885).
Subsidiarity, however, is not mere local control. In fact, the word comes from the Latin “subsidium,” meaning aid or support. So the principle of subsidiarity is really about the duty of the higher order to provide assistance to the lower order when appropriate. One example is when the lower order cannot provide a necessary function, such as defense, or has failed to protect the rights of persons and the common good, such as civil rights. …
Subsidiarity, therefore, is not “make local and leave alone.” It is “presume local and assist when needed through appropriate means.”
Whenever you act to do some good thing that no one else could better do, that is subsidiarity.
Sohrab Ahmari has written against what he calls “David French-ism,” which I’ll describe as the tendency of conservatives to attempt to maintain social peace through accommodation with cultural forces that don’t necessarily seek accommodation so much as replacement of America’s older social order with a wholly new order—and a new order with a wholly new set of moral goods. “Though culturally conservative,” Ahmari writes, “French is a political liberal, which means that individual autonomy is his lodestar.” And the problem with the logic of individual autonomy is that it ends with an unraveling of human relationships, duties, responsibilities, and rights in pursuit of an abstracted sort of liberty that believes its fulfillment will be found in the transgression of all limits, and the dissolution of what conservatives would recognize as social order.
There’s an aspect of Ahmari’s piece that is being widely misinterpreted; many are reading his piece as if he’s deriding conservatives for being “too nice,” when what he’s really doing is pointing out that calls for civility and niceness are not effective tactics for sustaining pluralism if your opponents no longer care about accommodation. Susannah Black highlights this:
“[Ahmari] wrote that ‘Civility and decency are secondary values. They regulate compliance with an established order and orthodoxy. We should seek to use these values to enforce our order and our orthodoxy, not pretend that they could ever be neutral. To recognize that enmity is real is its own kind of moral duty.’ This has been read by some as a call to do away with civility and decency. It is not. At least, it is not as I read it. It’s rather pointing out—at least, this is what I take—that if they are in service to an inverted moral order, an un-peace, then these things are not actually civility and decency. … True civility, true decency, are not neutral tactics of conversation which we can use to avoid confrontation. If you’re using something you call ‘civility’ that way, you are not civil. You are dodging. It is not the office of love of one’s enemy to ‘get along with’ him no matter what, to fail to tell him the truth. We must love our enemies—our hosti, as well as our inimici. But the way to do that is sometimes a face off. And there’s nothing noble about shirking.”
As with most debates within conservatism, what’s unfolding is an attempt to resolve the question, “What are the things we’re seeking to conserve?”
It’s a warm, sun-lit, breezy Memorial Day in Georgetown. I took a walk earlier and am reading Gabriel García Márquez’s One Hundred Years of Solitude.
In honor of American soldiers both killed in action and departed in the course of time, here’s a bit from Laurence Binyon’s “For the Fallen,” which I first heard in Peter Jackson’s They Shall Not Grow Old earlier this year:
They shall grow not old, as we that are left grow old;
Age shall not weary them, nor the years condemn.
At the going down of the sun and in the morning,
We will remember them.
I’m in Dallas today, where I looked out my car window at one point while in traffic to see this:
It seems as if a majority of American suburbs are surrounded by the sort of strip malls and shopping centers I’ve seen while driving along the highway. So maybe it’s not entirely by chance that I came across Leo Babauta’s piece on purchasing as a response to uncertainty and insecurity:
We don’t like the feeling of uncertainty and insecurity – we try to get rid of it as soon as we can, get away from it, push it away. We have lots of habitual patterns we’ve built up over the years to deal with this uncertainty and insecurity … and buying things is one of the most common, other than procrastination.
Here’s the thing: it doesn’t actually give us any certainty or security. We buy things and we’re not really more prepared, in control, or secure. We hope we will be, and yet the feelings of uncertainty and insecurity are still there. So we have to buy some more stuff.
We’re looking for the magical answer to give us control and security, but it doesn’t exist. Life is uncertain. Always. It’s the defining feature of life. Read the quote from Pema Chodron at the top — it says it all, we have to accept the uncertainty of life.
And in fact, this is the answer to our drive to buy too much stuff — if we lean into the uncertainty, embrace it, learn to become comfortable with it, we can stop buying so much.
We can learn to live with little, sitting with the uncertainty of it all.