The Rise of Anti-Humanism

In 2009, one of Google’s self-driving cars came to an intersection with a four-way stop. It came to a halt and waited for other cars to do the same before proceeding through. Apparently, that is the rule it was taught—but of course, that is not what people do. So the robot car got completely paralyzed, blocked the intersection, and had to be rebooted. Tellingly, the Google engineer in charge said that what he had learned from this episode was that human beings need to be “less idiotic.”

Let’s think about that. If there is an ambiguous case of right-of-way, human drivers will often make eye contact. Maybe one waves the other through or indicates by the movements of the car itself a readiness to yield, or not. It’s not a stretch to say that there is a kind of body language of driving, and a range of driving dispositions. We are endowed with social intelligence, through the exercise of which people work things out among themselves, and usually manage to cooperate well enough. Tocqueville thought it was in small-bore practical activities demanding improvisation and cooperation that the habits of collective self-government were formed. And this is significant. There is something that can aptly be called the democratic personality, and it is cultivated not in civics class, but in the granular features of everyday life. But the social intelligence on display at that intersection was completely invisible to the Google guy. This, too, is significant.

The premise behind the push for driverless cars is that human beings are terrible drivers. This is one instance of a wider pattern. There is a tacit picture of the human being that guides our institutions, and a shared intellectual DNA for the governing classes. It has various elements, but the common thread is a low regard for human beings, whether on the basis of their fragility, their cognitive limitations, their latent tendency to “hate,” or their imminent obsolescence with the arrival of imagined technological possibilities. Each of these premises carries an important but partial truth, and each provides the master supposition for some project of social control.

We are already sliding toward a post-political mode of governance in which expert administration replaces democratic contest, and political sovereignty is relocated from representative bodies to a permanent bureaucracy that is largely unaccountable. Common sense is disqualified as a guide to reality, and with this disqualification the political standing of the majority is demoted as well. The new antihumanisms can only accelerate these trends: They serve as apologetics for a further concentration of wealth and power, and the further erosion of the concept of the citizen—by which I mean the wide-awake, imperfect but responsible human being on whom the ideal of self-government rests. 

That older ideal has its roots in the long arc of Western civilization. In the Christian centuries, man was conceived to be fallen, yet created in the image of God. You don’t have to be a Christian to see that this doubleness—this awareness of sin and of our orientation toward perfection—can help us to clarify the effects of our current antihumanisms, criticize their presuppositions, and look for an exit from the uncanny new forms of tyranny that are quickly developing.

The four antihumanisms, as I see it, are these: Human beings are stupid, we are obsolete, we are fragile, and we are hateful. I submit that these four premises are mutually supporting and that, together, they serve to legitimize, and usher in more fully, the post-political condition. One thing they have in common is that, if taken to heart, they attenuate the citizenly pride that is both cause and effect of self-government.


In the decades after World War II, the “rational actor” model of human behavior was the foundation of economic thinking. It treated people as agents who act to maximize their own utility, which required the further assumption that they act with a perfectly lucid grasp of where their interests lie and how they can be secured. These assumptions may seem psychologically naive, but they provided the tacit anthropology for what we might call the party of the market—what is called “liberalism” in Europe but in the Anglophone world is associated with figures such as Ronald Reagan and Margaret Thatcher.

In the 1990s, this intellectual edifice was deposed by the more psychologically informed school of behavioral economics, which teaches that our actions are largely guided by pre-reflective cognitive biases and heuristics. These offer “fast and frugal” substitutes for conscious deliberation, which is a slow and costly activity. This was a necessary correction of our view of the human person, in the direction of realism.

But something went awry in the institutionalization of these insights. In the psychological literature, one thing that stands out is that our “sub-rational” modes of coping with the world are actually pretty rational, in the Bayesian sense. That is, the biases and heuristics we rely on correspond to real regularities in the world and provide a good basis for action.

But the practical adequacy of “sub-rational” modes of coping with the world dropped out of consideration when the social engineers got ahold of what looked like a promising new tool kit for “evidence-based interventions,” as well as a fresh rationale for intervening. Biases? Those are bad. People are sub-rational? We knew it all along. Their takeaway was that people need all the help they can get in the form of external “nudges” and cognitive scaffolding if they are to do the rational thing.

In a sense they are correct. A level-headed, Burkean version of their thesis would stress that with the external scaffolding of settled usages and inherited forms, we don’t have to wake up every morning and deduce the necessity of each action from first principles, entirely on our own. It would acknowledge the rationality of tradition as a set of framing conditions for individual choice. Instead, for the nudgers, rationality is to be located neither in the individual nor in tradition, but in a separate class of social managers, acting according to a vision that is theirs alone. They aim to create a “choice architecture” that will guide us beneath the threshold of our awareness.

The nudge is a non-coercive way to alter people’s behavior without having to persuade them of anything. That is, without the inconvenience of having to engage in democratic politics. Following the publication of Nudge by Richard Thaler and Cass Sunstein in 2008, both the Obama White House and the government of David Cameron in the UK established “behavioral insight” teams. Such units are currently operating in the European Commission, the United Nations, the WHO, and, by Thaler’s reckoning, about four hundred other entities in government and the NGO world, as well as in countless private corporations. It would be hard to overstate the degree to which this approach has been institutionalized.

The innovation achieved here, at scale, is in the way government conceives of its subjects: not as citizens whose considered consent must be secured, but as particles to be steered through a science of behavior management that relies on their pre-reflective biases.

The glee and sheer repetition with which this diminished picture of the human subject (as cognitively incompetent) was trumpeted by journalists and popularizers in the 2010s indicate that it has some moral appeal, quite apart from its intellectual merits. Perhaps it is the old Enlightenment thrill at disabusing human beings of their pretensions to specialness, whether as made in the image of God or as Aristotle’s “rational animal” (not to be confused with the purely calculative “rational market actor”). A likely effect of this demotion is to attenuate the pride of the citizen, and so make us more acquiescent to the work of those whom C. S. Lewis called “the conditioners.”


Closely allied with the idea that we are stupid is the idea that human beings are essentially inferior versions of computers, and therefore the weak link in any system. To return to my opening example, human beings are said to be terrible drivers—which is why we need driverless cars. The first thing to know is that the push for driverless cars is not in response to consumer demand; it is a top-down effort. When Pew polls people about their attitudes on this, majorities express reservations about autonomous cars, and many people say they prefer to drive themselves. So this might be viewed as a case of for-profit social engineering.

Recall the Google guy who was dealing with the paralyzed robot car and concluded from the experience that human beings need to be “less idiotic.” Maybe he was a classic computer dork who has a hard time perceiving social phenomena, such as the way an intersection actually works. But he needn’t have been. He need only have been steeped in the prevailing account of how the human mind works, which is called the computational theory of mind. The origins of this lie with cybernetics in the years immediately after World War II. (See the excellent account by Jean-Pierre Dupuy, The Mechanization of the Mind.)

Mentation as computation continues to provide the intellectual foundation for the mainstream of cognitive science, despite coming in for devastating critique from the more phenomenologically oriented dissidents within that discipline—such as Hubert Dreyfus, Andy Clark, and Alva Noë—who emphasize the embodied nature of human intelligence and the fact that it is socially bootstrapped. That is, our apprehension of the world isn’t something that takes place entirely within our heads, like a brain in a vat. As Maurice Merleau-Ponty argued, even for such straightforward processes as visual perception, we rely on taken-for-granted cultural norms that cannot really be captured algorithmically. A zombie metaphor that can’t be killed, mind-as-computer anchors the popular superstition and marketing hype according to which machines are said to have “artificial intelligence,” a term of mystification that carries a tacit assertion that what a binary computer does is something very like human intelligence. And reciprocally, what the human mind does (not very well, unsurprisingly) is compute.

Bad philosophy of mind tends to be the most well-capitalized because it is the most easily operationalized, and one should not underestimate the genius of capital, no less than the state, for remaking the world so it will better fit some simplistic model and thereby make the model more true. As the Google dork said, human beings need to become less idiotic. That is, more like computers. That is, more legible to systems of control, and better adapted to the systems’ need for clean inputs.

Günther Anders spoke of “the rising cost of fitting man to the service of his tools.” Iris Murdoch said that man is the animal who makes pictures of himself and then comes to resemble the pictures. Here is the real mischief done by conceiving human intelligence in the image of the computer: It carries an impoverished idea of what thinking is, but it is one that can be made adequate enough if only we can change the world to reduce the scope for the exercise of our full intelligence as embodied agents who do things.

Five to eight years ago there was a lot of breezy talk by tech journalists and well-capitalized futurists about banning human drivers from the road, given the difficulties that arise when autonomous cars and human drivers have to interact (which have turned out to be a far greater engineering challenge than anticipated). Of course, from a business perspective, it is ideal if we become dependent on some proprietary and opaque system to do what we once did for ourselves, issuing in what Ivan Illich called “radical monopoly.” As the space for intelligent human action gets colonized by machines, our own capacity for intelligent action atrophies, leading to calls for yet more automation. The demands of skill and competence give way to a promise of safety and convenience, leading us ever further into passivity.

Safety is the sentimental face worn by each of these antihumanisms, and it does crucial work in legitimizing the post-political order. Which brings me to my third item.


How are we to understand the dramatically different responses of our society to the Spanish flu of a century ago and to Covid today? According to the NIH, the 1918 flu pandemic had a mortality rate of 8 to 10 percent for younger people, making it at least fifty times deadlier than Covid for that part of the population. Covid has overwhelmingly been fatal only to the already sick and the very old (with a mortality rate comparable to that of the flu for the rest of the population), making the measures recommended in pandemic war games before 2020 entirely appropriate: Isolate the most vulnerable. Instead, the social and economic activity of the entire population was suppressed. There is, in other words, an inverse relationship between the severity of these pandemics and the severity of measures to control them.

In 2020, a fearful public acquiesced to an extraordinary extension of expert jurisdiction over every domain of life, and a corresponding transfer of sovereignty from representative bodies to the unelected functionaries of various agencies. Notoriously, polling indicated that perception of the risks of Covid outstripped the reality by one to two orders of magnitude.

This is not surprising, as official channels consistently favored scientific interpretations (of a messy empirical landscape) that induced fear, even at the cost of omitting relevant context. As we now know since the release of the Twitter Files, the FBI, CDC, and various “disinformation” NGOs (themselves sometimes misinformed) worked closely with social media companies to censor information they knew to be true, but which would tend to induce “vaccine hesitancy” or lessen the sense of crisis.

The pandemic brought to the foreground a broader tendency in the West to govern by the invocation of emergency powers, rather than by the avowed principles of representative government. Fear and a corresponding sense of fragility play an important role here. I don’t think you need to posit a conspiracy of hidden manipulators to understand this—there is a kind of gravitational pull in this direction that is exerted by the nature of the modern state. To grasp this, it helps to glance at the origins of modern politics, where the ruling principle is most plainly put forth. There we see that liberalism is not merely a political doctrine but an anthropological project to remake man so he better fits the suppositions upon which the modern state pins its legitimacy: Man is a vulnerable creature, a potential victim in need of protection.

I am referring to the thought of Thomas Hobbes, and I am influenced here by Mark Shiffman’s account of “the role of the victimological imagination in legitimating the modern state.” First, in what sense is Hobbes a liberal? He is certainly no advocate of limited government, and the regime he imagines is basically monarchical. It is liberal in the sense that it is founded on consent. But it turns out this consent depends on a reeducation program that reaches quite deep, and is never finished. 

Hobbes offers a fable of human origins, the state of nature, according to which we are originally in a condition of acute vulnerability. Even after the rise of political society, civil war is always a threat and is the problem that his politics is meant to solve. The problem comes down to the fact that we are prone to pride, or vainglory. This is based on a false consciousness in which we place too high a value on ourselves; we then feel slighted and insulted when others fail to recognize us. Such aristocratic brittleness leads to faction and civil strife. The good news is that it can be overcome through a shift in perspective, if we (and especially the proud) come to identify with the weak rather than think ourselves strong. We are all potential victims, and this is the self-awareness that grounds political authority in consent. Out of fear, we consent to a social compact in which we all submit to Leviathan, whom Hobbes calls “King of the proud.”

Liberalism begins with the politics of emergency, then. Leviathan is supposed to end this state of emergency; that is the whole point of it. But the emergency must be renewed, over and over again, if Leviathan is to thrive. This requires renewal of the consciousness-raising program as well, cultivating the vulnerable self. This is the self that is implicit in the cult of safetyism that children are brought up in. It is also the guy you see riding his bicycle double-masked.

There is a cult-like quality to public spaces in the Bay Area where I live, with lots of people wearing masks outdoors. You have to think they know the facts by now. It may be that they are acting not out of fear for themselves or for others, but in a gesture of identification with the Vulnerable One who is currently elevated, the immunocompromised. How many of these are there, really? (The answer might matter if hygiene theater did anything to protect them, but unfortunately it does not—as we now know from the Cochrane Library’s meta-analysis of randomized controlled trials of mask efficacy.)

Note that in this Hobbesian dynamic, the perpetuation of a sense of crisis is accomplished by highlighting the vulnerability of a particular group of innocents, just as in racial victimology. Perhaps this helps us understand how, in the summer of 2020, the health emergency of Covid and the moral emergency of white supremacism seemed to merge into a single thing. Social distancing guidelines had to be adjusted to accommodate mass protests, as these, too, served to advance the general crisis. Again, you don’t need a conspiracy of hostile elites to explain this. It is sufficient to have a shared political morality that sacralizes the victim, issuing in moral demands that are categorical, even if contradictory.

Victimological dramas provide a mood of permanent moral emergency, justifying an ever-deeper penetration of society by bureaucratic authority in both the public and private sectors. (The latter includes HR departments and university administration, for example.)

In order to have victims, it helps to have victimizers. This brings us to the final item on my list.


Every author in the pagan or biblical traditions, or indeed in the modern tradition, would agree that we are ruled by our passions—not simply or irredeemably, but for the most part. We are cruel, we are selfish, we are small-souled in any number of ways. We are prone to hate, if you like. What is novel, I think, is the role that the accusation of “hate” plays in forestalling the possibility of a politics of solidarity among those who are said to be haters. The majority is demoralized and dispirited through the inculcation of shame at its hateful past, or the attribution of blood guilt that cannot be expiated. Such a population disappears, politically, through a kind of moral self-erasure, leaving the field relatively clear for top-down projects that may not be popular.

But rather than attribute a cynical intention to manipulate, which would require an oligarchy that is clear-sighted and competent, I want to offer what I hope is a more realistic psychology that tries to understand the appeal of this kind of politics. The idea of a common good has given way to a partition of citizens along the lines of a moral hierarchy. The decision-making class has discovered that it enjoys the mandate of heaven, and with this comes certain permissions, certain exemptions from democratic scruple.

The permission structure is built around grievance politics. Very simply: If the nation is fundamentally racist, sexist, and homophobic, I owe it nothing. More than that, conscience demands that I repudiate it. Hannah Arendt spelled out this logic of high-minded withdrawal from the claims of community in the essays she wrote in response to the protest movements of the 1960s. Conscience “trembles for the individual self and its integrity,” appealing over the head of the community to a higher morality. The heroic pose struck by Thoreau in Civil Disobedience is one example she gives; conscientious objectors to the draft are another. Whatever the merits of the objections offered, the pattern is one of individual conscience versus communal demands.

In The Revolt of the Elites, Christopher Lasch spells out in greater detail the role that claims of racial and sexual oppression play in securing release from allegiance to the nation—not just for those who identify as its victims, but for those with the moral sensitivity to see victimization where it may not be apparent, and who make this capacity a touchstone of their identity. It becomes a token of moral elevation by which we recognize one another and distinguish ourselves from the broader run of citizens. Both Lasch and Arendt argue that black Americans serve a crucial function for the white bourgeoisie. As the emblem and proof of America’s illegitimacy, black people anchor a politics of repudiation in which the idea of a common good has little purchase.

The moral authority of the black person, as victim, gave the bourgeoisie permission to withdraw its allegiance from the social order just as black people were gaining fuller admittance to it. Consider the images that had so impressed the nation in the 1950s and led to the passage of civil rights legislation: marchers demanding equal treatment, and being willing to go to jail as a demonstration of this allegiance to the rule of law impartially applied. The civil rights movement began as an attack on the injustice of double standards; it was a patriotic appeal to the common birthright of citizenship, as against the local sham democracy of the South. Notably, the civil rights activists of this time wore suits and ties, the costume of adult obligations and standards of comportment. But in a stunning reversal achieved by the New Left working in concert with the Black Power movement, Lasch points out, “the idea of a single standard was itself attacked as the crowning example of ‘institutional racism.’” Such standards were said to have no other purpose than keeping black people in their place. This shift was fundamental, for shared standards are what make for a democratic social order, as against the ancien régime of special privileges and exemptions—or “protected classes,” as we say now.

For the New Left, then, it was not simply capitalism that was the source of oppression; it was the claims made upon us by a common moral order (the cultural superego, if you like) that were oppressive. And oppressive not just of black people, or of workers, but of us, the college bourgeoisie. The civil rights movement of black Americans became the template for subsequent claims by women, gays, and transgender persons, each based on a further discovery of moral failing buried deep in the heart of America.

But the black experience retains a special role as the template that must be preserved. The black man is specially tuned by history to pick up the force field of oppression, which may be hard to discern in the more derivative cases that are built by analogy with his. Therefore, his condition serves a wider diagnostic and justificatory function. If it were to improve, denunciation of “society” would be awkward to maintain and, crucially, my own conscience would lose its self-certifying independence from the community. My wish to be free of the demands of society would look like mere selfishness.

The white bourgeoisie became invested in a political drama in which their own moral standing depends on black people remaining permanently aggrieved. Unless their special status as ur-victim is maintained, African Americans cannot serve as patrons for the wider project of liberation. If you question this victimization, you are questioning the rottenness of America. And if you do that, you are threatening the social order, strangely enough. 

For, as Christopher Caldwell has shown in The Age of Entitlement, the legal, bureaucratic, and moral machinery of civil rights, predicated on our hateful nature, now penetrates every aspect of American society. The doctrine of disparate impact posits that any departure from proportional representation in any field whatever, due to the application of some standard of judgment that is itself entirely race-neutral, is nonetheless presumptively due to racial discrimination unless one can prove otherwise. The premise, once again, is that common standards have no other purpose than keeping black people (or women) in their place. Notice that this doctrine expresses suspicion not only toward standards of excellence, but also toward the possibility of a common good rooted in a common reality. What it embraces is the necessity of social management, that substitute both for the frictions of political contest and for the civic friendship that may arise among free and responsible citizens, of whatever race.

I have just traced a development in which the “civilized minority” transferred its allegiance from the nation to its own circle of the elect, consisting of those with the moral sensitivity to see victimization where it may not be readily apparent. Moral conscience of this sort is supra-political, and therefore this tribe is supranational. Partiality toward one’s own countrymen is a moral failing, typical of the demos. To draw this out, let’s cast an eye to Europe, where the problem of the nation is more explicitly at issue.

The grand project of “building Europe” after World War II was a response to the truly decisive event of the twentieth century, the Holocaust. Preventing its recurrence was thought to require the dissolution of the nation state and the quarantine of democratic preferences. Because the Nazis had won the support of a national plurality in their rise to power, the concept of “liberal democracy” was no longer tenable as a conceptual unity. The demos is not reliably liberal.

The solution was to offer an idealized concept of democracy, sharply distinguished from “mere majoritarianism.”

As Pierre Manent put it, the Europeans have been trying since the war to “separate the democratic regime completely . . . from any underlying conception of what it means to be a people.” The goal has been to build a democracy without a demos and “separate their democratic virtue from all their other characteristics.” The experience of a century of wars, as well as regrets about imperialism, convinced the governing classes that “their future lay with a clean break with their whole past, and . . . henceforth belonging to this or that people should be devoid of any specific political meaning or import.” In this, a “methodical process of self-erasure” by particular peoples was required to carry out a transfer of sovereignty from merely national bodies to a supranational administration, the European Union. The famous post-war German “will to disappear” would have to be generalized to all the European peoples to bring about the post-political condition. This is accomplished if the Holocaust is taken to be revelatory of the inner truth about the European peoples, and the inexorable destination of nationalism.

In America, the most influential work of sociology in the 1950s was The Authoritarian Personality. With the impressive machinery of social science, it ranked Americans on the “F scale” (where F stands for pre-fascist) and demonstrated that Americans, too, are latent Nazis, as evidenced by, for example, the way the working-class family clings to traditional gender roles. However tendentious this work was (and it was), in fact there really was a deep well of anti-Semitism in America, the most telling index of which was perhaps the fact that Father Coughlin’s frothing broadcasts were among the most listened-to radio programs in the country in the 1930s.

But the attribution of fascist sympathies to Americans could only be so effective, given that the American conscience was not burdened with the Holocaust, nor with collaboration, as so many European nations and institutions were. Indeed, America had spent no small amount of blood and treasure defeating the Nazis.

But we had something else, and that was slavery. Slavery at mid-century was taken to have been a contradiction of American principles, resolved in blood through a great civil war. Jim Crow remained an embarrassment and an outrage to many Northerners, but because it was comfortably confined to a South that seemed to them exotically and stubbornly pre-modern, it posed no threat to their own self-image. This exculpatory view of the matter would have to change, and Americans would have to take up their own version of the society-wide German project of “coming to terms with the past” (Vergangenheitsbewältigung) if the American version of the EU project could proceed on its way: the gathering of sovereignty to a federal bureaucracy dedicated to racial equalization and insulated from electoral politics.

The precondition for the arrival of a post-political condition is the moral disqualification of the demos, just as the precondition for the arrival of the driverless future is convincing people that human beings are terrible drivers. On both fronts, the people are incompetent.

Once again, you don’t need to posit a conspiracy of shadowy elites bent on world domination to put such a dynamic in motion. The early twentieth century saw the birth of the administrative state under Woodrow Wilson, and a new political consciousness in which progressives came to regard themselves as a “civilized minority” as defined against a backward people. In the writings of Walter Lippmann and many others, the demos was regarded as an unreliable partner in the democratic project. Combine such an ambient elitism with faith in progress and confidence in the direction of History, as well as the dynamics of bureaucracies that must always expand and institutions that must reproduce themselves through personnel selection and educational formation, and you will get the kind of self-reinforcing cascade of sincere belief and class interests that can remake the world. Shaming the population permits the concentration of power, and it just seems to be in the nature of power that it wants to concentrate.

The 1619 Project might be understood as an attempt to consummate this logic, retroactively making slavery the very principle of the American regime and the animating spirit of the American people. Every crevice of American life stands revealed as needing supervision and correction. Fittingly, Joe Biden announced in the first week of his presidency that Diversity, Equity, and Inclusion would henceforth provide the master principle of the federal government. Systemic racism provides the premise for the growth of the “immense tutelary power” that Tocqueville foresaw. If war is the health of the state, racial shame is the engine of administration. It makes men less proud, more administrable.

The line separating innocent victims from guilty oppressors, or the compassionate elect from the deplorable haters, has come to bear a lot of weight in a post-Christian politics that has forgotten the universal nature of sin.

In The Gulag Archipelago, Aleksandr Solzhenitsyn wrote: “The line separating good and evil passes not through states, nor between classes, nor between political parties either—but right through every human heart.” This truth, if recovered, could have a moderating effect on projects for social control, which are so often rooted in a lack of self-awareness about this doubleness in our nature.

Such a lack of self-awareness is evident when, under the pretense of their own rationality and benevolence, some men seek to manipulate other men as beings incapable of reason. Or, deputizing themselves as racial tribunes, they treat others as incapable of civic friendship across demographic divides. Those who adopt such postures exempt themselves, of course, from what they posit about human beings in general. Special pleading is a perennial tendency of rulers when they don’t feel themselves responsible to an authority higher than themselves.

A tendency to manipulation is perhaps inevitable, given the big anthropological picture we are operating within. This picture, in turn, grows out of a more fundamental metaphysical error. I will try to tie these things together and give some tentative indications of where a more adequate anthropology might be sought.

The pride of our rulers is perhaps better called arrogance or hubris. We are all subject to this vice. We differ in the scale on which we have the opportunity to act on it.

Pride can also mean something positive, indeed, something that is indispensable to any politics that could be called free: a readiness to make a claim on one’s own behalf and to stand up for the dignity of the human animal as a creature capable of reason and generous feeling. Such pride is what C. S. Lewis called “chest” in his magnificent essay “Men Without Chests.” It is Plato’s thumos, the evaluative capacity that assigns praise and blame. It is also a man’s concern that he be valued rightly himself and not disparaged as a slave incapable of self-command. It provides the motive force for his pursuit of excellence.

But what is excellence? Thumos works in concert with eros, which has a perceptive dimension in Plato. We are erotically attracted to beauty because it carries intimations of good, of an objective order of value. These intimations give thumotic striving its proper direction; thumos without eros would be mere self-assertion.

Lewis points out that every civilization rests on a conviction about the existence of objective good. The denial of such a moral order is a truly novel development in the modern West and has a disheartening effect. The debunking of the metaphysical stature of the Good appears to have short-circuited the prideful basis of self-government, because it has made it harder to perceive the degradation of man that comes when he is treated as raw material for a kind of social cybernetics. That is, he is not treated as a creature who has an inherent worth that must be recognized.

That worth lies with his participation in something greater than himself, which he is erotically attracted to. On the Christian understanding, man is fallen yet drawn toward a perfection that is, in fact, the source of his being, in the image of which he was created. When it is in good order, thumos provides the motive force for this movement toward excellence. It is a man’s spirited readiness to overcome the distraction and self-absorption that tend to make him, well, stupid, replaceable, fragile, and hateful: the very image of the human being favored by our conditioners. It is a half-image that can be made more fully true by a politics predicated on it, and by an economy in which a few profit by bringing this degraded picture further to fruition.

We may regard the doctrine of the ­Incarnation—God becoming man—as an assertion of the dignity of man. It is an assertion that could serve to moderate the contempt of the powerful. But let’s not count on that. What seems certain is that it is an idea that can only embolden the self-respect of the citizen. If taken to heart in numbers, it may lead a people to insist on reclaiming that status for themselves.

Matthew B. Crawford is a research fellow at the Institute for Advanced Studies in Culture at the University of Virginia. He writes the Substack Archedelia. This essay was delivered as the 2023 First Things Lecture in Washington, D.C.
