recognize the game
Why we shouldn't play the game we never chose to enter.
Recently, I've noticed that the term "late-stage capitalism" gets thrown around a lot, usually as a vague gesture at inequality or corporate greed. But I don't think that's what the term ought to represent. Late-stage capitalism is the moment when a critical mass of people become conscious that they're operating inside a system that is optimizing around them, not for them. You can see the game. You can describe the game. You just can't leave it.
This dynamic became especially pronounced in the second half of the twentieth century, when neoliberalism took the optimizer logic that had always existed in markets and turned it into explicit ideology. The Friedman Doctrine. Shareholder primacy. Even philanthropy got rationalized into utilitarian calculus. Everything that couldn't be measured got crowded out by everything that could. The result was the over-financialization and over-rationalization of nearly every domain of human life (and now we've arrived at Kalshi and Polymarket glorifying the framing of life as one big numerical game).
the optimizer always wins
Humans are, at our core, optimizers. This is baked into us by evolution. Like any other creature trying to survive and propagate, we tend toward efficiency, toward maximization, toward winning whatever game is in front of us, even if it takes a while to gradient-descend our way there.
We see the same tendency within our institutions; take, as a simple example, the expansion of the executive branch's powers. Within the last year or so, it has become increasingly obvious that most of the checks on the executive branch in the United States political system rest on social taboos, and that they are easily broken by someone who unapologetically optimizes for specific self-interested outcomes, whether through personal involvement in FBI investigations or behind-the-scenes deals with foreign countries for personal gain. Eventually, systems are broken by those who tend toward optimization, as humans are simply built to do.
But humans are not only optimizers. We also have an irrational, "right-brained" piece of ourselves (I mean this not in a scientific sense but in a more abstract one). The part that invests in art, in community, in relationships with no measurable ROI. There's a well-documented tension between these two ways of processing the world: the "left brain" is narrow, analytic, grasping; the "right brain" is broad, contextual, alive. In a healthy person or a healthy society, the "right brain" leads and the "left brain" serves. In other words, at our best, we are rational idealists with big dreams and pragmatic methods for achieving them.
The problem is that in the long run, the optimizer wins. Not because it's better in any moral sense, but because it's more legible, more fundable, more defensible, and better adapted to a world of limited resources and persistent competition.
You can put a number on efficiency. You can't put a number on beauty. And so over time, the left brain takes over.
What we're experiencing now is a civilizational peak of left-brain dominance. Everything is measured. Everything is financialized. And people can feel it. That feeling, that growing awareness that the game is structured in a way that no individual chose but everyone is stuck inside, is what late-stage capitalism means.
the coordination trap
So if everyone can feel it, why can't we fix it?
This is where it becomes a coordination problem, and coordination problems are the hardest problems in the world.
When I was exploring the AI-for-private-equity thesis at the beginning of our startup journey, one of the first things I learned is that, during a technological transformation, the vertical you absolutely want to avoid acquiring into is one where the customer is extremely cost-sensitive. In cost-sensitive industries, every efficiency gain gets competed away immediately. There is no margin to capture. There is only a race to the bottom.
Ship agencies are a great example. These are the firms that coordinate with ship crews to get them whatever they need: services for the vessel, provisions for the crew, port logistics. The ships themselves operate on razor-thin margins, which means the agencies do too. Even if every single ship agency owner wanted to invest in better conditions for their workers, build something more thoughtful, or slow down and do things right, the competitive dynamics of the industry make it nearly impossible. The next agency will just undercut you.
This isn't unique to shipping. It's the structure of every cost-sensitive market. And it illustrates the broader point: even when every individual participant might want to do the right thing, the system punishes anyone who tries to do it alone. We even have laws, antitrust and anti-collusion regulations, that explicitly prevent the kind of coordination that would solve this (unless the government decides things are bad enough to act). The system doesn't just fail to fix the problem. It actively prevents the fix.
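The undercutting dynamic described above can be sketched as a toy best-response game (a hypothetical illustration, not from the essay): each agency would prefer a healthy margin, but the cheapest rival captures the business, so each player's best move is to undercut, and the only stable outcome is everyone at zero.

```python
# Toy sketch of a race to the bottom: agencies repeatedly undercut
# the cheapest rival, and margins converge to the floor (zero).

def best_response(rivals_min_margin, floor=0.0, step=0.01):
    """Undercut the cheapest rival by one step, but never go below the floor."""
    return max(floor, rivals_min_margin - step)

def converge(margins, rounds=100):
    """Let every player repeatedly best-respond to the current cheapest rival."""
    margins = list(margins)
    for _ in range(rounds):
        for i in range(len(margins)):
            rivals = margins[:i] + margins[i + 1:]
            margins[i] = best_response(min(rivals))
    return margins

# Three agencies start with healthy margins; competition erodes them all.
print(converge([0.30, 0.25, 0.20]))  # → [0.0, 0.0, 0.0]
```

No individual agency can deviate upward without losing the business, which is the coordination trap in miniature: the all-zero outcome is stable even though every player prefers a world where nobody undercuts.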
We've had institutions and norms that acted as temporary stopgaps against this pull. Think of it like a ball rolling downhill. Cultural norms, term limits, professional ethics, civic traditions: these were bumps in the terrain that slowed the descent. But they were always temporary. They never changed the slope. And gravity, the underlying psychology of human optimization, is the more enduring force.
And, concerningly, I don't think we're going to get a full reset anytime soon. Nor can we afford one. Given the way our institutions foster dependence, the way nuclear states are structured, and the complexity of the global system we've created, there's no wiping the slate clean with a revolution that reforms our government, economy, and culture.
Over the last several decades, and especially the last few years, many of these stopgaps have eroded. The ball is rolling faster. And people are watching it roll, aware of the physics, unable to stop it.
what do we actually do
So the question becomes: now that we can see the game, how do we change it?
The AI sovereign. Some people argue that the logical endpoint is handing governance to an all-knowing AI system. If humans can't coordinate, maybe a sufficiently intelligent machine can coordinate for us.
I don't buy this. Not because I think AI won't become incredibly capable, but because this path is fundamentally a surrender of agency. It assumes that the alignment problem between an AI and 8 billion humans is easier to solve than the coordination problem among those humans themselves. I don't think that's true. And even if it were, I don't think a world where we've outsourced our collective will to a machine is one worth living in. That's not solving the problem. That's giving up on solving it.
Cultural recalibration, and why it probably isn't enough on its own. The hopeful, humanistic answer is that we embrace a culture of doing good for its own sake. Not reciprocity, not tit-for-tat, but genuine unilateral goodness. You pay your workers well even if your competitor doesn't. You build something beautiful when the spreadsheet says to build something cheap. You become deliberately irrational in small ways that compound over time.
I find this deeply compelling. There's a real argument that the "right-brain" reclamation is already underway, driven by people who are fed up with optimization and hungry for beauty and meaning. The growing backlash against slop, against the soullessness of AI-generated content, against cities that all look the same. People are starved for something real. And there's historical precedent for this kind of cultural shift: the Renaissance was, in many ways, a society-wide reassertion of the "right brain" after centuries of institutional rigidity.
But I'm honest about its fragility. You can't just get a few celebrities or politicians to start saying "we all need to be friends." You can't will people into collectivism. This path asks individuals to eat costs that the system actively punishes them for. It works when enough people do it simultaneously. It fails when they don't. And that brings us right back to the coordination problem.
Which is why I think cultural recalibration, if it's going to work at scale, probably needs a deeper intervention.
Technological modification of human psychology. We already accept artificial interventions into our psychology. Antidepressants restore a neurochemical baseline for people whose serotonin regulation has fallen out of balance. We don't consider this a violation of their humanity. If anything, we consider it a restoration of it. GLP-1 agonists are rewriting appetite. Again, largely accepted.
But what happens when we push past "restoring baseline" into "improving baseline"? What if we could make humans measurably more empathetic? More cooperative? Less susceptible to the optimization loops that trap us in races to the bottom?
Maybe the most viable version of cultural recalibration is one that's technologically assisted. Not engineering humans to be compliant, not flattening the diversity of human psychology into something manageable, but something more like distilling humanity down to its core psychological principles, the things that actually make us human, and using technology to nudge us toward those principles more consistently. What if augmented reality reminded road ragers that the driver who cut them off might be a senior heading to a doctor's appointment? What if psychedelic-assisted therapy lowered the psychological defenses that keep people locked in narrow, self-interested patterns? Going a step further, what if embryo selection began to include consideration of traits like kindness and anger regulation?
At what point is it inhumane or overly artificial to become more loving, to care more about our neighbors, to be wired more strongly for empathy? As we attempt to recover aspects of our humanity, we must diligently protect the richness of our flaws: the very psychological vulnerabilities that the optimizer exploits. If we reduced those vulnerabilities, at what point would we no longer be us?
the diagnosis matters
I don't have a clean answer to any of this, and I'm skeptical of anyone who claims to.
But I think the diagnosis itself is valuable. Late-stage capitalism isn't an economic phase. It's the psychological event of waking up inside a system that none of us chose, recognizing the game for what it is, and deciding what to do next. Defining the problem clearly is the prerequisite for solving it. Whether the solution is cultural, technological, some combination, or something none of us have thought of yet, that's for each of us to work through.
The ball is rolling downhill. Gravity hasn't changed. The question is whether we can change the slope, and what we're willing to change about ourselves to do it.
Additional readings that inspired my thoughts:
- Meditations on Moloch by Scott Alexander
- Modern Magnificenza by Packy McCormick
- A Friedman Doctrine by Milton Friedman