A friend texts me after reading the Citrini piece. Two years out of school, early in finance. “By the time I’m a partner, I’ll run the associate work on AI.”
Every rational actor is running the same calculation, and recent market activity states the problem plainly.
Citrini calls it a demand collapse; a negative feedback loop with no brake. The slope is changing: cognitive labor reprices toward a new floor, institutions capture the surplus, the pyramid steepens. The offset is rising: cheaper tools, diffused productivity, and autonomous improvement accrue regardless of where you sit. The two are inherently decoupled even as they dance together while AI seeps into every industry.
This is a factor price equalization scenario, the same way manufacturing labor was repriced and redistributed after China's WTO entry. This time, cognitive labor is repriced at a faster diffusion rate with no clearly protected sectors.
The Citrini scenario is a Keynesian demand deficiency problem. Income shifts from labor to capital, capital spends less of it, purchasing power drains out of the consumer economy, circular flow breaks. His fixes (compute taxes, sovereign wealth transfers) are designed to restore that flow.
The mechanism is real; my dev job right now is 90% me orchestrating agents. Extrapolated across every vertical requiring mid-level cognitive output, income distribution shifts toward capital owners even as total output rises. What Citrini calls a demand collapse is factor price equalization, the fastest and broadest in history, applied to cognitive labor.
Factor price equalization is the process by which the price of an input (cognitive labor, repetitive manual labor, etc.) converges toward a new equilibrium as a superior substitute enters the market. With this repricing, the floor of productivity moves (probably up). The socioeconomic pyramid steepens. A good historical case study is the China Shock between 2001 and 2019: Chinese import competition eliminated over 2 million US manufacturing jobs, the pyramid steepened, US consumer purchasing power increased by nearly 2 percent, and a new equilibrium was found.
The Keynesian fix assumes the old equilibrium is restorable. FPE produces a new one. You can transfer income to displaced workers, but you cannot restore the cognitive labor premium that justified their prior compensation. That price has moved permanently.
The standard rebuttal is Schumpeter: creative destruction creates industries we can't yet conceive of, and it's been right for two centuries. AI-exposed occupations grew 38 percent between 2019 and 2024 versus 65 percent for less-exposed ones, but new categories are emerging: datacenter architects, AI consultants and rollups, robotics infrastructure that didn't exist five years ago. The theoretical horizon where Schumpeter actually breaks is the sci-fi doomerslop scenario where agents live in a parallel world and economy with no human input at all. That horizon may come, but the mechanism still runs on the way there. The more immediate question is whether the displacement spiral Citrini describes even reaches that horizon before something stops it.
The Hayek rebuttal is that the cost of capital rises and kills the buildout; however, capex is shifting from labor to hard assets and raw commodities, and inference costs have dropped over 280x in the past two years. The bottlenecks currently priced by the market are energy and silicon, both of which are on deflationary trajectories. Neither rebuttal addresses how aggregate demand can stabilize while the professional middle class experiences genuine immiseration. Macro stability and individual catastrophe are compatible outcomes.
Citrini treats this slope as a rupture; "smut for crypto bros" is how a mutual described the AI doomer essays. Crypto culture and retail investing trained an entire generation to expect spiked, short-term volatility and that everything could go to zero; a compelling fiction.
The floor is rising and everything is shipping faster. Anthropic cuts Opus-level pricing by roughly 67% every generation, and flagship intelligence can handle long-tailed autonomous tasks at near-production quality. Harmonic, Cognition, Palantir, DeepMind, Anthropic, and every new agent company out of YC are closing the same window on general cognitive labor. I barely write code anymore; I found myself orchestrating agents to do work while a long-short analyst at one of the big hedge funds used OpenAI for research and sanity checks, and a law firm I spoke with uses an AI FDE on its Foundry system. This transition from doing work to directing systems will happen across every vertical that requires mid-level cognitive output. The baseline productivity expectation of intelligent work has been permanently reset upward, benefiting everyone with access to the tools, regardless of where they sit in the pyramid.
The optimist calls this democratization. That’s true and mostly irrelevant. Democratizing intelligence compresses the rent of everyone currently monetizing its scarcity: the $800/hour lawyer, the research team, the mid-level analyst. The floor rising is adversarial to whoever owns the cognitive premium. When the baseline resets upward, the threshold for what justifies human employment resets with it.
The correct analogue is Korea post 1997.
Japan is what happens when the system refuses to reprice: zombie companies kept alive by suppressed rates, real wages falling while productivity grows, demographic decline as the mechanism of stability (honestly a great bullshit-jobs case study). Brazil is what happens when the floor never gets built and concentration at the top stays steep, immobility calcified into caste. The US is poised to borrow from both in the bear scenario.
Korea did both simultaneously. A steep pyramid emerged after the IMF arrived, the won collapsed, and the chaebols were restructured. The top 30 chaebols account for over 76% of Korean GDP, the top 1 percent income share has more than doubled since 1996, educational attainment hit the highest levels in the OECD, and the aggregate floor rose. The pyramid produced genuine improvement and individual immobility simultaneously ("stuck in the permanent underclass"). Private tutoring and sponsored bootcamp spending runs higher each year, leading many students to seek education and jobs internationally to find pockets of mobility. Korea is the only major OECD country where the youth NEET rate went up. The credential inflation is total: a degree is necessary and insufficient.
We see the same shape in corporate data. Salesforce revenue is up over 43% and margins expanded 1260 bps in the past few years; Benioff cut engineering hires. Adobe, McKinsey, and even OpenAI are all simultaneously growing revenue and freezing hires. The institution captures the surplus; the person inside it is repriced.
This is where the slope lives. Labor’s share of GDP ran at 64 percent in 1974 and 56 percent today. Citrini projects 46 percent by end of decade. At $31.5 trillion in US GDP, the difference between 56 and 46 percent is roughly $3.15 trillion annually rerouting from labor to capital. The mechanism now becomes position.
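The back-of-envelope arithmetic behind that figure is simple; a minimal sketch using the GDP and labor-share values cited above:

```python
# Back-of-envelope: annual dollars rerouted from labor to capital
# if labor's share of US GDP falls from 56% to the projected 46%.
us_gdp_trillions = 31.5        # US GDP, as cited
labor_share_today = 0.56       # labor share of GDP today
labor_share_projected = 0.46   # projected end-of-decade share

rerouted = us_gdp_trillions * (labor_share_today - labor_share_projected)
print(f"~${rerouted:.2f} trillion/year shifts from labor to capital")
# ~$3.15 trillion/year shifts from labor to capital
```

Ten points of share on a $31.5 trillion economy is the entire mechanism in one multiplication.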
I am long the institutions best positioned to capture AI surplus; brand, data, distribution, regulation, and deal-making are the structure behind most of these companies' worth. Claude is a capability multiplier of existing advantage. S&P Global does not die. The analyst who writes credit reports does. Salesforce does not die. The Salesforce admin is getting replaced by an AI agent. Moat taxonomy matters here. Workflow moats are the weakest (Asana, Zapier, Monday.com). Network moats are much more durable. Regulatory and exclusivity moats might be the strongest (NRSRO status, government relationships, etc.). The interesting corollary: large institutions with failed experiments get a second look. Google's Stadia failed because the infrastructure wasn't there, not because the vision was wrong. When capability compounds every four months, optionality on shelved projects is underpriced.
The capital question is more direct: salary is the most commoditizable form of compensation and reprices the moment the market for the underlying skill reprices. Equity in institutions capturing AI surplus reprices upward as the cost of intelligence flattens. The transition from employee to principal, from directing labor to owning the capital that directs AI, is the only durable position. The distribution of who can make that transition is itself a steepening pyramid.
Every cognitive moat cited as protection against the bear scenario is already automated or on the roadmap with a visible end.
We can go on and on.
The standard response is to point to fields requiring human judgment in ambiguous or high-stakes situations. This is correct as a near-term observation and incorrect as a structural claim. The ambiguity that protects human judgment today is a function of current capability levels. The capability trajectory does not support assuming that ambiguity is permanent. It supports assuming the ambiguity window shrinks on a compressing timeline.
The one exception is accountability surface: the human node that legal and social infrastructure requires to attach liability to. When a Goldman analyst recommends a trade and it fails, the analyst owns the reputation, bonus, and employment consequences. When an AI system makes the same call and it fails, the consequence diffuses across the vendor, the deployer, the compliance team. The RL rebuttal is that consequence signals get baked into training. That closes the cognitive loop, not the liability loop. The human remains in the loop not because they add cognitive value but because the legal architecture has no other place to attach consequence, for now.
What makes this more than regulatory lag: as AI systems get more capable, the consequences of deploying them incorrectly get larger. A wrong AI medical diagnosis at scale carries more liability than a wrong human diagnosis at scale. Humans occupying accountability nodes in heavily regulated verticals who use that position to write the institutional playbook for how AI operates within those verticals are capturing something durable. This is a Worldcoin bull case: proof of personhood becomes infrastructure when you need to know which human is attached to which AI action before you can hold anyone accountable. Everything else is just runway.
The steep pyramid is not the failure state of this transition. It is the load-bearing structure through which the productivity gains get allocated and the buildout accelerates. The debate about redistribution assumes the prior wage structure is recoverable; once the floor has moved, it isn’t.
The load bearing claim is conditional. Korea built the floor before 1997 with universal healthcare and other infrastructure changes in the late 1980s. The US is entering the AI displacement shock before the floor is built. Working-age benefit spending runs at 2 percent of GDP versus over 8 percent in comparable economies. Healthcare remains employer-tied, meaning job loss creates a displacement trap with no parallel in any G7 country. The US intergenerational earnings elasticity runs at 0.47 — closer to Brazil at 0.58 than Korea at 0.20. The Great Gatsby Curve predicts that AI-driven inequality increases will reduce mobility mechanically as the Gini rises. Unlike the China Shock, cognitive labor is getting hit across every sector simultaneously with no adjacent sector to absorb from.
This is where MEV lives. Positioning at the seams before they close and at the frontier before it moves is the best way to capture value as an institution or individual now. The practical question is whether you are accumulating principal position or riding slope. Slope is beta; everyone rides the same new Opus or DeepSeek release and gets the same return. Principal position is residual: equity in surplus-capturing institutions, accountability nodes in regulated verticals, network that converts when deals happen, deep knowledge built before the category commoditizes.
Legal, finance, consulting, software, research; any workflow where mid-level intelligence was the scarce input is repricing simultaneously, with no adjacent sector to escape into. The MEV accrues to whoever understood the difference between slope and offset earliest and moved from riding the augmentation premium to building principal position before the window closed. The canary is alive; the cage is the only question.
Disclaimer: In the theme of the topic at hand, I used Claude and Gemini to help me research.