An OpenAI lead posted last week that this is the "last opportunity to secure employment before the fast takeoff," later clarifying he meant roles at major AI labs, not the broader job market. A romance writer told the New York Times she publishes 200 novels a year using Claude. In the US, community college enrollment is up 3% while private four-year colleges are down 1.6%. Students are choosing what experts now call "un-college": short credentials, vocational training, skipping the degree entirely. [3][4]
These are not separate stories. They are the same story.
The labor market is splitting into two economies. One rewards people who use AI to do things that were never possible before. The other still pays people to do tasks that AI is learning to do faster and cheaper every quarter. The first economy is desperate for talent. The second is quietly shrinking.
I left a corporate career six weeks ago because I saw this split forming. I was not certain. I am still not certain. But I had spent enough years watching organizations automate around people, while the people kept optimizing the old work, to know which side I wanted to be on.
What Anthropic sees
Steve Yegge spent four months talking to nearly 40 Anthropic employees: researchers, engineers, sales, product, leadership. His conclusion: everyone there feels "sweetly but sadly transcendent." [1]
They are excited about what they are building. They are also genuinely sorry for the rest of us.
"2026 is going to be a year that just about breaks a lot of companies, and many don't see it coming. Anthropic is trying to warn everyone, and it's like yelling about an offshore earthquake to villages that haven't seen a tidal wave in a century."
Inside Anthropic, the productivity multiplier is not 10% or 2x. Yegge estimates their engineers are 100x to 1,000x as productive as traditional developers were in 2005. Not because they are smarter. Because they work differently. No specs. No plans beyond 90 days. Full transparency. Every wrong turn visible to the entire team. Claude Cowork launched publicly 10 days after someone first had the idea. [1]
"It is the death of the ego," Yegge writes. That phrase stuck with me. Most organizations I have worked in ran on the opposite: carefully managed visibility, polished deliverables, controlled information flow. That model produces predictability. It does not produce 1,000x.
Why more efficiency creates more work, not less
The natural reaction to a machine that does your job faster is fear. Garry Tan, president of Y Combinator, argues that this reaction is backwards. [2]
He points to the Jevons Paradox. When steam engines became more efficient at burning coal, coal consumption did not drop. It exploded. Efficiency made coal useful for things no one had imagined. The same is happening with intelligence.
Tan visited a university endowment whose engineers were "terrified for their jobs after seeing what Claude Code could do." His response: "Our fear of the future is directly proportional to how small our ambitions are."
That line is uncomfortable. It is also precise.
If your plan is to keep doing exactly what you are doing, a machine that does it faster is genuinely threatening. But if your ambition is bigger than your current job description, if you want to build something that was not possible when it required a team of 50, then the machine is not your replacement. It is your material.
A romance writer publishing 200 novels a year is not a cautionary tale about authors losing work. She is a one-person publishing house. The job she lost was "writer who produces two books a year." The job she built is something else entirely.
What I see in legal and compliance
I spent over a decade in legal operations, compliance, and governance. Big 4, DAX corporate, a major European retailer's international IT division. I built 12+ tools end-to-end. The pattern I see now is the same one playing out across every knowledge-work function.
Most legal teams still review contracts manually, track obligations in spreadsheets, and respond to regulatory changes by reading PDFs and updating Word documents. These are tasks. Valuable tasks, today. But they are the same category of work that AI learns to replicate every quarter.
The teams pulling ahead are doing something different. They are building contract triage systems, automated regulatory monitoring, self-service compliance portals. They are not doing the same work faster. They are building infrastructure that did not exist before.
The EU AI Act makes this tangible. Article 4 requires providers and deployers of AI systems to ensure a sufficient level of "AI literacy" among the people operating and using those systems. But the regulation was written assuming humans use AI as a tool. You prompt, it responds, you decide. Lazar Jovanovic works at Lovable full-time building production software using only AI. No coding background. The AI writes the code. He provides judgment and taste. [6]
That is not "using a tool." That is a different division of labor. Most AI literacy programs are already teaching last year's model.
What actually separates the two sides
The split is not about technical skill. I have met developers who are firmly on the worker side. They use AI to autocomplete code they would have written anyway. I have met data protection consultants with no coding background who are on the builder side. They design systems, connect tools, and create workflows that serve hundreds of people.
The difference is orientation. Workers optimize existing processes. Builders create new ones. Workers ask "how do I do this faster?" Builders ask "should this be done at all, and if so, what system handles it?"
Yegge describes a three-person startup called SageOx where every conversation, every agent action, and every mistake is recorded, versioned, and visible to the whole team. The founders told him that information older than two hours is already stale. They announce everything in real time: "I am deleting the database." "OK." [1]
That sounds chaotic. It is also how you move when everyone is, as one Amazon principal engineer put it, "slightly oversubscribed." Always more work than people. That imbalance is uncomfortable. It is also the condition that produces innovation instead of politics.
The window is narrower than it looks
Y Combinator's latest batch is building products that would have required 50-person teams two years ago. Anthropic ships features in 10 days. The endowment engineers are terrified. The romance writer publishes 200 novels.
The opportunity is not shrinking. The Jevons Paradox guarantees more demand for intelligence, not less. But the gap between the two sides widens every quarter. Crossing from worker to builder takes months of rewiring how you think about problems. The tools will only get more powerful while you are learning.
I do not have a five-step framework for making the switch. What I have is a question that has been useful for me: when I look at my work, am I building something that gets better without me, or am I the bottleneck that makes it run?
The honest answer changes depending on the day. But the direction is clear.
Sources
[1] Steve Yegge: "The Anthropic Hive Mind"
[2] Garry Tan: "Boil the Ocean"
[3] CNBC: "Trump's 'big beautiful bill' may spur the rise of 'un-college'"
[4] The New York Times: Romance writers publishing 200 novels a year using AI
