Anthropic Scientist Warns: AI Could Replace Most White-Collar Jobs Within Three Years

Anthropic’s chief scientist Jared Kaplan warns rapidly advancing AI could overtake white-collar jobs, outperform students, and heighten global risks by 2030.

Artificial intelligence may transform professional work much sooner than expected, according to Jared Kaplan, chief scientist and co-founder of Anthropic. In a recent interview, Kaplan offered a stark assessment of how fast AI capabilities are accelerating and how profoundly they could reshape society before the decade ends.

Kaplan believes AI systems will be competent enough to handle “most white-collar work” within just two to three years. His warning comes at a time when leading AI labs—including Anthropic, OpenAI, Google DeepMind, and Meta—are racing toward artificial general intelligence (AGI), the kind of system that can outperform humans across a wide spectrum of tasks.

To illustrate the pace of progress, Kaplan pointed to his own family. “My six-year-old son will never be better than an AI at academic work such as writing an essay or doing a maths exam,” he said. The example underscores how quickly AI tools are surpassing human performance even in areas long considered distinctly human.

Recent research supports Kaplan’s concern. Cutting-edge models are improving at extraordinary speed, with key capabilities doubling over short intervals. Anthropic’s latest systems can already build advanced software agents, work through difficult programming tasks for extended periods, and generate sophisticated reasoning chains with almost no human oversight.

But Kaplan’s projections go beyond enhanced productivity. He believes a critical shift could arrive between 2027 and 2030: the moment AI systems begin actively contributing to the development of their own successors. This self-improvement loop, he suggests, could be both a breakthrough and a new source of uncertainty. “If you imagine you create this process where you have an AI that is smarter than you, or about as smart as you, it’s then making an AI that’s much smarter. It’s going to enlist that AI to help make an AI smarter than that. It sounds like a kind of scary process. You don’t know where you end up.”

As systems grow more capable and autonomous, Kaplan outlines two major risks facing humanity. The first is the possibility of losing control—being unable to fully understand or predict what highly advanced AIs might do, or whether they will remain aligned with human goals. “Are the AIs good for humanity? Are they going to be harmless? Do they understand people? Are they going to allow people to continue to have agency over their lives and over the world?” he asks.

The second concern is the misuse of powerful AI tools. Kaplan warns that if such systems fall into the hands of bad actors, the consequences could be serious. “You can imagine some person deciding: ‘I want this AI to just be my slave. I want it to enact my will.’ Preventing power grabs, preventing misuse of the technology, is also very important.”

As the global AI race intensifies, Kaplan’s message serves as a reminder that the biggest challenges may be not only technological but deeply human.
