Understanding the Economic Impacts of Transformative AI: A Guide for the Perplexed

Danny Buerkli, 18 June 2025

It is becoming increasingly evident that AI systems may have disruptive effects on labor markets and the wider economy. Despite this, the economic impacts of transformative AI remain oddly understudied. While there are disagreements over the pace of change and the magnitude of these effects, it is clearly worth taking these questions seriously.

Over recent months, we have seen a surge in thoughtful publications on the topic. Given the breadth of this emerging field, however, it can be difficult to understand how different analyses relate to one another.

This “guide for the perplexed” aims to highlight some of the most significant contributions, structured around a simple framework.



Mechanical birds when?

To make sense of the complex economic effects of transformative AI, it helps to break down the issue into three central questions:

1) How long will it take us to see transformative AI?
2) What will the economic effects of transformative AI be?
3) What should we do about the economic effects of transformative AI?

Additionally, two further questions will likely become increasingly important:

4) What are the politics of the economic effects of transformative AI?
5) How do governments need to change?

Let’s explore each question in detail.

How long will it take us to see transformative AI?

A key concern is how much time we all have to prepare for a future with transformative AI. Debates over timelines are highly contentious. This section highlights arguments that clarify the core disagreements between competing perspectives.

Two recent articles illustrate starkly contrasting visions:

“AI 2027” (April 2025), by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean, concludes that we may well see runaway superintelligence by the end of 2027. An impact this enormous, arriving within a very short time frame and leaving almost no room for intervention, would make AI fundamentally different from other technologies we’ve had to contend with.

“AI as Normal Technology” (April 2025), by Arvind Narayanan and Sayash Kapoor, argues the opposite case: AI systems will take a long time to develop and diffuse through the economy. AI is, in other words, a powerful but fundamentally mundane technology. All previous lessons we have learned about handling technology still apply.

The core disagreement between the two teams revolves around whether AI research itself will be automated, which would lead to rapidly self-improving AI systems.

The AI 2027 team believes this to be not only possible but in fact likely. Arvind and Sayash don’t think so.

Arvind and Sayash additionally believe that important bottlenecks will constrain AI diffusion. The AI 2027 team is not materially concerned with diffusion: in their scenario, a superintelligent system would be so capable that it would blow past any hurdles immediately.

A helpful deep dive into the self-improvement aspect is presented in “Three Types of Intelligence Explosion” (March 2025) by Tom Davidson, Rose Hadshar, and Will MacAskill.

Unfortunately, there isn’t yet a similarly detailed exploration of bottlenecks to AI diffusion. Tyler Cowen, in “Why I think AI take-off is relatively slow” (February 2025), comes closest and outlines reasons why AI take-off may take a long time.

What will the economic effects of transformative AI be?

Once transformative AI does diffuse through the economy, what will its economic impacts look like?

This question is hard to reason about because it immediately refracts into a vast set of interlinked issues: wages, power, identity, social relations, and more. As a result, many of the articles mentioned below treat the economic effects of transformative AI as one concern among several.

We can group existing work into two categories: broad-stroke analyses outlining general economic and societal shifts, and more targeted investigations addressing specific economic questions.

Let’s begin with the broad-stroke analyses:

Anton Korinek is one of the few macroeconomists to explicitly address transformative AI. In “Scenarios for the Transition to AGI” (March 2024), Anton and Donghyun Suh model potential trajectories for economic output and wages under different AGI scenarios. Later, in “Economic Policy Challenges for the Age of AI” (September 2024), Anton considers future roles for human labor and identifies production factors likely to grow in importance post-AGI.

In a somewhat similar exercise, Epoch AI has published GATE, a competing macroeconomic model. GATE predicts, for example, significantly accelerated economic growth and concludes that the “global economy can marshal enough effective compute to automate most tasks within two decades.”

In their “Intelligence Curse” essay series (April 2025), Luke Drago and Rudolf Laine draw parallels to the resource curse. They argue that powerful entities may lose incentives to invest in ordinary people, similar to how many resource-rich states today neglect their populations. Their analysis highlights early warning signals, potential shifts in power structures, the erosion of existing social contracts, and possible interventions.

“Gradual Disempowerment” (January 2025), by Jan Kulveit, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger, and David Duvenaud, takes a cautionary stance, suggesting that even in the absence of an intelligence explosion, incremental increases in AI capabilities “pose a substantial risk of eventual human disempowerment”. Their economics chapter outlines how the human labor share of GDP might gradually asymptote towards zero, with attendant disempowering consequences.
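As a deliberately crude sketch of that dynamic (the toy model and its parameters are my illustration, not the paper’s): if effective AI capital substitutes near-perfectly for human labor and compounds year after year while the labor supply stays fixed, labor’s share of income shrinks towards zero even as total output grows.

```python
# Toy model, for illustration only (not from the "Gradual Disempowerment" paper):
# AI capital and human labor treated as perfect substitutes in production.
# As AI capital compounds and labor stays fixed, labor's share of GDP
# asymptotes towards zero even while GDP itself grows rapidly.

labor = 100.0             # fixed human labor supply
labor_productivity = 1.0  # output per unit of labor (equals the wage here)
ai_capital_0 = 10.0       # initial effective AI capital (hypothetical)
growth_rate = 0.5         # hypothetical 50% annual growth in effective AI capital

for year in range(0, 31, 5):
    ai_capital = ai_capital_0 * (1 + growth_rate) ** year
    gdp = ai_capital + labor_productivity * labor  # perfect-substitutes production
    labor_share = labor_productivity * labor / gdp
    print(f"year {year:2d}: GDP = {gdp:12.0f}, labor share = {labor_share:6.2%}")
```

The exact functional form matters far less than the qualitative point: nothing dramatic needs to happen in any single year for human labor to become economically marginal.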

The Labor Market Risks chapter of the International AI Safety Report 2025 (February 2025) provides a concise, extensively referenced overview of the current evidence. As a scientific report, it remains cautious about speculative scenarios but offers a robust foundation for understanding labor market impacts.

Now let’s look at the more focused explorations:

In “What Will Remain for People to Do?” (March 2025) Daniel Susskind examines the effect of transformative AI on jobs, starting from the premise that AI could eventually perform all economically valuable tasks more productively than humans. His relatively optimistic conclusion is that comparative advantages, preferences for human-led processes, and normative considerations will preserve certain tasks for human workers.

Similarly, Noah Smith, in “Plentiful, high-paying jobs in the age of AI” (March 2024), looks closely at comparative advantage. He argues that as long as compute is finite and not literally free, the use of AI systems comes with an opportunity cost, ensuring some tasks remain economically viable for humans.
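A small numerical sketch may make this logic concrete (the tasks and numbers below are hypothetical, chosen purely for illustration): even if AI is absolutely better at every task, finite compute means that using it on low-value tasks carries an opportunity cost, so it pays to leave the tasks where AI’s relative advantage is smallest to humans.

```python
# Comparative advantage with finite compute: a hypothetical two-task example.
# The AI is absolutely better at both tasks, but 20x better at coding
# and only 2x better at copywriting.

ai_output    = {"coding": 100, "copywriting": 20}  # output per hour
human_output = {"coding": 5,   "copywriting": 10}

compute_hours = 8  # finite AI capacity
human_hours   = 8

# Option A: compute goes to coding, humans do the copywriting.
option_a = ai_output["coding"] * compute_hours + human_output["copywriting"] * human_hours

# Option B: compute goes to copywriting, humans do the coding.
option_b = ai_output["copywriting"] * compute_hours + human_output["coding"] * human_hours

print(f"AI codes, humans write copy: {option_a} units")  # 880
print(f"AI writes copy, humans code: {option_b} units")  # 200
```

Every compute hour diverted to copywriting forgoes 100 units of coding output, so even though the AI beats humans at copywriting in absolute terms, the economically rational allocation leaves that task to people.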

Finally, what do the heads of the leading AI labs say? While you may choose not to take what they write at face value, it is still instructive.

In “Machines of Loving Grace” (October 2024) Dario Amodei, the CEO of Anthropic, sketches “what a world with powerful AI might look like if everything goes right.” He argues that 20% annual GDP growth is achievable in the developing world.

Sam Altman, CEO of OpenAI, in “The Gentle Singularity” (June 2025) hedges his bets but writes that “2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.”

What should we do about the economic effects of transformative AI?

Given the potentially profound economic changes caused by transformative AI, how should societies respond? Recent discussions focus on reforming our social and economic institutions to meet these challenges.

Deric Cheng is currently editing an anthology, the “AGI Social Contract” (June 2025). In the anthology’s introductory piece, he summarizes the state of play and outlines key questions for a new social contract.

As part of this anthology, “The Missing Institution: A Global Dividend System for the Age of AI” (June 2025), by Anna Yelizarova, proposes a global dividend system that would share the proceeds from concentrated AI wealth.

This suggestion belongs to a broader set of proposals known as “benefits sharing,” which explores mechanisms for the fair distribution of AI-generated wealth.

An earlier and influential proposal along these lines is the Windfall Clause, which describes an “ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits garnered from the development of transformative AI.” In their paper from January 2020, Cullen O’Keefe, Peter Cihon, Ben Garfinkel, Carrick Flynn, Jade Leung, and Allan Dafoe outline the mechanics of the idea. While the Windfall Clause as initially conceived has somewhat fallen out of favor, it remains influential as a foundational concept.

Building on critiques of the Windfall Clause, “Predistribution Over Redistribution” (October 2024) by Saffron Huang and Sam Manning argues that we should focus on early structural interventions that prevent extreme wealth concentration before it occurs, rather than trying to redistribute wealth after the fact.

Taking a macroeconomic perspective, Era Dabla-Norris at the International Monetary Fund (IMF) published a staff discussion note titled “Broadening the Gains from Generative AI: The Role of Fiscal Policies” (June 2024). This paper outlines fiscal tools and policies that governments could employ to harness generative AI’s benefits, mitigate negative labor market impacts, and promote fair distribution.

What are the politics of the economic effects of transformative AI?

Politics, in Harold Lasswell’s famous formulation, is about “who gets what, when, and how.” Under any scenario, the impacts of transformative AI are unavoidably political. It is therefore surprising that there has been relatively little analysis of the political dimensions.

A notable exception is the recent work by Anton Leicht. In his essays “AI and Jobs: Politics without Policy” (June 2025) and “Three Fault Lines in Conservative Thinking” (May 2025), Anton offers helpful context on the policy landscape, especially in the US. These essays explore some of the structural and political challenges that will shape how the debate about transformative AI plays out.

How do governments need to change?

If transformative AI arrives, it stands to reason that governments and public administrations will need to adapt significantly, not just in terms of specific policy decisions but also in their fundamental operations and structures. While the preceding sections considered which policies might be necessary, this section asks: how should government institutions themselves change?

One of the few pieces to explicitly look at this question is Ed de Minckwitz’s “Government in the Age of Superintelligence” from June 2025. In his report for Policy Exchange, the London think tank, he looks at the impact of AGI on healthcare, education, infrastructure, defense, and politics.


To close, four reflections:

1) I have intentionally stacked the deck and sidestepped the question of whether or when we will have transformative AI. Instead, I have focused on asking what will happen once transformative AI is a reality.

2) The rate at which thoughtful analyses and proposals are coming out is already high and will only increase. Specific articles and reports highlighted above may well go stale but the overall framework will hopefully remain useful.

3) This guide is, necessarily, woefully incomplete. There is far more excellent work than can be featured in an overview like this.

4) Much more thinking and preparation is urgently needed. Given the scale and importance of transformative AI’s potential impact on jobs and economies, the quantity and depth of thinking and policy preparation ought to increase fast.


Thanks to my Windfall Trust colleagues Deric Cheng, Helen Stevenson, Anna Yelizarova, and Adrian Brown for their helpful comments.