AI Strategy for Middle Powers – with Anton Leicht
Danny Buerkli: My guest today is Anton Leicht. Anton is a visiting scholar at the Carnegie Endowment for International Peace and writes an excellent Substack called Threading the Needle. Anton, welcome.
Anton Leicht: Thank you so much for having me. Excited to be here.
Danny: How long do you think frontier labs will continue to serve their best models via API?
Anton: An API limited to select customers outside the labs themselves — I think for quite some time. For the API to be widely available as something you can just get a key to and access from anywhere in the world, I’d be surprised if that continued much longer. You could argue it has already stopped, with Mythos not being publicly available. But more broadly, for the tier of something like Opus 4.7, maybe a year or two is my realistic estimate.
Danny: How did you get to that estimate?
Anton: Mythos is the main thing that should shorten your timelines on this. There won't necessarily be broad step changes in capabilities with every new model generation, but these models will be surprisingly good at some new subsector of their broader expertise, and it will always be somewhat risky to get that model and system out there before the respective defenses have been built. With the cyber capabilities we’re now seeing around Mythos, I would hope that doesn’t take too long to address, and the companies involved seem fairly optimistic about making meaningful progress in fixing the exploits that can be found. I’m less sure about bio vulnerabilities — that’s more unclear.
We’ll keep seeing these dynamics: a lab discovers its model can do certain things, so they share it semi-internally first, try to plug the gaps, and then deploy — at which point it’s no longer the best model available. Because we’re already seeing progress towards this pattern, I think it’s realistic for this to happen soon, though not immediately, because there are commercial incentives to release these models as they come out, especially heading into IPO season.
That’s argument one — responsible companies that don’t want the bad PR and genuinely don’t want the harm. Argument two is US government intervention. Unlimited international API access starts to look like a security vulnerability to the US government, either because it widely disseminates capabilities the government would rather not share with adversaries, or because of ongoing distillation concerns. To the extent that China continues — as people report — to keep the gap closed at six to nine months via an approach that partly hinges on distillation, at some point this will be very frustrating for both the US government and US developers. They’re spending all this money pushing the frontier, and China can in theory stay close behind just through distillation. I don’t see a stable situation in which that continues to be allowed. Insofar as the architecture continues to depend on models that can be distilled, I would expect open API access to be restricted in some way to get ahead of that structurally.
Danny: Continuing in the same vein — what probability would you put on a frontier lab being nationalized in the next twelve months?
Anton: I’ll give a caveat answer: I don’t know exactly what being nationalized means in this context. If you mean the government actually taking ownership of the data centers and the weights — this becoming government property in some formal sense — I think that’s very unlikely, especially under the current administration, and I would put it below 5%. If you think about softer nationalization — increasing government involvement in internal governance, more decisions ultimately shaped through procurement policy, the government taking a greater interest in what capabilities get deployed, a growing interlinkage between frontier developers and the US national security apparatus — I would put it at about 50% that the US government has, in effect, substantial say over development and deployment decisions at frontier labs in twelve months’ time. For that to take any formal shape, I would be surprised if it happened before 2028.
Danny: Taken together, what would that imply for countries that don’t have a frontier lab within their borders — which is the vast majority of the world outside the US and arguably China?
Anton: They would have to start thinking about how to ensure their access to frontier systems in the medium to long run. I don’t think this should be treated as a hypothetical question. There’s an ongoing discussion in these countries, and most of it either doesn’t take that prospect seriously at all or assumes it’s a concern for five to ten years out. I don’t think that’s the realistic timeframe. I think access limitations — both from the government intervention we just discussed and from shortages in inference compute that seem very likely in a year or two — make this a question for the next year or two.
So first, these countries have to start thinking about how to ensure access. Second, when thinking specifically about how to ensure it, there are two bad attractors. The first is: the US will definitely constrain access, they’re going to cut us off from the frontier, and we have no chance of staying at the frontier if we rely on US access. I think that’s not entirely true — there’s still substantial commercial incentive to share these models, and many people in the administration remain very interested in reaching a big global market and ensuring the American AI stack runs everywhere.
But the opposite view, which is prevalent in many middle powers, is also mistaken: that this is basically like a software license — prices will be pinched, it’ll get annoying, but they have an interest in selling and we have an interest in buying, so we’ll reach a contract and maybe overpay a little. That’s wrong because of the security reasons we discussed, and also because if you have limited inference compute available, this isn’t like a software license where marginal cost is extremely low. You’re actively competing for capacity that US deployers and the US government would rather have for themselves. The takeaway is that you need to find some way to ensure access to frontier models, because you’re not going to build your own and you’re not going to get them anywhere else. That feeds into a whole discussion about contract design and leverage, but I’ll leave it there for now.
Danny: You very helpfully, in one of your pieces, outlined a set of beliefs widely held in Europe — and I can confirm I too have encountered these. They include, first, the idea that capabilities at the frontier have already plateaued or will soon do so, and that, given open-source models are catching up, there’s nothing to worry about. Second, the idea that the vast majority of economic value will accrue to those who diffuse models through the economy rather than those who build or provide them. And third, the idea that domestic good-enough AI can match frontier models for essentially all use cases that matter. Why are these wrong?
Anton: Let’s take them one by one. On capabilities plateauing — I think we simply don’t see technical evidence of that happening. We’re seeing continuously more impressive capabilities out of these models. Most descriptions of a plateauing trend are narrowly focused on specific paradigms or specific benchmarks. The benchmarks have saturation problems. The paradigm focus has the structural problem that many people in Europe are over-indexing on the extent to which progress in model size and success in larger pretraining runs translates to capabilities. There was a point, not so long ago, where this seemed reasonable — compute returns were getting worse, GPT-4.5 didn’t appear to have been a particularly successful training run. But at least since reasoning models, about fifteen months ago, that position has no longer been tenable. The gains in capability aren’t about pushing the pretraining paradigm further specifically — they’re about finding new ways to use compute effectively. We have inference scaling, RL environments we can build and scale up, and ways to use these models to build better models. We’re seeing the results in practical capabilities, economic impacts, and the cybersecurity implications that everyone on the scaling trend has been warning about. The trend seems robust.
Taking these out of order — on the third point, whether you can catch up with good-enough models: there are two problems. First, it’s entirely unclear whether there is such a thing as a stable fast follower that stays six months behind the frontier at all times. You have compounding effects: the winners get all the revenue, all the compute, then use their lead to enter a recursive loop of making models even better. By all reports from the labs, the models are already very good at making the next generation of models better, and if you don’t have that kind of model in-house, you’re going to struggle to catch up.
Second, people underrate how much the economics of standing up a fast follower have changed. Even a year ago, we were still in a phase where data centers and talent teams were expensive but not completely out of reach for a government to fund. You could build a 100K GPU data center and run a training run on it — that was payable out of government pocket. But now that we’re seeing revenue and valuation flywheel effects from frontier developers’ capabilities, they’re building gigawatt-scale data centers. These infrastructure projects only make sense with huge amounts of private capital, which requires revenue, growth projections, and a plausible story for capturing a big part of the market. While you might have been able to prop up a fast follower a year ago because it wasn’t that expensive, I’m not sure that holds in the world of big infrastructure projects that require commensurate big revenue streams.
On the diffusion question and where value accrues — that’s more speculative. There are good arguments for why you might capture a lot of value by just using models well. But the question is what kinds of models you need. The argument from the previous point implies it’s not going to be using the second-best models, because they’ll be far behind the actual frontier. Your competitors will be using better AIs to supercharge their economic activity, and in many domains that matters enormously for how well you perform — you just get outcompeted because your models are worse. The diffusion story might be true, and the jury is out on where value gets captured, but we can say with pretty high certainty that the value won’t be captured by anyone who’s particularly good at diffusing a very bad model.
So it still gets you back to the question of how you structure access to frontier systems. From there, maybe the diffusion strategy is viable for middle powers — import these models on good terms, bet on the commodification trend in the US, hope you get them fairly cheap, and use them well. But the first step has to be getting these models, and I don’t think most countries are on track for doing that.
Danny: On the commodification point — that’s a central sticking point in the story. Why should we not believe that frontier models will trend towards commodification?
Anton: Commodification among frontier models isn’t entirely implausible. But commodification doesn’t necessarily imply abundance. Just because margins compress and developers don’t capture a lot of the value doesn’t mean you then freely get to import the commodity — the commodity can still be very scarce. And the risk of that seems pretty high if you look at projections for inference demand versus inference supply, at the build-outs, and at the strategic constraints the US might place on these capabilities.
You might end up in a world where on the marginal token AI developers earn very little, and yet you still can’t get the commodity from elsewhere because you don’t have good contracts. Even if they operate on very low margins, they could still be huge firms with huge revenues, and it could still be the case that you simply don’t get the commodity without good arrangements. Betting on commodification doesn’t necessarily give you strategic upside on the question of frontier access. It does mean the pricing won’t be prohibitive — you won’t be spending your entire economic output on buying tokens. But in the short term, pricing doesn’t seem like the main barrier to access.
Danny: There seem to be a couple of risky assumptions that get made quite liberally. One is commodification, which may turn out to be true but the jury is very much out. The second is the trend towards smaller, specialized models — the story that cheaper-to-train models that are cheaper to catch up to will win the day. That’s possible, but it feels like a very risky proposition to stake your entire strategy on.
Anton: That’s the main point worth making in the middle powers discussion. It’s prudent for many middle powers to plan around different scenarios. It’s also not necessarily a good idea for every economy in the world to be, as Dimbos put it, a leveraged bet on AGI being true. That seems like a very risky synchronization of risks and exposure. Even if you think AI is very real and scaling is very real, you can still get financial crashes, policy decisions that crash the market — some of which are on the table right now. Everyone being overexposed to that specific risk doesn’t seem great.
To an extent, it’s worth approaching this as scenario planning. But the situation in which AI becomes really big and access becomes essential — that situation requires the most specific AI policy. If the other scenarios are true, it’s a continuation of normal economic policy: fix your energy, fix your permitting, improve growth rates, make your services sector good at adopting new technologies. Those aren’t very specific AI policy recommendations, and they’re what middle powers have been pursuing anyway.
The point where you need very specific, very decisive AI policy is if the big story is true. So even in the scenario-planning mindset, you should weight according to how much action each scenario requires. In the scenario where this is a really big deal, you have to act decisively and fast. You can’t afford to hedge your bets too much, because hedging still means you need to go big just for that eventuality. Otherwise you fully lose out in that scenario.
Danny: So the point you’re making — and I think it’s one that should be heeded — is: how do you maintain access to frontier capabilities given everything we’ve discussed? One meme that comes up frequently is open source, often not particularly well specified. But if we take it seriously — why shouldn’t a middle power bet on open source? You could argue it’s a relatively cheap insurance policy. We may end up in a version of the future where it works, and we’ll have invested maybe a couple billion, which in the grand scheme of things may well be positive expected value. Why would that not be true?
Anton: The question is what you mean by betting on open source. Does it mean developing open-source models yourself and sharing them across the middle-power stack? Or betting on someone else open-sourcing models and building on them?
Danny: Either. It could be investing money — this is what the EU might do — or incentivizing or legislating for the use of open-source models to encourage that market to develop. I see the strategic logic: in pure theory, it would be very nice to encourage developers to compete on open-source models to avoid lock-in effects.
Anton: There’s a good version of this story. As a middle power, you want to be able to use open-source capabilities when they’re around, incentivize their use if you can get them, and build software around the idea of open-source models so you can easily switch between open-source and closed-source. You want proprietary domestic scaffolding that helps you use different models from different APIs as well as different open-source models. You want sovereign computing infrastructure, or at least deals with inference providers near you, so you can deploy open-source models on your own terms and maintain oversight. That all works well.
This is a hedging and switchability layer: it reduces the cost of switching between models, makes it easier to negotiate with closed-source developers against the open-source alternatives, and makes it easier to signal to open-source developers that you’re interested. If it turns out that building sufficiently capable open-source models is feasible and your ecosystem goes through with it, you can use them. If not, the scaffolding layer still helps you negotiate with closed-source providers. I think that’s a good strategy — the software-based part of the resilience stack.
What may not work is ensuring that open-source capabilities actually exist near the frontier. Can you develop these capabilities yourself and deploy them? Can your national fast-follower champion open-source its model? Sure, but that brings us back to the earlier discussion: you need a lot of revenue, a lot of infrastructure investment, and even then it’s very unclear whether you can continuously stay six to nine months behind the curve. That’s equally or even more true for open-source providers that don’t get to generate the same revenue around the models they’re developing.
Then the question becomes: is it realistic that you’ll get open-source models out of the US or China? On China, there are good theories for why they might continue open-sourcing and good theories for why they might not. But one thing I would warn against is thinking that importing open-source models makes you less strategically dependent. Paradoxically, being reliant on an open-source provider can make you more strategically dependent, because you don’t have a mutual dependency in the same way you do with a closed-source provider. If I buy big access to an Anthropic or OpenAI model as a government or private consortium in a middle power, and they build a large data center with a line of credit tied up in this contract on favorable terms, they have a strong incentive to keep the contract and lobby the US government to continue servicing it, because they want the revenue and they’re wary of losing a data center physically located in that country.
If instead I build a data center myself, with my own money, put NVIDIA GPUs in the box, and just take weights from China, I have no such leverage over China’s future decisions to continue open-sourcing. They could stop tomorrow. They could threaten to stop as a negotiating tactic. Sure, they’d lose market share and reputation, but that’s still much less hard leverage compared to what you get with a closed-source contract.
The US open-source question is basically the same: it’s unclear what the incentive is for open-sourcing these capabilities, especially if they have substantial security implications or allow others to copy US success at very low cost. Combined with the revenue point, I don’t think you can trust the idea of open-source models being around, and I don’t think you can find a way to guarantee them. All of that means you should prepare for the option of open-source models being around — it probably helps to have an architecture that’s agnostic about how the open-source situation plays out. But it’s not robust enough to make your bet on. You need to accommodate it within a framework that also gives room to importing frontier capabilities in other ways.
Danny: If I am a middle power, what then are my strategic options? The UK seems to have gone clearly with some kind of bandwagoning with the US. What other options are out there?
Anton: You can do the bandwagoning thing — with the US or with China — and just seek close involvement in their ecosystem, becoming an extended part of their domestic tech ecosystem. That’s option one. Option two is playing both sides. We’ve seen this in other domains of tech competition more than in AI, but we’re seeing hints of it. Southeast Asia and India are usually the leading examples — trying to attract investment from both, having America and China compete for market share in your country, and opening the market to both. The question is to what extent the competition remains purely economic versus how much national security implications play a bigger role. The more national-security-forward the paradigm becomes, the less enthusiastic both the US and China — but the US specifically — will be about allowing this playing-both-sides approach.
Danny: That would be the Huawei scenario, essentially — the US coming in and saying you will not use this provider.
Anton: Exactly. And I think they’ll have an easier time doing it this time. One structural issue with the Huawei situation was that there wasn’t a US-based competitor to a lot of the telecom equipment, which was what the first Trump administration struggled with. You couldn’t actually prop up your champion or do the big export program. You had to do these tricky things with Ericsson that didn’t work particularly well. Right now, the US controls much of the supply chain and the leading champions are all American-based, so it’s going to be a lot easier for the US to push on this.
We’ll probably see implicit or explicit conditions on the extent to which middle powers that want to use the American stack can also use the Chinese stack. This is a very effective way to get good conditions in a market with abundant supply of second-best models and chips — just get broad access on cheap terms. But I’m not sure it ever gives you the leverage to get trusted, exclusive access to the high-security, frontier capabilities. To the extent that you think that’s important, the downside of the playing-both-sides strategy is that no frontier provider genuinely trusts you and is willing to give you that capacity.
The third option is taking the sovereignty sentiment seriously and trying to build your own. I don’t think this is particularly realistic, but under two assumptions you might think it works. First, if the second-best thing is enough: it’s plausible and feasible for a middle power or coalition of middle powers to pour in a lot of money, effort, and protectionist legislation to prop up a domestic champion that’s six to nine months behind the curve. It’ll get really expensive and get outcompeted in different domains, but it’s in principle possible. If France were to really double down on doing this with Mistral, I could see a world where that kind of works.
Second, can you run a maximalist sovereignty play that actually reaches the frontier? I don’t think that works. In practical terms, you need the data centers, the revenue flywheels to fund infrastructure build-outs, a way to actually get the US to give you the compute, and leverage to ensure that. There are many parts to building a competitive frontier model, and many more to scaling it into a strategically relevant actor. Politically, it’s even harder. The US will try to pick off allies one by one and offer favorable terms to draw them back into the US stack. The UK obviously comes to mind, but there are also East Asian and European US allies that would take this deal right now.
Domestically, AI models cost a lot of money and a lot of energy, and both of those trade off against very important domestic goals — not to speak of using pension funds and sovereign wealth funds, which are extremely politically charged. There’s a lot of political incentive for oppositions in these countries to derail this. I don’t see how you get a stable level of political support for this kind of mega project that trades off against so many politically salient goods.
What I think you can do is latently maintain some capacity that would help you do this in the future. Right now, the politics don’t work because it’s not serious enough — publics aren’t close to making this kind of sacrifice, and frankly neither are many governments. But if you structure your current approach — whether bandwagoning or playing both sides — in a way that still builds capabilities that could be refocused towards a sovereignty project later, that gives you strategic optionality. Where do you concentrate your compute? Do you concentrate in clusters big enough that they could in theory be repurposed for training? How do you keep talent in your country? What contractual agreements do you have about what happens with data centers after the first few years of privileged use by whoever built them?
You can make many arrangements for latent capability across the middle powers, and then you have the future option of saying: now we’re taking this seriously, and we go all in. That’s worth doing not just because it might be the right thing, but because the option of being able to do it improves your negotiating position. If you have some fallback to losing frontier access from elsewhere, it’s much harder for the other side to engage in extortion.
Danny: I want to hook into one specific point there — can you say more about why onshore compute matters? I think it’s not necessarily evident why you’d want to insist on having data centers inside your physical borders.
Anton: It depends on how this onshore compute comes to your shores. If a hyperscaler just builds a data center with no special contract or provisions, the security benefits are comparatively marginal. What you still get is the latent option of intervening: you can throttle energy to the data center, in extreme scenarios you can potentially expropriate, and you have limited ability to tax the flow of tokens and revenues. You get some diversification of political risk across countries. But in the direct comparison of an unconstrained hyperscaler data center on your soil versus a data center you yourself own, the trade-off is clear — you’d rather own it.
But two things make this more attractive. First, realistically, you often don’t have that choice. Many domestic neo-cloud-based compute build-outs aren’t working as well as hoped — there isn’t enough demand to pull from them, and hyperscalers know how to build these things much better. Often the trade-off isn’t “our chips or their chips” but “data center there or no data center at all.” In that trade-off, the marginal benefits help.
Second, you can structure the deals around hyperscale expansions very differently. You can structure them as joint ventures, using them for implicit tech transfer into your ability to build data centers and scale compute infrastructure, while also giving yourself a stake in the growth and revenue generated. Beyond joint ventures, you can tie this into broader government-to-government or government-to-business deals, like the tech prosperity deals we’ve seen. In theory, you make a big deal with a US company or the US government: they build a data center, and part of the agreement says that data center will be used to exclusively service a particular frontier model for specific parts of your industry and government.
If the compute is physically in your country, you can enforce that contract. If they’re found in breach — if they stop providing the frontier model they guaranteed — the data center stops. You cut the energy, stop the maintenance, the compute reverts to you. They’re still on the hook for the line of credit. That’s a really bad situation for them, which gives you clear, short-path leverage to ensure the other party sticks to the deal.
Obviously, these agreements don’t exist in a vacuum — we see with the UK-US tech deal how other policy considerations influence deployment — so you don’t get out of the broader issue of asymmetric leverage just through having compute located onshore. But at a level of fairly low salience, it still helps to have an object-level agreement as a basis for the broader leverage fight. It’s much harder to push the US into continuing access if you have no contractual basis. Having a piece of paper that says these guys promised to give us their best model in perpetuity — even in today’s world, that’s a helpful starting point for thinking about leverage around semiconductors, tariffs, and everything else.
Danny: It feels like an every-little-helps logic. The other obvious thing, which you just touched on, is being embedded in the supply chain — ASML in the Netherlands being the canonical example. But how much leverage does that really give the Dutch government? In a tit-for-tat game, you’re tapped out after the first move.
Anton: Much less leverage than you’d think. There are two categories of leverage that directly relate to the AI supply chain. The first is upstream chokepoints — everything that feeds into AI production in the US, from chips to model development to deployment: semiconductor manufacturing equipment including ASML, HBM production, chip production, raw materials, possibly even data and talent. Everything the US needs to build these models is an upstream point of leverage.
Downstream, you have everything needed to turn tokens into real-world value: advanced manufacturing, robotics, and all the things needed for that proverbial country of geniuses in the data center to make a tangible impact. These are also bottlenecks because the US realistically needs them to turn its advantage in AI development into broader economic advantage and strategically relevant production.
How much leverage does control of one of these chokepoints really give you? On the surface, it helps — if you stop giving us access, we stop giving you access. But in that basic tit-for-tat, the US wins because the US has much more leverage and many other things you rely on. We’ve seen reports of intelligence provided to Ukraine being brought into trade negotiations with the EU, leading to the EU accepting fairly unfavorable trade conditions. In the current geopolitical situation, the US is pretty far ahead.
For the simple naive version of leverage to work, Europeans and middle powers would need to reach a very different point in terms of economic and defense dependencies. But I think there are narrower ways to get this right, and they have more to do with how quickly your squeezing the bottleneck bites versus how quickly their squeezing bites. That’s the underrated dimension of what bottlenecks matter.
A lot of the discussion is about how important a bottleneck is for the AI supply chain — if we turn off ASML, eventually there are no chips. That’s true, but it takes a long time from no longer producing ASML machines to there being no Opus 4.8. If the US stops your API access to Opus 4.7, you can tell tomorrow. This timing difference, more than just the importance of the leverage points, cuts against the supply-chain-focused view of leverage.
That said, you can still do a lot. There are short-term parts of supply chain elements that are easier to leverage — ASML doesn’t only make machines, it also does maintenance, which is easier to restrict and less obviously subject to US legislation. You can build legal scaffolding and frameworks for how countries could quickly control exports and the diffusion of their bottleneck capabilities.
But the real answer is coordination. In a single-country-to-US exchange or even an EU-US exchange, the US substantially out-leverages you. What you can do is have a middle-power-wide coordination: whenever you make a deal with the US, one of the breach-of-contract stipulations is that the middle-power alliance stops deliveries of certain parts of the AI supply chain. If this applies across a broad coalition with different bottlenecks at different speeds, you diversify the portfolio, collectivize the risk, and can use models from one place to soften the blow to a country that gets access cut off, while that country stops its bottleneck deliveries. The more you distribute these effects across a broader collection of middle powers, the easier it is to find leverage that works robustly across different timescales and that is less vulnerable to US counter-leverage.
That said, I think even with all the middle powers together, this kind of leverage is probably limited in the current situation to serving as a backstop to a specific contract. You can’t just go to the US and say: we’re shutting off all your access unless you do what we want. But if the US says it’s offering certain terms in good faith, you can say: great, we’ll just insure it with this middle-power alliance chokepoint-based contractual guarantee. Only if you’re found in breach do we pull the trigger. That’s a way to keep the US to its word, and I think it probably works fairly well.
There’s also an entirely different part of the supply chain conversation, which is about economic effects. ASML can potentially have substantial growth effects if you do real work on widening that bottleneck — lots of analysts say ASML isn’t ramping up capacity quickly enough based on what you’d expect if AI becomes really big. If these are bottlenecks to how fast AI progress goes, you’d expect them to capture a lot of revenue simply because they’re scarce and important. Focusing on that not as hard leverage against the US but as a way to get a slice of the AI pie and generate revenue — that makes a lot of sense independently of the strategic relevance.
All in all, it works to ensure a narrow contract, it works to generate an economic share, it works to force close alignment with the US. Probably doesn’t work as a nuclear option to get the US to do whatever you want, but it does something — and it’s probably worth building out these bottlenecks even if you don’t think they’re the big nuclear option.
Danny: It feels like there’s a crawl-walk-run logic: you can get very sophisticated with strategic scenarios and do the clever stuff that’s contingent on many assumptions — and you have to hope it doesn’t get unpicked by someone who doesn’t like what you’re building. On the other hand, there’s some fairly difficult but conceptually straightforward first-order stuff, like what you’ve just described, that probably should come first before anyone attempts the more sophisticated plays.
Anton: That sounds right. But it’s still nice to keep in mind — and also, in many ways, we’ve gotten lucky with where our paradigm is. There’s this huge and complicated supply chain that by default cuts many countries into the leverage over where this technology goes, and that supply chain happens to be concentrated largely in liberal democracies that we might think would be responsible stewards of what happens with this technology. We don’t have, as in basically any other area of emerging technology, a China that is highly competitive on these fronts. They’re lagging far behind on semiconductor production, still a decent amount behind on building the models, and obviously behind upstream in semiconductor manufacturing equipment. One of the few highly sophisticated manufacturing supply chains that is still clearly dominated by liberal democracies also turns out to be perhaps the most important in the world right now.
Danny: Reasons to be cheerful, as the Brits would say. Given the absolutely immense pressure on those chokepoints — the size of the prize if you solve it is very large, and capitalism is quite good at figuring these things out given the right incentives — why should we not think the Chinese supply chain will catch up?
Anton: They will. A lot of getting this right is about a window that is closing. There are different projections for how quickly Chinese semiconductor manufacturing capacity grows, and some of that depends on how stringently export controls are applied. The US is doing work to nudge — or coerce — its allies to stop exporting these capacities to China. Realistically, it’s also in these allies’ interests not to do that, though you can play around with it as a negotiating token. But you can’t push it too far, because at the end of the day, the Dutch interest is also not for China to develop these capacities quickly.
They will catch up. They will have a functioning semiconductor manufacturing industry. They’ll build decent AI models on top of it. They’ll fill data centers with chips that run decently well. But a window of a few years until that’s the case is a lot, especially if you think we’re in a recursively self-improving paradigm that can quickly spill out across many other domains.
The question is what we do with that lead. On the diplomatic and geopolitical side, it means getting the world running on the American AI stack while it is in the lead — isolating China and Chinese expansion interests around its tech stack, generating all the surplus value from AI diffusion across the world by exporting comparatively abundant American compute and American AI models, and capturing the entire market. Even if Chinese semiconductor manufacturing catches up, there will be lock-in effects and not much market demand left. I wrote about this in a paper — it’s called “The Closing Window to Win.”
The other part is being able to put good models to actual geopolitical, practical, and industrial effect. It means not just having the best version of Opus in your data center but deploying it for widespread economic gains — specifically, in the China competition, for industrial production. How do we use this to get good at advanced manufacturing, advanced robotics, pharmaceuticals, weapons? For that, you need not only the American reindustrialization project people talk about but also the notion of allied scale: many of these industries are in much better positions in US allies — scaling biotech production in India, robotics in South Korea and Japan, advanced manufacturing in parts of Western Europe. The image is always an American-built data center sitting next to super-sophisticated manufacturing capacity somewhere in an ally country, feeding data into the model, the model improving the workflows, and you get this accelerated, self-improving effect. If you do that, you’ll probably have a decisive lead by the time China has caught up.
Danny: All of this is increasing in political salience, certainly in the US. We’re seeing signs of that looking toward the midterms, not to speak of presidential primaries and elections. It’s unclear what the relevant cleavages will be. What will the political landscape of arguments look like come October, November?
Anton: For the midterms, AI is going to be a factor but not a particularly big primary driver of salience. I think we’ll see the most aggressive politicization by the presidential primaries, once we have the polling, the midterm results, and the first wave of political spending on AI. That’s the point where many policymakers will be actively looking to build a profile on this issue. The policymakers talking about it now are mostly trying to build a profile, as opposed to responding to what they think is public sentiment.
As for the political attractors and cleavages — on both sides there’s a very strong incentive to lean into a populist-flavored anti-AI message. It maps neatly onto many existing grievances: general elite skepticism, anti-big-tech sentiment that’s strong on both sides of the horseshoe. You can build a great jobs message, a great privacy and individual rights message. There’s a lot you can say on AI that fits deep societal conflicts and always makes for very effective electoral politics. On both sides, there will be politicians looking to make this their niche.
On the Republican side, you might think there’s a strong anchor for a more pro-AI message because it would be difficult for Vice President Vance to run too far from the administration’s record, which has been very pro-AI. One interesting thing to watch is the extent to which Vance tries to distance himself from the administration’s record. I suspect he’d like to keep the tech money and substantively does think AI is an important strategic priority for America. But he also doesn’t want to lose to someone who attacks his tech connections from the flank — we’ve seen DeSantis and Hawley testing that message.
On the Democratic side, it’s even more difficult because there’s no obvious moderate candidate who would run on a particularly pro-AI message. If you look at early speculative polling and prediction markets, probably someone like Newsom — but his record on AI is mixed: anti-regulation in some ways, deep ties to California donors, but also a record of regulating technology.
In general terms, primaries tend to pull towards extremes. The question is whether the AI industry and everyone who thinks the populist direction would be bad can find a realistic-about-AI message to anchor the moderate camp. And then whether that message can be stabilized in policy that addresses people’s anxieties. You can message-test boosterish things about how AI will change lives and create jobs, but I don’t think voters will trust politicians on their word alone.
Is there an AI policy you could pass in 2027, in the new Congress before the primaries, that renews voters’ faith in policymakers’ ability to handle the disruptions? That’s very hard. Even a super-ambitious expandable safety net or a big public wealth fund won’t pay out massively and noticeably by 2028. To some extent, moderates running on a less AI-skeptical message are going to have to do so against public judgment — this will be an elite consensus versus popular sentiment dynamic. It’s plausible that by 2029 or 2030 we see more positive economic effects from AI, and by the next midterm we get the electoral payoff. But I don’t see a way to stabilize this purely through policy by 2028. It’s going to have to be a mix of message and willingness to confront the gap between elite and public opinion.
Danny: Sounds like we’ll have to cross our fingers, because that is truly one way of derailing the project. You started writing your Substack early last year. What have you changed your mind on since?
Anton: I’ll split this into the international and domestic parts. On the international part, I started out much more pessimistic about the general awareness and ability of middle powers to coordinate. Even six months ago, I was very skeptical there would be any interest in coordinating or any progress being made on middle-power alliance building. By now, my main concern is that the motivation — whatever shape it takes downstream of broader political salience — is running through channels far removed from the conversation in San Francisco and DC. I now see that as the bigger issue in getting middle-power policy right. It’s not so much the gap in motivation as the gap in awareness of the specifics of the technical trajectory and what it might imply on the geopolitical level.
That’s a very different scope of challenge. A year ago, it was directionally helpful to just grab people by the shoulders and say: this is real, do something, and you’d hope that doing something would be directionally good. Now I think the “do something” impulse has many ways that go badly and end up counterproductive — on the sovereignty thing, for example.
On the domestic part, I was writing about the trajectory towards populist politics in general terms and the possibility of voter anxieties coming up. I’m still surprised by the extent to which policymakers seem willing and ready to run ahead of that sentiment. I was squarely in the camp of: we have a lot of time to get the technocratic project right, make compromises on informed legislation, pass it, and then see the political reality. The idea of a populist backlash was always more of a backdrop — if we don’t do this, it will eventually emerge. Now it turns out that whatever good AI policy we might want to make domestically, we have to do it while this movement is already live, while there are already people in Congress and partly on the streets, and significant popular reporting. The political salience makes everything much harder. It’s a live exercise, not: we get to do the technocracy first and then the political salience comes along. I’ve changed my mind on that, though I don’t quite know what it implies for strategy. I think it just means things are going to be a lot harder than I thought.
Danny: In terms of situational awareness, which countries do you rate most highly outside the US?
Anton: The UK, probably number one, in terms of government capacity and awareness. That’s interesting because the UK has by default been dealt a pretty bad hand in terms of economic structure and geopolitical situation — not the easiest starting position. But a lot of what the UK government has done, and the people in UK government, show very good awareness of what’s happening in the technology and the geopolitics. That’s partly downstream of AISI, partly the cultural and intellectual proximity across the Anglophone world, partly the presence of DeepMind. It’s head and shoulders above the rest of the middle powers.
Honorable mentions: Singapore, perhaps Japan, perhaps South Korea, and in its own way France — which draws distinctly French conclusions from a fairly similar set of information. I disagree with some of the conclusions they draw, but at least the information they have is sound.
Danny: Let’s say you were advising someone senior from one of the countries not on that list. What is the one thing that would most meaningfully shift their perception?
Anton: In terms of arguments, the question of frontier access is the one I would make and have made. As soon as you realize that frontier access is contingent and you have to do something about it, a lot of your other strategic priorities fall into place. In terms of what to actually communicate to get to that realization, it’s harder because there are so many factors and it’s mostly about future trajectory predictions. I still think just the projected inference demand versus inference supply makes such a big difference. It really turns around the logic of the buyer’s versus seller’s market on compute in the future and makes it much more clear why there’s a very realistic scenario in which you just don’t have access to the extent you’d like.
Danny: What should I have asked but didn’t?
Anton: You were fairly comprehensive. I would say on the political strategy side, it’s interesting to think about what the developers should do specifically and what their role is.
Danny: Yes, please.
Anton: I’m not sure the current strategy is working well for them. OpenAI specifically and many of the developers are still trying, in some of their public communications, to appear as grand arbiters of good policy ideas. And in other parts of their understandable industry-focused government affairs work, they’re working for what every industry works for — reducing the regulatory burden and shifting it around. There’s tension between those two, which results in less policy progress and less trustworthiness being built than could be.
What would be good to see is closer alignment between these two strands. There’s a version of the industrial policy blueprint that can be read this way: a policy that distinctly and credibly favors the developer relative to other regulatory arrangements — shifting a lot of responsibility to deployers and away from developers. The message becomes: let’s build a society that can handle us going at full speed, basically unconstrained. That’s an obviously self-interested policy play, but it’s one that still takes risks and threats seriously. Converging the government affairs work and the “let’s do what’s good for the world” work into one coherent policy vision that you can proactively campaign on, and then selectively advancing legislation that corresponds to it, makes for a much easier position to negotiate from than what we’ve seen recently — a wide gap between “here are all these risks we’re taking very seriously” and engagement in seventeen different state-level legislative efforts to stop anything that addresses these risks from happening. Seeing some convergence in that would be great.
Danny: With that, Anton — thanks so much.
Anton: Thank you so much.