Intelligence Saturation and the Economics of AI – with Ioana Marinescu
Danny Buerkli: My guest today is Ioana Marinescu. Ioana is an associate professor at the University of Pennsylvania and a research associate at the NBER, the National Bureau of Economic Research. She was a principal economist at the US Department of Justice antitrust division from 2022 to 2024. She sits on Anthropic’s Economic Advisory Council and has done much work on labor markets, antitrust, and AI. Ioana holds not one but two PhDs, one from LSE and one from the School for Advanced Studies in the Social Sciences, EHESS, in Paris.
Ioana, welcome.
Ioana Marinescu: I’m very happy to be here.
Danny: Ioana, is the “lump of labor” fallacy a fallacy, or should we call it the “lump of labor fallacy” fallacy?
Ioana: I think it pretty much is a fallacy. It’s really easy to get carried away with simplistic reasoning about the economy, especially when you think about a sophisticated economy like we have today.
Danny: We’re recording this in February 2026. A small research firm, Citrini, published a 7,000-word scenario report just two days ago and ended up — maybe surprisingly or not — moving markets down.
Ioana: Crazy.
Danny: Surprising. I did not see that coming. Maybe to get us started, what did you think of the scenario they described?
Ioana: I read that and thought it was interesting — they’re thinking through all of the bad things that could happen, and many of those things could happen. But I felt the economics links were often somewhat tenuous and not fully thought through. That’s why we need to sit down and think through every piece of the mechanism. Could that happen? Under what conditions? And so on. It is useful in terms of pointing out possible adverse effects, but I don’t think they’re necessarily going to happen.
To be fair to them, they did frame it as a scenario. And I think the markets are generally feeling a little nervous right now about AI. The scenario somehow hit a nerve and triggered a panic.
Danny: If I were to summarize the core thesis of the scenario: AI gets really good, it displaces a lot of humans, and that leads to a drop in aggregate demand — and that’s a bad thing. What exactly is wrong about that story?
Ioana: The fundamental problem is that they don’t really think about how workers reallocate. Their story is largely about intelligence workers ending up as, say, Uber drivers — that was one of their examples. They seem to think there will necessarily be a wage decline for these people. But if you think about it in a more economically principled way, that doesn’t necessarily follow, because the progress of AI in some sectors will often have a positive effect on what we call the marginal productivity of workers in other sectors.
As a generic example: thanks to AI, the Uber driver might now be able to draw on a wider range of services. The service itself becomes higher value to customers, so the driver's value could go up. They don't think through that side of things, which seems quite plausible. The question is: which effect dominates? That's part of what we're trying to explore in our paper.
Danny: Speaking of your paper — you have a recent paper out, the intelligence saturation paper. One of the novel bits is that you posit we may wish to think of the economy as having two parts: the intelligence part and the physical part. Maybe introduce the idea briefly.
Ioana: Interestingly, the Citrini report talks about the “intelligence crisis,” so I was struck by the parallel with my paper. The basic assumption is that AI is possibly going to replace a lot — potentially all — intelligence jobs. What do we mean by intelligence jobs? It’s essentially any job you could do remotely at your computer. If you can do it at your computer, plausibly down the line AI can do it. And don’t tell me “what about Zoom calls?” — the video generation is getting much better, and you’ll be able to have an AI avatar that does calls.
If that’s the case, then people have to work in what we call the physical sector, which is everything else — essentially any job that requires some in-person activity. It’s not necessarily physical as in “I need to do something with my hands,” but I need to be there in person as a flesh-and-blood human being in order to deliver the job, and this has to be a significant part of the job. I might also be working with AI, but fundamentally, I’ve got to be there in person. Those are the physical jobs as opposed to the intelligence jobs.
Danny: How do you think of the separation between the two? One may think this is not necessarily a static boundary and that it’s endogenous to AI capabilities in the first place.
Ioana: Absolutely. And I should say why my co-author and I — he is an AI and neuroscience expert, very much thinking about physical embodiment and the interaction with intelligence — why we think this framework is useful. If you look at price trends, the price of AI is going down exponentially, and that’s probably why it’s so exciting. But the price of physical capital — the kind you would need to do something in the physical world, even when using AI — is going down much more slowly, more linearly. In terms of economic incentives, the big incentive is to replace people in the intelligence sector because it’s so cheap to do with AI, whereas doing anything with robots remains fairly expensive.
Relatively speaking, what’s most advantageous is to expand the use of AI in intelligence tasks because it’s so cheap. I found it intriguing that the Citrini scenario had intelligence workers going to drive Ubers — which is a physical job. They have to be there in person to do it.
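The relative-price dynamic Ioana describes here can be sketched numerically. The sketch below uses made-up starting prices and decline rates purely for illustration; the point is only the qualitative contrast between an exponentially falling AI price and a linearly falling physical-capital price.

```python
# Illustrative sketch (toy numbers, not data): an exponentially
# declining AI price versus a linearly declining physical-capital
# price. The ratio of the two collapses quickly, which is the
# economic incentive to automate intelligence tasks first.

ai_price = 100.0        # hypothetical starting price index for AI
physical_price = 100.0  # hypothetical starting price index for physical capital

for year in range(11):
    ratio = ai_price / physical_price
    print(f"year {year:2d}: AI={ai_price:8.3f}  physical={physical_price:6.2f}  ratio={ratio:.4f}")
    ai_price *= 0.5        # exponential decline: price halves each year
    physical_price -= 3.0  # linear decline: price drops by a fixed amount
```

After ten years in this toy setup, AI is roughly a thousand times cheaper while physical capital has fallen by less than a third, so the cheapest margin of substitution is overwhelmingly in the intelligence sector.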
Danny: Others who have modeled this are Anton Korinek and Donghyun Suh, who took a different approach in a 2024 paper: they parceled out every task in the economy, ordered the tasks by how difficult they are to automate, and assumed that AI will progressively work through that list. If Korinek were to critique your model thoughtfully, what would he say?
Ioana: Our models are fairly similar. The intelligence part of our model is pretty much the same as theirs — we're also saying AI is going to replace more and more intelligence tasks, plausibly starting with the easier ones and progressing to more difficult ones. But the question is whether there's a physical sector where AI doesn't make many inroads. That's the key contention, and it's an empirical one.
In their paper, they discuss the case of whether there will be some reserved array of tasks for humans. What we’re saying is similar, but it’s not that it’s reserved — it’s just technically very difficult to replace people in physical jobs for many reasons. It’s not easy, it’s not very cost-effective. You can do it, but at what cost? Relatively speaking, it’s easier to replace workers in the intelligence sector.
Danny: One of the beautiful things about your paper — and this is the explicit aim — is that it allows us to reconcile different intuitions that the more computer-science-oriented AI folks have with the intuitions that classically trained economists have. What kind of reaction have you gotten from the AI community?
Ioana: AI people tend to be smart, so they get the point. But usually what I hear is: the robots are going to get so much better, and thanks to AI, the robots themselves are going to get cheaper, faster, and self-improving. Maybe that's possible. We just don't think it's very plausible, because that gets to our point about intelligence saturation. Just adding more and more intelligence to a physical process is unlikely to make it tremendously better. There are multiple pieces of evidence from prior experience suggesting that it's difficult. You can never say never, but the idea that using AI to improve robotics will solve all your problems certainly faces a high hurdle.
A lot of AI people are also aware that developing robotics is actually not easy. Right now it’s still very difficult. One of the funny things is that you see on X and other social media all these cool robots doing human-like things — but often the videos are accelerated because the real speed at which these robots operate is so much slower. You would lose patience watching them do a task. That shows you how powerful human bodies are for many of these tasks — human hands, for example, are incredibly well optimized. It can be done, but at what cost and speed for a robot to achieve the same performance?
Danny: David Autor has this distinction between highly qualified tasks and lower qualified tasks. The intuition is: if I automate the lower qualified tasks, that’s good for the worker because I’m left with the bundle of highly qualified tasks. But if I do the inverse — automate the highly qualified tasks — I’m left with a bundle of non-qualified tasks. That’s a problem. How does that mesh with your model?
Ioana: That’s an interesting way of looking at it. Our perspective is more macro. It’s also more simplified because we only have one type of worker in the economy. But the bottom line is that you really have to think about this at the macro level — what are the interrelations between sectors? Whenever we say workers are displaced from somewhere and go somewhere else, and that somewhere else might currently be lower wage, I want to say: wait — that somewhere else could become much higher wage in the near future because of the general economic growth that AI is generating.
As an intuitive example, take primary school teaching. It’s an in-person job — physical, in my definition. We’ve seen that online education just doesn’t work for most people, so the in-person experience is really important. In that sector, there have been very few productivity gains. A teacher fifteen hundred years ago probably didn’t do things hugely differently. And yet teachers today are paid a lot more than teachers fifty or a hundred years ago, because the economy is better and that pulls their wages up along with the rest of the economy.
Similarly, jobs that today aren’t well paid could become higher paid due to the general productivity growth that AI generates. It’s an important positive channel that people often overlook. The partial equilibrium may not be the general equilibrium.
Danny: Which parameters would you be watching most closely to understand where we’re headed?
Ioana: First, you want to look for a decrease in the share of workers in intelligence jobs. If that’s not happening, the revolution hasn’t started yet — there’s got to be worker reallocation. Then the question is whether the size of that reallocation is sufficiently large relative to the productivity gains in the intelligence sector.
The intelligence sector — the one where everything could be done virtually — is where AI is most easily deployed. As we deploy AI, does it raise the output of this sector a lot? And are relatively few people displaced from it? That would be a good scenario where wages are likely to grow. The more you see the productivity effects of AI in the intelligence sector weakening for a given amount of labor reallocation, the more likely it is that wages will decline.
Those are the two core forces, and they depend on parameters. In particular, it depends on how easily you can replace a physical job with a virtual one — for example, how easy is it to replace an in-person primary school teacher with an AI doing the teaching? If it’s relatively easy, you could see a fairly large wage decrease as automation progresses. It could even be the case that wages rise at first, buoyed by general productivity gains, but as we realize it’s actually pretty easy to make everything virtual, wages could decline as automation continues. So one core factor is how substitutable physical and virtual jobs are — how easy is it to replace an in-person job with a virtual one and achieve about the same result?
Danny: A lot of these conversations end up coming back to a bet on future capabilities. Is it ultimately all a function of capabilities, absent maybe some hard physical constraints?
Ioana: For sure. But something AI folks don’t think enough about is that they often translate technical capabilities of AI directly into economic value, and those could be very different. Just because something is technically possible doesn’t necessarily mean it’s economically efficient.
Danny: To this exact point, there are two ways AI can have an effect on the economy today. One is through diffusion — the technology exists, it spreads, it gets actually used. The other is through capability increases. Which effect is currently dominant?
Ioana: That’s a good question, and I don’t think we have robust empirical evidence to say. What’s interesting is that right now it’s still a bit unclear if AI is doing much of anything in terms of labor market effects or productivity effects. We’ve seen some productivity gains, but is it really AI? Is it other things? Empirically, we’re not sure where we stand.
However, those two things are very important theoretically and very different. In our paper, we carefully distinguish automation — pure automation — from capability growth. When I think about automation, it’s about taking a given stock of AI, however capable it is, and applying it to more and more things. That’s where people potentially get displaced from the intelligence sector and must reallocate to the physical sector, and that can have negative wage effects under certain conditions.
What you’re talking about with increased capabilities — I think of that as capital deepening or AI scaling. That means we have more AI or better AI, so the total amount of AI available goes up. From economic theory, that’s always a good thing in itself. If you just add more capital to the economy, it makes workers more productive. In every basic model, that increases wages. In our model, once we automate everything that could possibly be automated, if we can still add more and more AI, that extra AI is going to benefit workers. But that could come after a catastrophic decline in wages.
Danny: In the long run —
Ioana: It will all be fine.
Danny: Exactly. To butcher a quote. To this point, you’ve suggested we may wish to smooth this transition. If we were to worry about a potential decline, we might not want to stave it off, but we might want to steer the speed — slow automation. The obvious counter is: how should we think about the trade-off between the smoothness of the transition and international competitiveness?
Ioana: This is a difficult question. First, in our model, the model itself is not well set up to think about transition costs because workers are all the same and can costlessly transition from an intelligence job to a physical job. It’s simplified to understand what would happen in that situation. In the real world, workers who are forced to change sector typically experience significant wage declines of about 15%. That’s significant, and from a policy perspective you might want to guard against it. There is economic theory suggesting you could want to tax AI a little bit just to slow things down — give workers more time, don’t have them all get replaced at once.
When you do that, you’re also delaying the growth effects of adopting AI. That’s an obvious trade-off. And then there’s international competition. But here our paper has something original to say: how important is it to be first? There’s this race to AGI where everyone thinks that whoever is first will transform the world. Well, maybe not. That’s where intelligence saturation comes in — the more intelligence you add, the less incremental effect it has. If that’s the case, even if you’re the first to achieve greater capabilities, you’ll only be a little bit ahead of the next. So it’s a trade-off, but it’s not this infinite advantage where going first gives you superpowers. To believe in that narrative, you really have to believe that intelligence saturation is not a thing — that having much more capable AI gives you a massive lead over everyone else.
Danny: In your model, definitionally, there is no such thing as strong AGI that automates all physical labor. Therefore —
Ioana: Therefore, it’s a bottleneck. You can always add more, but —
Danny: Right. I guess the critique would be that that’s an assumption baked into the model, which therefore produces this result.
Ioana: Absolutely. And you could see it more generally: maybe you're not happy with my characterization as "physical." Relabel it as whatever sector you think AI has trouble automating, and the mechanics would be the same. We think physical is a useful label, but if we believe there's some kind of work that is difficult or not cost-effective to automate — again, it's about the economics, not necessarily that you couldn't technically do it — then the same reasoning would apply.
We also have to think about different time horizons. There’s no doubt that in the medium run, at least, there are sectors that will be incredibly difficult to automate. We say physical, but it could be other more subtle things. Our model remains valid as long as you replace “physical” with “non-automatable” and “intelligence” with “automatable.” You would get similar dynamics.
Danny: The model holds as long as there’s something that we believe may not be practically automatable. And if everything is practically automatable, then we’re all better off, as some people have pointed out.
Ioana: Exactly. And one more thing. The intelligence saturation assumption hinges on two things. First, there are some tasks that are not automatable. Second, there is a complementarity between this non-automatable sector and everything else. Because if you can just substitute one for the other, then AI could still grow and you wouldn’t have the bottleneck — you could have very strong growth. The core of intelligence saturation is that there are two sectors: physical and intelligence. The physical is non-automatable, intelligence is automatable, and the two sectors are complements. That complementarity — the fact that you cannot easily substitute a physical thing with an intelligence thing — is what makes intelligence saturation kick in. You add more and more intelligence, and it only does so much. That also means that if China or whoever gets to AGI first, well, that’s nice, but they will only be a little bit ahead.
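The complementarity mechanism Ioana describes is the standard behavior of a CES production function with an elasticity of substitution below one. The sketch below is my own toy illustration with made-up parameters, not the paper's calibration: it holds the physical input fixed and scales up the intelligence input, showing output saturating.

```python
# Hedged sketch (toy parameters, not the paper's model): a CES
# aggregate Y = (a*P^rho + (1-a)*I^rho)^(1/rho). With rho < 0,
# physical input P and intelligence input I are complements
# (elasticity of substitution below one).

def ces_output(P, I, a=0.5, rho=-1.0):
    """CES aggregate of physical input P and intelligence input I.
    rho < 0 means the inputs are complements, which is what
    produces intelligence saturation."""
    return (a * P ** rho + (1 - a) * I ** rho) ** (1 / rho)

P = 1.0  # physical sector held fixed: the bottleneck
for I in [1, 2, 4, 8, 16, 1000]:
    print(f"I={I:5d}: Y={ces_output(P, I):.4f}")
```

With these toy values, output approaches a hard ceiling of 2 as intelligence grows without bound: the first doubling of I raises output by a third, while going from 16 to 1000 adds almost nothing. That is the saturation logic behind "getting to AGI first only puts you a little bit ahead."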
Danny: It will only get you so far. On the more practical policy end, in an essay in the Digitalist Papers volume two, you proposed two policy solutions to this transition dynamic: an AI adjustment insurance and a digital dividend that would fund it. Say more about the AI adjustment insurance mechanism in particular — why that mechanism and not another one?
Ioana: The idea with AI adjustment insurance is that workers have to transition from intelligence jobs to physical jobs, and there are still jobs — people just have to switch. As I said earlier, in practice, unlike in my model, this is not costless. People often lose 15 percent of their earnings. They spend time unemployed. We already have programs that help, but to the degree that we make a policy choice not to limit AI and to let it run, there’s an argument that we should help people who are adversely impacted. Just as with trade — when we decided to have more free trade, we knew some people would be negatively affected, and we put policies in place. The Trade Adjustment Assistance program existed and worked really well for those who qualified and were displaced by trade. AI adjustment insurance follows the same logic: help people transition, acknowledge that it’s difficult, and support workers through it.
The program has three elements. It has longer unemployment benefits, so people have more time to search. It has training. And the third, perhaps most unusual element, is wage insurance. If you take a job that pays less than your prior job, the insurance covers a percentage of the difference. Say you take a job that pays $1,000 less — if the insurance covers 50%, it would pay you $500. That helps people make the transition. With Trade Adjustment Assistance, the wage insurance was shown to be incredibly effective at helping people return to work. And amazingly, it actually made the government money because people returned to work and paid back the subsidy through payroll taxes. It’s a really appealing program design to encourage and support worker reallocation, which we generally expect with any big technological change.
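The wage-insurance arithmetic in Ioana's example can be written as a small helper. This is purely illustrative; real program rules (caps, eligibility windows, duration limits) are omitted, and the function name is my own.

```python
# Illustrative only: the wage-insurance payment from the example.
# A real program (e.g. Trade Adjustment Assistance) has caps and
# time limits that this sketch leaves out.

def wage_insurance_payment(prior_wage, new_wage, coverage_rate=0.5):
    """Pay a fraction of the wage loss when the new job pays less;
    pay nothing if the new job pays the same or more."""
    loss = max(prior_wage - new_wage, 0)
    return coverage_rate * loss

print(wage_insurance_payment(5000, 4000))  # $1,000 loss at 50% coverage -> 500.0
print(wage_insurance_payment(5000, 5200))  # no loss -> 0.0
```

The design point is that the subsidy is conditional on taking a job, which is why it encourages reemployment rather than discouraging it.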
Danny: Thomas Piketty, whom you know well, pointed out that r is larger than g — the return on capital is larger than economic growth — and that leads to concentration of capital. What happens to r and g in your model?
Ioana: What we see in our model is that the labor share goes down. Even when wages increase, AI in this model disproportionately benefits capital. Then it depends on who owns the capital: the more broadly capital ownership is distributed, the more broadly everyone shares in those benefits.
This gets us into my second policy proposal: the digital dividend. The idea would be a small tax on the AI sector broadly — not just the makers of AI but everyone who uses it, because they get a lot of benefit from using it. We could put this in a fund where people receive the benefits. Or there could be other schemes — the government could take some participation in companies and then distribute the returns. It’s really important to think through this transition in terms of its effects on inequality, given that it’s likely to benefit capital. Where does the income from capital go? That becomes an important distributional question that one shouldn’t neglect when thinking about what sort of world we want to live in.
Danny: There are some interesting political economy questions there. Something a lot of people worry about — probably rightly so — is that the timing matters a great deal. There may be a point after which it becomes very difficult to change the regime you’re in, because capital has concentrated to such a degree that it would be even more challenging than it already is to do something about it.
Ioana: That’s why I was so interested in thinking about whether we can set up these policies now, sooner rather than later. With the digital dividend, if the labor market isn’t too disrupted and people are able to transition and things are okay, then maybe we don’t need it. But if it turns out there’s huge job displacement and the physical and intelligence sectors are very substitutable — so people lose their jobs, some can’t find new ones — then we could transition this dividend into something bigger, more like a UBI-style program that covers more people, including young people who don’t qualify for unemployment benefits. With the AI adjustment insurance I discussed first, you have to have worked enough to qualify. If you’re a young person without work history, you don’t qualify for anything. If this is going to hit young people hard, the dividend is a way to cover them. Similarly, even if you have benefits, they eventually expire.
A core aspect is that we don’t know exactly what’s going to happen, so we need a flexible system that can adapt. The fundamental danger is that once winners and losers are clearly known, it can be difficult to persuade winners to give some crumbs to the losers. It’s better to have something in place ahead of time.
Danny: You want some kind of Rawlsian veil of ignorance before this all hits — which is the elegant thing about your proposal. And as you point out, particularly for the digital dividend, the amount need not be fixed. What you want is to have the rails in place to collect it and the mechanism in place to distribute it, but you want to be able to change the rate depending on the needs.
Ioana: Exactly. If most of the economy is automated, you can just put a tax on all firms — that’s what I call expanding the base. First, I say let’s put the tax on AI-related sectors, because that can also slow things down a little, which has some benefits. But ultimately, if most jobs are replaced anyway, we can expand beyond that. Having a flexible system in place is really important.
I also want to add that it shouldn’t only be about redistribution of income. That’s very important — people need an income to live — but there are other considerations. For many people, jobs have a lot of non-monetary value. They provide community. They give a sense of excellence in developing your skills. Not having jobs could hit people’s well-being in ways that go beyond not having an income. We need to think about what people would do if that were to happen — what could replace the non-monetary benefits of jobs?
And as you were saying about the concentration of capital, it’s really important to think about power and who decides. If a lot of the benefits go to capital and we change nothing, then whoever gets those benefits has even more power. In that sense, it’s important to think today about the trajectories — what will it look like tomorrow and who has the power to decide. At a very basic economic level, if the owners of capital get most of the benefits, all this AI boom will be directed to satisfy their needs and not necessarily the needs of other people. The economy responds to whoever has the money to buy — that’s how the capitalist system works, and it’s very effective at that. So it’s not just about redistribution; it’s about predistribution — the conditions that will shape the economy and society at the next stage.
Another element of my proposal that speaks to this is the training side. That element will influence what jobs are created, because firms want to hire in domains where there are people who want to work and have the right skills.
Danny: And they don’t want to shoulder the training cost if they can avoid it.
Ioana: Exactly. If we had more training in physical jobs — health care in particular seems like a very good sector to invest in, because it’s growing for other reasons and it’s a physical sector — that would likely encourage the development of the sector. If we invest in training today, it will likely encourage the development of health care tomorrow. These are ways we can shape outcomes. It’s not just about making sure people have money tomorrow — it’s about how we shape our economy and society so that tomorrow we end up in a better situation.
What counts as “better” is in the eye of the beholder, and that’s why we should debate the goal. Then we can run models and scenarios: with this policy, you could get this result; with other policies, you could get other results — and at what cost each time. I’m not claiming it’s obvious what you want to achieve, and reasonable people will disagree. But it’s important to think ahead about what instruments we have and what we can achieve with them.
Danny: You point out in the intelligence saturation paper that investments in capital in the physical sector are a way of shaping the trajectory.
Ioana: Exactly. People, according to our model, are going to transition from intelligence jobs to physical jobs. A big reason their wages go down is overcrowding: imagine there’s a fixed number of hospitals, and now we have twice as many nurses — that’s bad for productivity. So you need more hospitals.
We haven’t worked it out in the model, but there will be market incentives to invest there because it’s a market that can be captured. But if we think that’s not enough — and there are policy decisions and other objectives at play — we might want to encourage the sector through training (which in some sense subsidizes the sector, since firms don’t have to train workers themselves) or through other types of subsidies. That changes the structure of our economy by giving extra advantage to certain sectors that are likely to absorb the workforce as people are displaced by AI — ideally sectors with reasonably good jobs that we can train for.
Danny: You mentioned predistribution. There’s a domestic view of that, but also an international view. In a conversation in London not long ago, something people worry about a lot is that if you’re a country that doesn’t own frontier models — on the assumption that frontier models will continue mattering — you can imagine a world where a lot of your GDP flows to a place like the US. That generates several complications, including that you would really struggle to collect any meaningful digital dividend. What to do?
Ioana: This problem is more subtle than it may seem. First, less developed countries have a much smaller share of their workforce in intelligence jobs. The displacement threat is therefore far lower, and the extent of the labor market problem could be much more limited in these countries. That might also mean they’re not getting the same productivity boost, but on the upside, they won’t be as severely impacted as a country like the US where the largest share of the labor force works in intelligence jobs.
Second, it really matters how competitive the AI sector is. There are reasons to believe there’s a lot of competition — many different models developed by many different actors, some incredibly cheap. They might not be quite as good as the frontier model, and that’s where intelligence saturation comes in. If you think being a little bit better is tremendously important — so there’s no intelligence saturation — then maybe it’s a problem. But if being a little bit better is just a little bit better, then poorer countries can use a cheaper, nearly free model and achieve almost the same results.
The potentially less grim view for developing countries is: first, they have fewer workers who might be affected; second, they can use AI very cheaply if the sector stays competitive and if using a better model doesn’t yield huge additional benefits. They have the chance to develop their economies at low cost without getting hit by huge job losses. Some things could go wrong, but it’s not obvious that things will be worse there. In some ways, they could be better.
Danny: You’ve done a lot of work on UBI. The underlying question is: is work a good or a bad?
Ioana: It depends — the economist’s favorite answer. In the very basic model in economics, people work to make money. They don’t like working, so you have to pay them. If that’s all there is to it, then great — no more jobs, we still get money, wonderful.
But if there are non-monetary benefits of work that aren't easy to replace, then it's more nuanced. Are there other ways we could get those benefits — through volunteering or other social structures that would provide community and skills development? I think the second view is more realistic. Some jobs are really bad and people would rather not do them. But a lot of jobs, even those that might seem bad from the outside — people might have colleagues they really like, even if the job itself isn't great, and that really matters to them.
We can’t assume it’s necessarily for the best that we eliminate all work. We have to think about what jobs are good for besides income and how we can get those benefits in different ways. Let me give an example. With the digital dividend, one approach — what I’m proposing — is that people simply receive income at the individual level. But you could devote some of the dividend to local community investment trusts, where people are called upon to decide how the money should be invested locally to develop their neighborhood or community. That could be a kind of job for people — administering and deciding what they want to do at the local level. That would restore more power, meaning, and engagement. Those things aren’t always easy to design well, but it’s worth thinking about. In Europe, there have been schemes like this — transition funds in Spain, for instance, where they closed mines and local communities had to decide how to use the funds to support people who had lost their jobs.
Economists often think about the money, and the money is very important. But there are other elements important for people’s well-being that we shouldn’t neglect, and we can address them both theoretically and empirically by learning from existing experience.
Danny: We have a lot of empirical evidence on the effects of UBI from various experiments. You’ve done much of this work. One limitation is that these experiments all happen in a world with work. How much of those insights would transfer to a world with no work or almost no work?
Ioana: That’s a valid question. First, people are really worried that UBI will disincentivize work. In that case — hooray, we don’t need to worry about that because there’s no work to be disincentivized.
The benefits of UBI don’t really depend on work existing. It’s rather that the removal of work could cause harm, including non-monetary harm, that we need to think about rectifying. We know that unemployment causes mental health problems. In fact, researchers have shown it increases mortality — people who get laid off are more likely to die. The baseline probability is very low, so don’t panic, but it becomes more likely. Unemployment has real adverse psychological impact.
Jobs can be very important to people, and UBI is nice as far as income goes, but it doesn’t by itself provide the other benefits of jobs. You probably need some other social structure for that. There’s a real policy design question here, because some say people will just invent that by themselves. I think that’s tough. As an individual, it’s often difficult to create the kind of structure that would really help you have meaningful activity, and often it has to be social. Then we have the classic coordination problem, where policy could really help.
Danny: You can invent it, but it only really works if everyone around you buys into some similar notion of what the alternative is. If it’s just you and two friends, that feels difficult.
Ioana: Exactly. That won’t necessarily happen just because people have incomes. The coordination problems are significant. That’s where having the right incentives and social structures in place can help generate meaningful activities for people.
Danny: If we go back to a world with employment and to your model — in a world of transformative AI, would you expect employment tribunals to be sympathetic to workers or to firms?
Ioana: It’s funny you’re asking me that because I have a paper about this, where I looked at the impact of economic conditions on how employment tribunals decide. To the extent that the situation is dire for workers, tribunals might have a bit more sympathy for the worker side. But it’s hard to tell. If we really think AI is going to displace jobs on a massive scale, this sort of thing might slow things down a little, but it’s not going to drastically change things — especially because there’s a lot of talk about AI-native companies that start with a completely different model and won’t have people to lay off. I think it’s important to support people through the process, but there are limitations to how much you can slow down AI deployment. It’s more about having a comprehensive approach so that we get good results for society overall, rather than going on a fool’s errand of saying there will be no AI in this country.
Danny: I thought you might say that employment tribunals might rule in favor of firms, because AI-induced competition has become so intense that they feel for the firm rather than the worker. Do you expect AI to increase or decrease labor market concentration?
Ioana: That’s a tough question. I think there will be new firm creation due to AI — new business models become possible. But plausibly those firms might not employ many people, almost by definition. So while there might be new firms, which typically helps decrease concentration, if these firms don’t have a lot of employment, I don’t know how useful that will be.
There are reasons AI could help small firms. Right now, it’s hard to do certain core functions — accounting, HR — well when you’re really small, and that might be an impediment. But as AI tools automate many of these business functions, it could make it easier to be a small firm. You could have many small firms rather than big behemoths that exist in part to absorb the fixed cost of complex departments. But it’s quite speculative, and I’m not sure which way it would go. Big firms and incumbents will naturally try to reinforce their position. How it comes out is fairly unclear.
Danny: That would imply that, from a labor market concentration perspective, you’d expect more mergers to go through, because they just wouldn’t affect labor market concentration. But it’s unclear whether anyone would want to merge at all given what you’ve just described.
Ioana: Exactly. It’s unclear how the equilibrium develops. This is uncharted territory. But usually when there’s a new technology, it reshuffles not only workers but firms — which firms are successful and so on. Whether that leads to more or less concentration is hard to tell at this point.
Danny: For something completely different: what is your favorite theory of divorce?
Ioana: I have written a paper on that. Why do people divorce? There are two big theories. Either you get to know somebody, they seem great, but then you live with them and it turns out they’re not — you made a mistake in assessing compatibility. Or you like them and they are great, but over time people change, and in some cases you don’t like how that change happens.
It turns out we can use data to disentangle these theories. In my paper, I showed that the big cause of divorce seems to be that people change. It’s not that you chose the wrong person to start with — you probably chose reasonably appropriately. But people change, and sometimes they change for the worse. That seems intuitive, too. Nowadays people usually don’t rush to marry. They know each other, they often marry later, so they have a good idea of compatibility. But statistically, we can show that it’s mostly not about insufficient information, but rather that people change, and sometimes the changes aren’t appreciated by the other person.
Danny: I suppose that would imply AI will not change the rate of divorce, because I could imagine a story where AI improves the information I have about a potential partner ex ante. Maybe AI will also make us change more — that seems less plausible intuitively, but who knows.
Ioana: It might not change much unless it makes you change more rapidly. I use AI a lot. I don’t know if I’ve changed. Paradoxically, sometimes I feel like it makes my job harder, which is a funny thing to say because it’s very useful. But given the type of work I’m trying to do — having deep insights, things that matter, things that are well-checked and well-developed — I use AI to make things even better. In the end, I spend the same amount of time or sometimes even more, because now I have this tool to push things further.
It ups the ante. I have this super tool, so I can’t be satisfied with something that’s merely pretty good. Now I need to make it even better. Paradoxically, it’s the source of more stress than you’d think. At first you think: I can do so many things! But then your bar goes up.
Danny: But now you can do so many things, and you can do them almost perfectly.
Ioana: Exactly. That’s tough. Not to mention that I am in large part one of these intelligence workers whose jobs are at risk. So there’s an identity crisis looming. It’s not easy to be in the middle of this.
Danny: Joking aside, do you believe your job is actually at risk?
Ioana: In the short run, no. Adoption is slow, I’m very privileged, and I have my unique strengths. I also teach students in person, so that helps.
Danny: So you’re also part of the physical sector.
Ioana: Exactly, that’s the physical side. But as far as my research job, the amount of progress the tools are making is definitely scary. Judgment remains very important, and that’s why I think I still have a lot of value. But we don’t know where this is going.
Sometimes nowadays I find myself purposefully writing things my own way, even though I recognize that maybe it’s worse — less smooth. But that’s how I think, that’s how I express myself. There’s this value in authenticity, but it’s warped, because what am I doing? I’m making things almost worse on purpose. It’s a bit like pottery — handmade, organic. You could have industrial pottery that’s perfect, but it’s not authentic. That’s definitely driven by an identity threat, because I pride myself on doing really good work, not on being authentic at the core. I think: I have all this expertise, I can do all this cool stuff. And now AI can do a lot of this cool stuff. What becomes more important is my judgment and taste — what I think is important, and maybe my quirks of expression. And I don’t entirely like that. I need to contend with a new way of thinking, and it feels uncomfortable.
Danny: You’ve studied philosophy. There’s a great book by Lionel Trilling, Sincerity and Authenticity, where he essentially argues that authenticity is overrated in many aspects of our daily interactions.
Ioana: That’s what I used to think.
Danny: But maybe that has now changed.
Ioana: Exactly. I used to think: all these people talk about authenticity — who cares? The important thing is to get great results. But now I think: maybe I need to go back to authenticity, if I still need to matter, because AI can do the work. Not quite — there are limitations. But it’s getting better by the day. It’s definitely an identity threat, and I think it’s going to reach more and more people. Things are reorganizing every day, and I’ll probably adapt. I’m not in any immediate danger — as I said, I’m very privileged. But it’s more a psychological kind of threat. How do you adapt to a somewhat different understanding of what you’re for, what you’re good for? We live in a weird moment.
Danny: My related pet theory is that politically, possibly the most salient group will be highly educated academics in high-prestige, relatively low-wage jobs. They’re currently compensated by prestige, and so they put up with a comparatively low wage. They tend to have political influence, and LLMs are really quite good at producing output that looks suspiciously similar to theirs. It goes to the heart of professional identity. If you already don’t have a lot of money and then the societal prestige goes away too — historically, that’s been a dicey setup.
Ioana: I think it’s hard. If you see it through the lens of our theory, it’s about whether people like me can find meaning in physical jobs — which in my case would most immediately be teaching. But there are other elements, like in-person mentoring and consulting, as long as it’s important for it to be in person. Because otherwise, why wouldn’t you just ask the AI?
Some roles might remain because we don’t want them to be AI — judges and politicians, for example, because we often require them to be in person. Counseling those people might be interesting. But even for them, if it’s just about intellectual input, they could use AI more and more. I believe in my own model that most intelligence jobs can eventually be largely replaced, including the purely intelligence side of my own job. That’s an uncomfortable thing to sit with.
But if you go back in history and assume nothing cataclysmic happens — if it’s just that we have to do different jobs — the artisans who did textiles by hand put immense pride in their expertise. When they couldn’t make a living anymore because of factory textiles, that must have been terrible. But in the end, people adapted and did different jobs. It surely wasn’t a great feeling to be in the middle of it.
Danny: Definitely not. Final question: what should I have asked but didn’t?
Ioana: I think we really covered a lot of ground, including some surprising topics. Nothing comes to mind right now.
Danny: Well, with that, Ioana, thank you so much. This was really fun.
Ioana: Thank you.