Labor Market Impacts of AI – with Bharat Chandar
Danny Buerkli: My guest today is Bharat Chandar. Bharat’s a labor economist. He’s a postdoc at the Stanford Digital Economy Lab and got his PhD from Stanford GSB. Bharat is one of the surprisingly few economists studying the labor market impacts of AI, which is why I’m delighted to have you on. Bharat, welcome.
Bharat Chandar: Thank you. Excited to chat.
Danny: What do we know about the impact of AI on labor markets today?
Bharat: Great. So there’s still not a ton of research on this topic. What I can say is that in the United States, overall, we are not seeing major disruption being caused by AI. If you look across the entire economy, using our standard government data sets for studying these questions, then in my own work and in work by other academics, we’ve basically found that the jobs that are more exposed to AI are not seeing major disruptions from the technology so far, at least it seems.
They’re not seeing substantially different trends in terms of employment or wages and things like that. Now in our paper, Canaries in the Coal Mine, with Erik Brynjolfsson and Ruyu Chen, we try to dig into this a little bit more. There’s been a lot of discussion in particular about entry level jobs and whether those are showing adverse trends. There was speculation over the summer, especially, about whether some of those trends might be caused by AI, but there wasn’t a lot of evidence one way or the other.
So we wanted to bring some data to this. The way we did that is through a partnership with a company called ADP, which is the largest payroll software provider in the United States. With this partnership, we were able to track employment for millions of workers across the United States, across different jobs, spanning pretty much every industry. It’s a very large data set, so we could really dig into the specific occupations we might think are more exposed to AI. And going further, we could also dig into the age groups within those occupations that the discussion about adverse impacts has focused on, and track in the data whether those impacts are actually bearing out in practice.
When we looked at the data, we found that in jobs more exposed to AI, such as software development and customer service, there has in fact been a slowdown in entry level hiring. That’s not the case for more senior workers in those roles, and it’s also not the case for entry level workers in roles that are not as exposed to AI. One example we have in the paper is home health aides. That’s a job you would think is not very exposed to AI because there’s a lot of in-person interaction, physical interaction, talking to the patient, etcetera.
That’s a job where we’re in fact seeing faster employment growth for young workers than for more experienced workers. So overall, the narrative that these jobs are seeing a slowdown in the entry level labor market does in fact seem to be showing up. Now, there are certainly questions about whether that’s being caused by AI. And we tried, to the best of our ability, to test different alternatives that we thought could plausibly explain some of these trends. One example was tech overhiring.
There has been this idea that during the pandemic, tech companies hired too many people, and now they’re trying to work off that overhang in their workforce. But we got similar results if we took out the tech sector, if we took out any computer jobs, any coding jobs, so not even just software development. There was also the idea that it could be driven by interest rate changes, and I think that’s certainly one potential explanation. The way we looked at that is we looked at how exposed different occupations are to interest rates.
And it turns out that AI exposure is actually negatively correlated with interest rate exposure, and one way to think about that is that there are jobs such as in construction or transportation that are very interest rate exposed, but are not very AI exposed. And so you can kind of cut it between jobs that are more or less interest rate exposed, and you get the same results there. There’s an idea that it’s driven by outsourcing. You get the same results for jobs that are teleworkable versus not teleworkable. And another thing I’ll mention is there was this idea that maybe it could be driven by education disruptions during COVID.
So obviously, schools moved online, but it turns out that you get the same results for college graduates and for jobs that don’t involve a college degree. And what’s striking is that for the non-college graduates, you actually see it even at higher age groups. Up to age 40, you’re seeing these disruptions. So that suggests it’s not necessarily being driven by education either. That’s not to say we proved one way or another that it’s being caused by AI, but we tested some of these alternatives and the pattern still held.
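To make the kind of comparison described above concrete, here is a minimal sketch of how one could index entry level versus senior employment by AI exposure and then re-run the split as a robustness check. The data schema and column names are hypothetical; this illustrates the approach, not the paper’s actual code.

```python
# Illustrative sketch (not the paper's code): index employment by month, age
# group, and AI-exposure tercile, normalized to a base month, then repeat the
# calculation excluding tech and within interest-rate-exposure halves.
import pandas as pd

def employment_index(df: pd.DataFrame, base_month: str) -> pd.DataFrame:
    """Headcount by month, age group, and AI-exposure tercile, normalized to base_month."""
    df = df.copy()
    df["exposure_group"] = pd.qcut(df["ai_exposure"], 3, labels=["low", "mid", "high"])
    counts = (df.groupby(["month", "age_group", "exposure_group"], observed=True)["headcount"]
                .sum()
                .unstack("month"))
    return counts.div(counts[base_month], axis=0)  # equals 1.0 in the base month

def robustness_splits(df: pd.DataFrame, base_month: str) -> dict[str, pd.DataFrame]:
    """Re-run the index after dropping tech and within interest-rate-exposure halves."""
    out = {"all": employment_index(df, base_month),
           "ex_tech": employment_index(df[~df["is_tech"]], base_month)}
    median_rate = df["rate_exposure"].median()
    out["low_rate_exposure"] = employment_index(df[df["rate_exposure"] <= median_rate], base_month)
    out["high_rate_exposure"] = employment_index(df[df["rate_exposure"] > median_rate], base_month)
    return out
```

The columns assumed here (`ai_exposure`, `rate_exposure`, `age_group`, `month`, `headcount`, `is_tech`) stand in for whatever occupation-level exposure measures and payroll fields the actual analysis uses.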
And I think there’s certainly scope for more research here, and we’re lucky that we’re starting to see a little bit of that now. One example is a paper by Hosseini and Lichtinger, two grad students at Harvard who use data from Revelio Labs, which is essentially LinkedIn-style data. They look at companies that put out job postings referencing the implementation of generative AI within the company.
And then they look at companies that don’t do that, and they find patterns very similar to ours. Belki Klein-Tussink at King’s College London also finds a similar pattern in the UK. Now there’s one paper that kind of goes against that, which is the paper by Humlum and Vestergaard, which is interesting. They’re researchers in Denmark looking at companies that adopt versus don’t adopt the technology, and they actually find a different pattern.
They don’t find any differential trend in employment there. So I think we’re still in the early stages of trying to figure this out. One thing we can try to do is reconcile these different results that are showing up in the literature and what might be driving them. I think we want better adoption measures. And then, more fundamentally: is this being driven by AI?
Are there other explanations that we’re not accounting for? What might those be? But I think this trend of a slowdown in entry level employment, especially in jobs that are more exposed to AI, seems pretty robust and has been replicated in different places.
Danny: And with the exception of the Humlum and Vestergaard study that you mentioned, everything you’ve listed seems quite coherent with other studies that look at the macro effects, for instance the one that Molly Kinder and the Yale Budget Lab did, which shows no aggregate effect in the US labor market. That is perfectly compatible, I believe, with what you found, which is that there are some effects in some very specific pockets of the labor market.
Bharat: Yeah. This is an important point. So we should mention that if we’re looking at 22 to 25 year olds, that’s a pretty small share of the workforce. And so we can see pretty concentrated impacts for those workers in jobs that are more exposed to AI. But then when we zoom out across the economy, that’s not necessarily gonna appear, especially if we’re looking at these more aggregated government statistics. Like, how much of this can we separate from just noise and who’s getting sampled versus an actual impact?
Once we’re aggregating it up to that level, we might not be able to kinda tease those apart.
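To see why concentrated effects can vanish in the aggregate, here is a back-of-the-envelope calculation. The numbers are purely illustrative assumptions, not figures from the paper.

```python
# Purely illustrative numbers: suppose 22-to-25-year-olds in highly AI-exposed
# occupations make up about 2% of total employment, and their headcount falls 15%.
share_of_workforce = 0.02   # assumed share of the affected group
group_decline = 0.15        # assumed employment decline within that group

aggregate_effect = share_of_workforce * group_decline
print(f"Implied drag on aggregate employment: {aggregate_effect:.2%}")
# ~0.30%, small enough to be hard to distinguish from sampling noise in
# aggregate, survey-based government statistics.
```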
Danny: What was your prior going into this?
Bharat: That’s a good question. So I wrote a paper in May or so where I was doing this exercise of zooming out across the economy and using the government data to compare employment in more versus less exposed jobs. And I got results very similar to what you’re mentioning from the Molly Kinder paper: I was not finding differential trends in employment across more or less exposed occupations. But that didn’t get at the question about entry level workers, which is where a lot of the narrative and a lot of the discussion was. And we just didn’t have the sample size or the reliability in the data to get at that question as well.
So that’s why we wanted a larger dataset where we could get at some of these questions around entry level employment, especially in AI exposed jobs. Going in, I guess I didn’t have much information to form an opinion about this. There were discussions in the media, different narratives about whether this was or was not being driven by AI, but I don’t think there was good data, one way or another, on how employment was actually changing in these jobs that we could rely on for an estimate. That’s where we wanted to bring in the data. So I would say that before we wrote the paper, I was just very uncertain about what was happening with that specific group.
But I do think that I had some amount of confidence that overall across the economy, we weren’t seeing major changes.
Danny: If you were to deliver a steelman critique as it were of the Canaries in the Coal Mine paper, what would that look like?
Bharat: I think the steelman critique is just that we don’t have an experiment where some companies—I mean, it’s actually pretty tricky. You can’t even necessarily say we need an experiment where some companies use AI and some companies don’t. That might get you some of the way there, because you can see how implementing it is changing those companies’ hiring. But the problem with that is that there are also anticipation effects. It could be that I haven’t fully fleshed out my AI strategy within the company at the moment, but I’m anticipating that I’m going to do that in the future.
And I don’t want to hire a bunch of people right now whom I might not want to retain in a year or two, because of the changes in AI that I’m anticipating implementing, which would mean I don’t need to have those people on the payroll anymore. So that’s another thing that makes this tricky. I do think it would help a lot to have better adoption measures, and there have been some innovations on that front, whether it’s via job postings, or earnings call records and how people talk about AI adoption in those, which I think is also a good way to go about this, or other adoption measures based at the firm level.
And then there are also survey-based estimates. That’s what Humlum and Vestergaard do, where they survey employees about whether they’re using AI. So I do think there are innovations happening on the front of measuring firm adoption better, and that could go a long way towards addressing some of this, but there are other factors involved too that won’t necessarily be solved by that, including this anticipation issue.
Danny: Now your paper has an evocative title, Canaries in the Coal Mine. If the effect we see on young workers is real, which other effects would you expect to see in the future?
Bharat: Well, I think there’s a lot of uncertainty about what we might expect going forward. It could be that right now we’re in an adjustment period where firms are getting used to the technology and making a lot of investments to figure out how to use it properly. And once they do that, they’ll potentially reverse course and start hiring these young workers again, especially since young workers might have greater capacity for adjusting to these technologies in the short run and knowing how to use them. So that’s one possibility. Another possibility comes from the capabilities, which are improving very quickly: the AI technologies today are very different from what they were in 2022, and they’re much more powerful now.
That could mean we see greater impacts on more senior workers, so the effects might not be concentrated on entry level workers going forward. There are a variety of scenarios that could play out here, and the uncertainty is mostly around both adoption and model capabilities, and what kinds of tasks the models will be able to perform going forward. That makes it difficult to assess whether these are trends that are going to continue. So far, we haven’t seen any reversal.
I recently updated the data through November, and we didn’t see any reversal. We’re now more than three years out from the release of ChatGPT, and those trends are just continuing, insofar as that tells you anything. But in one year, in two years, are we gonna see the same trends? I think there’s a lot of uncertainty around that.
Danny: Now if we think about the aggregate effects, the aggregate potential future effects of AI, if you had to place yourself somewhere between Anton Korinek, “things will go wild,” and Daron Acemoglu, “maybe not so much to see here,” where would you place yourself?
Bharat: I would place myself somewhere in the middle of that range. I do think that economists are probably underestimating the growth in capabilities over time, and there are reasons that might be the case. Maybe those improvements don’t fit as well within our existing models, and I think there’s also hesitancy about making predictions about future capabilities. That’s not something economists typically do very often. So that might be contributing to it.
But I do think that we’re seeing these massive improvements in model capabilities. I use AI very differently today than I used it a few years ago. So I think we should be cognizant of that. Now, I do think that the economists have some good points around bottlenecks in the economy, areas where it’s going to be more difficult to roll out these technologies. And there could be a variety of reasons that that’s the case.
There are regulatory reasons or cultural reasons within the firm that might prevent them from using these technologies. There could be slowdowns driven by challenges with implementation. So what are the data requirements that you need? What are the security guarantees? What are the privacy guarantees?
I do think there are certain sectors of the economy where we’re not going to see as fast an improvement. And of course, there are all the industries that require physical or in-person interaction and things like that. So I do think we want to take that seriously. Maybe robotics will change that, and there will be a much larger scope of industries and occupations that are exposed going forward. So I’m sort of in the middle here, where I take seriously the idea that there could be serious bottlenecks, but I’m also cognizant that there are massive improvements happening to capabilities.
Danny: And how do you think through how improvements in capabilities translate into diffusion and then some form of labor market impact and then presumably some form of aggregate impact? Because presumably, diffusion is also on some level a function of capabilities themselves. It’s not just a purely exogenous variable.
Bharat: I think the first person I saw make this point is Jonathan Maslisch, a growth economist who also studies some of these questions. It’s a great point that adoption is also a function of capabilities. The more things these models are able to do, the more possibility there is for adoption. And maybe that’s one reason we’re seeing very fast adoption of AI: one, it’s a general purpose technology, but also, more and more applications are being developed.
Multimodal is much further along than it was before. Agentic coding is much better than the tools we had when these models were initially released; it’s being integrated into codebases better. So that’s just one example. I do think we could see many more applications that are directly relevant to work.
And I think one example of that is the GDPval set of results coming out of OpenAI. They’re trying to look at real world tasks and how well the models can perform in completing those tasks. So they look at different occupations: what are the most common things that people do in these occupations, what are typical workflows, and can the model produce something that an unbiased observer would prefer relative to a human output on that same task?
And what was striking is that the main failure point for the models in performing those tasks was actually the output they were producing. So if you wanted a spreadsheet, the models weren’t very good at creating the spreadsheet or the slide deck or whatever it was. And it turns out, I was just looking at this, that the more recent models have taken this to heart in their development, and they’ve tried to improve the output coming out of the models so that it better matches the requirements of these tasks. If you look at ChatGPT 5.2 or Gemini 3 or the most recent version of Claude, they’re doing a much better job of producing output in the form these tasks are looking for, whether that’s a spreadsheet, etcetera.
And so I think as people become more cognizant of what the actual requirements of the work are, and I think we’re moving in that direction and people are taking this very seriously, we could see a broader range of applications going forward, which could also drive adoption.
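For intuition, here is a minimal sketch of the kind of scoring a GDPval-style evaluation implies: per-occupation win rates from grader judgments comparing a model deliverable with the human-produced one. This is an illustration of the idea, not OpenAI’s actual methodology or code.

```python
# Sketch of a GDPval-style tally: for each occupation, count how often graders
# prefer the model's deliverable over the human one, counting ties as half.
from collections import defaultdict

def win_rates(judgments: list[dict]) -> dict[str, float]:
    """judgments: [{"occupation": str, "verdict": "model" | "human" | "tie"}, ...]"""
    scores, totals = defaultdict(float), defaultdict(int)
    for j in judgments:
        occ = j["occupation"]
        totals[occ] += 1
        if j["verdict"] == "model":
            scores[occ] += 1.0
        elif j["verdict"] == "tie":
            scores[occ] += 0.5
    return {occ: scores[occ] / totals[occ] for occ in totals}

example = [
    {"occupation": "financial analyst", "verdict": "model"},
    {"occupation": "financial analyst", "verdict": "human"},
    {"occupation": "lawyer", "verdict": "tie"},
]
print(win_rates(example))  # {'financial analyst': 0.5, 'lawyer': 0.5}
```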
Danny: What you’re pointing towards is this idea that we keep shifting our model of what we think the binding constraint is on these models doing really economically useful work. For a while it was maybe things like factual recall, and then it was things like the ability to produce a slide deck or a spreadsheet with actual formulas that work. Some people think it’s something like continuous learning. And this may be true; one version of the world is that at some point we aggregate enough of these capabilities and that unlocks everything. But historically, it would seem that every time we figure out one thing, we realize that there’s yet another thing we would need.
Where do you come down in that debate?
Bharat: Yeah. There’s actually something where I’m trying to figure out whether it’s a fundamental bottleneck or not. Right now, the way we interact with AI, especially with agents, is that we’re increasingly, at least I am, developing these more complicated prompts to specify what we want the model to produce. And especially for longer horizon, more complicated tasks, we need greater specificity in what we convey to the model about what it should be producing. It can’t necessarily come up with that on its own.
It needs guidance on what we want it to do. And then it’s amazing at the execution. But right now, we still need to specify what it is we want. In practice, the way this usually works is that we put in a prompt that’s maybe not as detailed as it needs to be, we get some output, we observe it, we iterate on it, and we improve the aspects where it didn’t do as good a job of implementing what we had in mind, which could be because we didn’t explain it well enough.
And then we do that iterative process. That process is mostly not about the execution itself, but about us trying to communicate what we want to the model. And I think the question is: is that the way things are going to be forever? Is there just an inherent bottleneck caused by us needing to specify our preferences to the model so that it can do the execution well? Unless we can exactly explain what we want, it can produce whatever it’s capable of producing, but it needs to be something that’s useful to us.
And that requires a lot of manual intervention on our part, which is just literally us expressing our preferences, or even clarifying what our preferences are. When I write my initial prompt, I may not actually have a good idea of what it is that I want the model to produce. And part of that iteration is me understanding better what it is that I want. And it’s not immediately obvious to me how to solve that problem. Like maybe the models get a better sense of who I am as a person and what my preferences are, but those are also moving over time.
So I am often confused by this, and I don’t know to what extent we’re going to be able to reduce this bottleneck, which is just us communicating with the models and specifying what we want. Annie Liang—she’s a professor at Northwestern—has an interesting recent paper about this, where she considers theoretically this idea that the models may not have a good sense of what it is that we want them to produce. She gives the example of matching in the marriage market or the dating market. I may have certain attributes I want in a partner that I need to communicate to the model.
And then I want it to go out, go on the dating app, and find people who match those preferences. But my exact preferences over a partner are super complicated. They’re not easy to communicate well. I may need some time to iterate and refine what it is that I want. So if I were just sitting here telling the agent, oh, this is what I’m looking for in a partner, it would not do a very good job of actually picking out that person, because there’s a lot that I’m not communicating to the agent that is extremely relevant to who I would actually want to match with.
So I think that’s a good example of where this bottleneck is coming from and why it’s going to be, I think, pretty tricky to solve. But I’m not on the model development side, and maybe they’re thinking very carefully about this and how to solve that problem. At least for me, backing up, I’m not exactly sure how to solve that bottleneck.
Danny: Would that be—the example you gave, would that be because your preferences are too high-dimensional and therefore too difficult to express explicitly?
Bharat: Exactly. Yeah. So it’s specifically about the dimensionality. In her work, that is the key force that makes it difficult to offload the preferences to the model and just let it do the execution. So for simpler problems where I can easily communicate what it is I want, and I just need the model to go solve this very simple problem, that is much more possible now and going forward.
But if it’s something where I need to communicate something that’s quite a bit more difficult, then that could make it more challenging to kind of offload the execution and actually get something that I’m looking for.
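As a toy illustration of the dimensionality point, here is a small simulation sketch. It is not Annie Liang’s actual model; the setup (linear preferences over many attributes, of which the user can only communicate a few) is an assumption chosen purely to convey the intuition.

```python
# Toy simulation: the "true" preference is a weight vector over many attributes,
# but the user only communicates their top-k attributes. The agent optimizes the
# stated (truncated) preferences; we measure the utility lost versus the true best.
import numpy as np

rng = np.random.default_rng(0)

def regret(n_attributes: int, k_communicated: int, n_candidates: int = 200) -> float:
    true_w = rng.normal(size=n_attributes)                 # full preference vector
    top_k = np.argsort(-np.abs(true_w))[:k_communicated]   # attributes the user mentions
    stated_w = np.zeros(n_attributes)
    stated_w[top_k] = true_w[top_k]                        # everything else goes unsaid
    candidates = rng.normal(size=(n_candidates, n_attributes))
    true_utility = candidates @ true_w
    agent_pick = np.argmax(candidates @ stated_w)          # agent optimizes stated preferences
    return float(true_utility.max() - true_utility[agent_pick])

for k in (2, 5, 20, 50):
    losses = [regret(n_attributes=50, k_communicated=k) for _ in range(200)]
    print(f"communicate {k:>2} of 50 attributes -> average regret {np.mean(losses):.2f}")
```

The more of the preference that goes uncommunicated, the larger the average regret, which is the flavor of the bottleneck being described.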
Danny: The counterexample would be, well, human matchmakers exist, but the counter to that in turn would be, well, yes, but they were a fairly small market, it would seem. So maybe that is an inherent limitation.
Bharat: Well, the matchmaker exists, but you still go through hours and hours of dating to figure out, is this the person that I want to spend my life with? So
Danny: I was thinking of human matchmakers. Right? There are marriage markets where humans do the matchmaking. Need not be marriages.
Bharat: That’s correct. They can help you find potential candidates, and then you invest the time in evaluating whether this is the person you want to match with. But I still have to invest all that time, and the matchmaker doesn’t exactly know what I want.
Danny: Fair enough. What you’re also getting at is this fundamental question of automation versus augmentation, and there is an argument that says, well, with all these evals, specifically GDPval and others, we’re in fact incentivizing the wrong thing. We’re in a sense incentivizing labs to build automation capabilities, whereas we might actually prefer them to build augmentation capabilities. The counterargument is to say, well, LLM capabilities seem highly, highly correlated. The model that’s best at math is also likely to be the best at legal advice, etcetera.
And that in turn implies that we may not actually be able to differentiate between increasing augmenting capabilities and increasing automating capabilities, because they’re ultimately one and the same. How do you think about that problem?
Bharat: Right. I think this is a great question, and it’s something I have been thinking a lot about. Can you direct the technologies to be more augmentative rather than automating? And I think the main point that gives me pause here, or at least uncertainty about to what extent this is possible, is exactly what you’re talking about: LLM capabilities are extremely correlated.
Models that are very good at math are also very good at other things. And so an implication of that—and I think Tom Cunningham, who is now at NBER, is someone whose work has been very provocative for my thinking on this front.
Danny: And I owe this point to him, I should say.
Bharat: Yes. Yes. He makes a great point, I think, that if the model capabilities are extremely correlated in this way, it’s going to be more difficult to direct them in whatever direction we want them to go, toward what we think is more socially valuable. An implication of that is that it might be more difficult to direct the technology in an augmentative direction instead of an automating direction. Now, I’m actually working on an essay right now that I’m hoping to release pretty soon, but I do think there’s one case where we could certainly be investing more to make the technology more augmentative by design, which is developing it in a way that makes it more conducive to learning.
So when we think about augmentation, we’re basically talking about areas where the technology could increase human capabilities, so we’re able to do more with the technology than we were able to do before. And I think a great example of where we can see augmentation and an increase in those capabilities is improving how well the technology enables us to learn. The history of the twentieth century gives a great example of that. If you look at the beginning of the twentieth century, essentially no one in the United States, something like 10% of people, had a high school degree.
And the United States was actually cutting edge on this front. It was much further ahead in terms of universal education than other advanced economies. But even then, only 10% of people had a high school degree, and these were either people who were very rich or extremely academically talented. That changed very quickly. By the 1940s or the 1960s, the share had shot up to 70%, 80%, 90%. So there was a very rapid improvement in education in the United States, in universal access to that education. And I think that had an enormous effect on the economy, both in the United States and elsewhere. Over the first seventy or eighty years of the twentieth century, we simultaneously saw massive productivity growth and falling inequality, because this expansion of educational opportunity allowed many more people to pursue higher forms of work that were better paid, etcetera. And I think we could see a similar transformation with AI technology.
In the past, the way we increased our capabilities was to spend more time in school, and that’s actually starting to have some negative impacts. A good example of this is that Ben Jones from Northwestern has a few papers documenting this increasing burden of knowledge. If you look at the age at which major inventors have their first breakthrough innovation, it’s increasing over time. Earlier in the twentieth century, you reached your first major invention by something like age 32 on average. Today it’s closer to age 40.
And the reason is that I need a lot more knowledge, I need to learn a lot more, and I need to spend a lot more time in school to get to the frontier of a field than was the case in the past, because we’re building on the shoulders of giants in our profession. But suppose that instead of requiring people to spend more time in school to reach the frontier, we actually improved the learning technology, the rate at which people can get to the frontier, so they don’t have to spend twenty-five years in school like I have, but instead we can compress that timeline through personalized learning and other innovations that could be enabled by AI. That could be, I think, pretty transformative.
That would, one, be augmentative, so it would increase human capabilities, and, two, improve productivity growth, because humans themselves would be able to make much better use of their time, labor productivity would rise, etcetera. It could require a lot of investment in improving the infrastructure and the technology around education, but in my view, that’s a great direction for us to go in to make the technology more augmentative. And I also think it’s actually pretty robust.
In pretty much any scenario of AI outcomes, it’s essentially always good to make people more capable by improving learning. Maybe in the future we won’t have any work, right? Even in that world, if we improve this learning technology so that we’re all much smarter and we can all learn anything in, like, three months, our leisure will be better because we’re more knowledgeable.
Or in the world where there are severe bottlenecks or adoption frictions that prevent AI from being as transformative as it could be, if we do improve learning and we improve human capabilities a lot, that is actually setting a baseline on the productivity improvements we could see. So I do think this is a robust solution. It’s not solving all the problems, but I think it goes a long way towards making this technology something that is better for humans.
Danny: So I suppose your claim is not that that would prevent the technology from overshooting, as it were, and veering into automation territory. Your claim is that it would be optimal in the sense that under all conceivable futures, we would still prefer to have that learning accelerated versus not.
Bharat: Yes. Yes. I think that’s right.
Danny: Speaking of learning, one thing I think you’ve also thought a fair bit about is critical thinking, which seems increasingly important, possibly the scarce input and the binding constraint when it comes to working with AI. How should we strengthen critical thinking skills?
Bharat: Right. I do think this goes hand in hand with what I was saying about learning. Part of the objective of improving learning is making it easier, and giving people more incentive, to develop those skills in a way that’s maybe not as demanding as how we currently cultivate and evaluate them. So it could be that AI improves that process as well. But one point I want to mention is that we should think about the equilibrium or incentive consequences of AI for developing those critical thinking skills.
So imagine that we’re in a world where, because of AI, knowledge work is significantly displaced or just not as important as it was in the past. Right? If that’s the case, my return to really investing in my critical thinking skills when I’m in school will be lower, because the monetary payoff is just not as high as it used to be. Over the past fifty years or so, the returns to a college education grew, so the college wage gap is much larger now than it was in the seventies. And that’s because of the direction of technology.
But if that reverses and this is a different kind of technology, one that’s not skill biased, that’s not increasing the college wage gap, that’s not making it more lucrative to go to school for longer, then that may not be the case going forward. And we could see less investment by people at universities or at the K-12 level than we see right now. So it’s essentially your basic incentive trade-off: if we see these compressions in inequality in the labor market, that could, interestingly enough, create perverse incentives for my investment in education when I’m in school. Now, that may not necessarily be the case. I think it’s an open question whether AI will increase or decrease the demand for critical thinking skills.
And it gets back to what I was talking about before in terms of specifying what I want to the model, and how persistent a bottleneck that will be. If this increasingly becomes a world where we’re managing teams of agents to act on our behalf, then that increases the importance of correctly specifying what I want. And that’s not a trivial problem. That’s the task of a manager or an executive who is running a company or managing a team of people. And that presumably requires a lot of critical thinking.
And so I think it’s not obvious whether in the long run, like this is going to increase or decrease critical thinking skills, but I do think that we need to kind of sort through these incentives and what the implications might be for education. And maybe that also encourages us to think about how to develop the technology for learning in a different way.
Danny: Speaking of management, you’ve worked closely with Nicholas Bloom. How do you think LLMs will impact management quality in firms?
Bharat: Management quality. So I do think that there’s been some discussion about whether it might lead to a flatter structure of firms. And I think the rationale for that is each person essentially becomes a manager where they’re managing a team of agents to act on their behalf. And so maybe we don’t need as much of a hierarchical structure within the firm where I’m directing employees under me to execute, right, because they themselves are just directing the agents to execute on their behalf. And so that is a potential implication of that.
It could be that management skills become, one, increasingly important, but also, two, increasingly widespread and demanded by companies, because everyone is essentially gonna be a manager offloading execution to AI entities. So that could become a more important task even for entry level workers finding their first job. And in turn, that could lead to flatter work structures. So that’s one way I’m thinking about it.
Danny: Now if we think about the overall potential effect on the labor market, the standard response to some of these unemployment scenarios is that this is the lump of labor fallacy. We’ve been there before. There isn’t—we’ve talked about this. There isn’t a finite amount of work. This is partial equilibrium thinking in what is really a dynamic equilibrium.
People will find new things to do and the conversation is kinda silly. Let’s not even go there. Now it’s possible that that will hold true, but it seems to me, you and many others also take quite seriously the possibility that, in fact, this time, it may be different. What would make it so that it will be different this time?
Bharat: Right. I think there are two dimensions to this, one in the shorter run and one in the longer run. Over the longer run, the way I’m thinking about it is how quickly the model capabilities can improve such that they end up automating an enormous share of tasks, so that there’s just not a lot left that humans can do at a greater capacity than AI. And again, there are a variety of bottlenecks that people have posited that could prevent something like that from happening, including what I was talking about before in terms of specifying what your preferences are.
But that, I think, is one scenario that could make it very different. It depends on how the capabilities evolve over time, how quickly we see improvements in physical tasks and robotics and things like that. One thing I would want to think about is that in the past, when new technologies displaced labor, new forms of work were created as a result. And humans, because of our malleability, were able to move into those forms of work with growing labor demand and shift away from jobs that were decreasing in demand. But if those new tasks that are created are also being done by AI, that could lead to a different outcome.
So over the long run, I think the question for me is: are these bottlenecks gonna prevent something like that from happening widespread across the economy, or are we going to see that these new tasks are also being done by AI, which would make it more difficult for many people to find work? In the shorter run, the big question I’m trying to think through right now, and I think this is more immediate, is whether, in the areas where we’re going to potentially see job displacement, the people in those occupations are going to be able to adjust by finding alternative forms of employment. That’s what we’ve seen in the past. People have faced displacement.
Today, the unemployment rate is under 5% despite all the enormous technological change we’ve seen over the past hundred years or so, because as certain sectors that are being automated face decreasing labor demand, new forms of work are created that people can move into instead. Now, the question is: are the people who are being displaced going to be capable of moving into those new forms of work, the new labor demand that’s being created? Or are their skills so far afield from those new pockets of potential demand that they won’t be able to make that transition well? So a really important question here, and this is a project I’m working on right now, is thinking through these equilibrium implications, the implications of allowing people to adjust to the scenarios they’re facing. Are there pockets of the economy with occupations that face displacement, where the people in those occupations are going to have a hard time finding work?
And do we want to target interventions towards those specific groups of workers? At the same time, are there pockets of the economy where occupations that are potentially facing displacement are actually not going to have that much trouble finding new forms of work? So, number one, we want to look at the historical record: when you’ve seen displacement like this in the past, what did the displaced people do instead, and how easy was it for them to find alternative work?
And then do some forecasting and scenario planning: which occupations do we think might face more displacement, and how easy will it be for those workers to find alternative work in areas that are gonna see growing labor demand?
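As a sketch of what that kind of adjustment analysis could look like in code, here is a minimal example combining a historical occupation-to-occupation transition matrix with projected displacement and demand growth. The data schema and numbers are hypothetical; this illustrates the idea, not the actual project.

```python
# Hypothetical sketch: flag displaced workers whose historical destination
# occupations are themselves projected to shrink.
import numpy as np

def workers_toward_shrinking(transitions: np.ndarray,
                             displaced: np.ndarray,
                             demand_growth: np.ndarray) -> np.ndarray:
    """
    transitions[i, j]: historical share of leavers from occupation i who moved to j
                       (rows sum to 1).
    displaced[i]:      projected number of displaced workers in occupation i.
    demand_growth[j]:  projected demand growth in occupation j (negative = shrinking).
    Returns, per origin occupation, the number of displaced workers whose
    historical destinations are shrinking.
    """
    shrinking = (demand_growth < 0).astype(float)
    share_to_shrinking = transitions @ shrinking
    return displaced * share_to_shrinking

# Tiny illustrative example with three occupations.
T = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0]])
displaced = np.array([1000.0, 200.0, 50.0])
growth = np.array([-0.10, 0.05, 0.15])
print(workers_toward_shrinking(T, displaced, growth))  # [0., 100., 10.]
```

An analysis along these lines would flag the origin occupations whose displaced workers mostly flow toward shrinking destinations as candidates for targeted interventions.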
Danny: How should we think about demographic change and migration as part of this? This may be more relevant for certain European economies than for other places, but on some level, if you wanted, you could think of some of these technologies as a substitute for migration, and also as a substitute for the workers that you’re losing because of demographic change, with all the attendant political implications, of course.
Bharat: Right. I think this is a fascinating point, and I think it leads to some of the differences in response to this technology that we’re seeing between different countries. So I do think that this is quite salient for places like East Asia, where they’re seeing this kind of demographic crunch with much lower birth rates. And I don’t think that labor displacement is necessarily top of mind for them in relation to this issue—they’re just facing a labor supply crunch where they don’t have enough workers, and that’s going to be a growing problem going forward. And so they would love it if there could be AI that could essentially mimic a huge influx of workers entering the economy.
Now that said, I do think this could be quite problematic, certainly in the short to medium run. The reason is that if you look at places like the United States, a disproportionate share of innovation is being done by immigrants. If we make it more difficult for immigrants to enter the country, that could slow down the rate of progress in developing these technologies, or push the development of those technologies to other places that are more receptive to immigrants. So I do want to be cognizant of that. Even now in these AI companies, a lot of the development is happening through the work of immigrants, since that’s where the brightest minds working on these problems congregate from around the world. So I definitely think that’s a first order issue in the short to medium run.
Danny: What will you work on next?
Bharat: So there are three big things on my docket right now. One is measuring the international impacts of AI. That’s work I’m doing with Belki Klein-Tussink at King’s College London. We’re trying to extend this canaries-style analysis to countries around the world, using data from Revelio, which has employment information for different countries. We first want to clarify in which places we’re seeing these impacts on entry level workers and in which places we’re not, and how the trends vary by country: developing countries versus more developed countries, Western Europe versus other parts of the world, places where there might be more of these adoption frictions versus fewer.
Then we also want to take seriously this idea of measuring adoption at the firm level, using job postings to measure that, and get a better sense of whether this is being driven by firms that are actually adopting AI or whether something else is going on, maybe anticipation effects or other things influencing employment outcomes that are not being driven by AI. So we wanna extend that analysis internationally. The second thing I’m working on, like I was just mentioning, is modeling these equilibrium implications and doing the scenario planning around how workers will adjust to potential displacement, and identifying the pockets where we might want to target interventions towards occupations that are going to have a hard time adjusting.
And I was actually just talking to my boss, Erik Brynjolfsson, about thinking through some of these Baumol cost disease type ideas. Certain sectors of the economy, health care for example, are going to represent a growing share of importance going forward. How do we incorporate those ideas in terms of where we’re going to see a greater need for employment? So that’s all part of the modeling, calibration, and simulation of these labor impacts: where we’re going to see these impacts and where workers will have difficulty adjusting. And then number three is thinking more about these education ideas.
Even just to start: how is AI impacting education? There was a nice piece in The Economist recently surveying the research in this space. But I do think we need to know a lot more about how AI is impacting school, how it’s impacting curricula, how it’s impacting students, as well as what careers students want to pursue. I think we’re still in the very early days of understanding some of these things. So I think we need to collect a lot more data about it.
And then I think we want to think more seriously about, can we design tools that build on top of these models and improve the rate of learning? I think if we could do that, that could be a huge benefit to society as a whole.
Danny: Bharat, thank you very much.
Bharat: Yeah. Thank you. This is a great discussion. Thank you for the questions.
Danny: Thanks for listening to High Variance. You can subscribe to this podcast on Apple Podcasts, Spotify or wherever you get your podcasts. If you like this podcast, please give us a rating and leave a review. This makes a big difference particularly for newer podcasts like this one.