Conversation with Noah Smith

Here is a lightly edited transcript of my email conversation with Noah Smith on the subject of employment in an AGI world.


Subject: Thoughts on "Plentiful, high-paying jobs in the age of AI"
------------------------

From: Steve Newman
Date: Thu, Mar 21, 2024 at 11:05 AM

Noah –

Hi, my name is Steve Newman. I appreciated your recent piece on comparative advantage and AI, and I gather that by now several of our mutual acquaintances have mentioned that I had some thoughts about the topic. Apologies if you're feeling bombarded. I mentioned in my (fairly obscure) blog that I'd enjoy connecting with you on this, expecting to get zero hits but hoping for one, and apparently overshot.

(I'm also a happy subscriber to your blog)

I've written up my thoughts in a terse form for you here. If you're interested enough to take a look, I'd love to hear your thoughts. I like to think I have avoided the boring mistake of confusing comparative advantage with competitive advantage, and raised some different points which at worst would at least be novel mistakes.

Regards,

Steve

----------
From: Noah Smith
Date: Fri, Mar 22, 2024 at 12:42 AM

Thanks! Traveling now but will take a look soon!

----------
From: Steve Newman
Date: Fri, Mar 22, 2024 at 5:48 PM

👍

Steve Newman reacted via Gmail

----------
From: Noah Smith
Date: Fri, Mar 22, 2024 at 6:36 PM

I wrote some comments in the doc; check them out! :-)

----------
From: Steve Newman
Date: Sun, Mar 24, 2024 at 8:10 AM

Thanks for engaging on this!

I feel like we're still not connecting on what strikes me as the central point. It cuts across a few different comments in the doc so I'll articulate it here.

From the standpoint of a planetary overlord who only cares about increasing material wealth, in the strong-AGI world, wouldn't the obvious choice be to discard humanity and pour all surplus into increasing the supply of AIs? (Which I shorthand as "building more chip fabs" but of course it depends on where the AI bottleneck lies.) This would pay for itself fairly quickly. Why would I instead choose to scale back my fab-building project so as to free up resources to keep some people alive? This is my core hangup: I don't see why – from a strict maximizing perspective – we wouldn't go all in on building fabs fabs fabs. (And of course solar panels and mining robots and whatever other balanced mix of activities results in maximal AI production.)

In other words, while I'm not sure I'm using the terminology correctly here, I am suggesting that the opportunity cost which would lead me to employ humans would be overridden by the larger opportunity cost of failing to build more chip fabs.

At any point in time, there is a limit on our ability to turn energy into compute; these resources are not short-term fungible. That means that in the short term, compute is always limited, even if that limit is a different number at different points in time.

But from a long-term perspective, wouldn't the overriding consideration be to invest resources in increasing our ability to turn energy into compute?

Concretely, perhaps there is an agricultural robot which can be reprogrammed to help clear land for a new chip fab. Or land used for farming could be repurposed for a solar farm. Or the materials needed to construct / maintain housing stock could be used for more robots. Or the electricity needed to keep people's lights on could be used to power those construction activities.

It just seems very shortsighted to bother feeding, clothing, and housing a population of inefficient human workers, when you could instead plow those resources into increasing the supply of compute.

(Of course, as you note, "Ideally we'd just distribute ownership of the fruits of AI production".)
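To make that opportunity-cost intuition concrete, here is a toy compounding sketch; the growth rate and the share of output needed to support people are invented numbers, chosen only to show the shape of the effect.

```python
# Toy compounding model of the "all in on fabs" intuition.
# The growth rate and human-upkeep share are invented for illustration.

YEARS = 20
RETURN_ON_COMPUTE = 0.5     # assumed: reinvested capacity grows output 50%/year
HUMAN_UPKEEP_SHARE = 0.2    # assumed: share of output diverted to support people

def capacity_after(years: int, reinvest_share: float) -> float:
    """Productive capacity after compounding, starting from 1.0."""
    capacity = 1.0
    for _ in range(years):
        capacity *= 1 + RETURN_ON_COMPUTE * reinvest_share
    return capacity

all_in = capacity_after(YEARS, reinvest_share=1.0)                      # ~3,300x
with_people = capacity_after(YEARS, reinvest_share=1 - HUMAN_UPKEEP_SHARE)  # ~840x

print(f"Reinvest everything:        {all_in:,.0f}x after {YEARS} years")
print(f"Divert 20% to human upkeep: {with_people:,.0f}x after {YEARS} years")
# The gap keeps compounding, which is why a pure wealth-maximizer might treat
# feeding and housing human workers as an ever-growing opportunity cost.
```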

> Note that comparative advantage kicks in when there is still some resource that AI consumes more of, even if it consumes less of most resources like energy. In my post, this resource that AI uses more of is hypothesized to be COMPUTE. That happens if and only if there is a bottleneck in the compute production process such that energy can't be cheaply turned into compute at arbitrary scale. For a hint that this might happen, Google "Rock's Law".

I don't see how Rock's Law applies. Sure, the cost of individual fabs continues to rise, but aggregated across the global economy we would presumably still manage to build new fabs? Or if you mean that eventually we'd reach some limit where the aggregate demand is insufficient to build even one instance of the next-generation fab... then presumably we'd continue building fabs of the previous generation?
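(For context: Rock's Law is the observation that the cost of a leading-edge fab doubles roughly every four years. A back-of-the-envelope sketch of where that leads, with ballpark starting figures assumed purely for illustration:)

```python
# Back-of-the-envelope look at Rock's Law (fab cost doubles roughly every 4 years).
# The starting fab cost and world GDP figures are ballpark assumptions.

FAB_COST = 20e9        # ~$20B for a leading-edge fab today (rough)
WORLD_GDP = 100e12     # ~$100T world GDP, held fixed for simplicity
DOUBLING_YEARS = 4

years, cost = 0, FAB_COST
while cost < WORLD_GDP:
    years += DOUBLING_YEARS
    cost *= 2

print(f"Under these assumptions, a single next-generation fab would cost more")
print(f"than current world GDP after roughly {years} years (${cost/1e12:.0f}T).")
# That runaway cost is why Rock's Law gets cited as a possible limit on compute
# production -- and the question above is whether the response would simply be
# to keep building previous-generation fabs instead.
```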

I really would like to understand this, as of course this is an important question for modeling future worlds and setting AI policy.

Cheers,

Steve

----------
From: Noah Smith
Date: Sun, Mar 24, 2024 at 7:30 PM

> From the standpoint of a planetary overlord who only cares about increasing material wealth, in the strong-AGI world, wouldn't the obvious choice be to discard humanity and pour all surplus into increasing the supply of AIs? (Which I shorthand as "building more chip fabs" but of course it depends on where the AI bottleneck lies.) This would pay for itself fairly quickly. Why would I instead choose to scale back my fab-building project so as to free up resources to keep some people alive? This is my core hangup: I don't see why – from a strict maximizing perspective – we wouldn't go all in on building fabs fabs fabs. (And of course solar panels and mining robots and whatever other balanced mix of activities results in maximal AI production.)

This is similar to the classic Econ 101 situation where one person is born owning all the resources in the world, and everyone else starves to death. (It's even Pareto optimal!)

> In other words, while I'm not sure I'm using the terminology correctly here, I am suggesting that the opportunity cost which would lead me to employ humans would be overridden by the larger opportunity cost of failing to build more chip fabs.

In order for comparative advantage to work, you need some sort of producer-specific constraint. In this case, it means you can't just get rid of humans and use all the energy for compute, because you're limited in the amount of compute you can build.
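A minimal numerical sketch of that producer-specific constraint, with all quantities invented for illustration: even when the AI is absolutely better at every task, a hard cap on AI-hours means total output is higher when humans take on the work where their disadvantage is smallest.

```python
# Toy model of comparative advantage under a binding compute constraint.
# Every number here is invented purely for illustration.

AI_HOURS = 100           # hard cap: limited compute
HUMAN_HOURS = 1_000      # abundant human labor
FOOD_NEEDED = 200.0      # food that must be produced either way

# Output per hour: the AI is absolutely more productive at both tasks.
AI_DESIGN, AI_FOOD = 50.0, 5.0
HUMAN_DESIGN, HUMAN_FOOD = 1.0, 2.0

# Scenario A: no humans, so some of the capped AI hours must go to food.
ai_food_hours = FOOD_NEEDED / AI_FOOD                            # 40 hours on food
design_without_humans = (AI_HOURS - ai_food_hours) * AI_DESIGN   # 3,000

# Scenario B: humans grow the food; every AI hour goes to chip design.
human_food_hours = FOOD_NEEDED / HUMAN_FOOD                      # 100 hours, within budget
assert human_food_hours <= HUMAN_HOURS
design_with_humans = AI_HOURS * AI_DESIGN                        # 5,000

print(f"Chip-design output, AI only:         {design_without_humans:,.0f}")
print(f"Chip-design output, humans employed: {design_with_humans:,.0f}")
# Because compute is capped, it still pays to trade with humans -- the
# standard comparative-advantage result being pointed to here.
```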

> But from a long-term perspective, wouldn't the overriding consideration be to invest resources in increasing our ability to turn energy into compute?

I don't know; that depends on technological considerations. But what it does mean is that if we want to stop this from happening, and protect human jobs, we don't need a ton of different regulations; we only need to limit the amount of energy that goes to data centers. This is the one simple super-regulation that will keep AI from crowding humans out of the resource market.

> I don't see how Rock's Law applies. Sure, the cost of individual fabs continues to rise, but aggregated across the global economy we would presumably still manage to build new fabs? Or if you mean that eventually we'd reach some limit where the aggregate demand is insufficient to build even one instance of the next-generation fab... then presumably we'd continue building fabs of the previous generation?

Yep.

> I really would like to understand this, as of course this is an important question for modeling future worlds and setting AI policy.

I think you're on the right track here. The key is that if you want to stop AI from taking over the world, you only have to stop it from taking over the natural resources that humans consume directly (like energy). One way to do this is to limit the electricity that data centers can use. Another neat idea is to nationalize all natural resources and set up an Alaska Permanent Fund type fund to distribute the proceeds to regular people. That would be easier than collectivizing ownership of AI itself, and it would have the same result in the long run.

Best,

Noah

----------
From: Steve Newman
Date: Mon, Mar 25, 2024 at 8:11 AM

Cool – so I think we both agree that:

1. If there is some natural constraint on our ability to efficiently devote resources to the creation and operation of ever-more AIs, then comparative advantage applies and there should be plenty of work for people.

2. Alternatively, we can get the same result by imposing an artificial constraint, such as limiting the amount of energy that goes to data centers.

3. Or we could distribute wealth in some fashion, such as nationalizing natural resources. In this world, people won't necessarily find jobs in the conventional sense, but nor will they need them.

4. If none of the above transpire, there would be a problem (immiseration of some, potentially large, portion of the population). But this should be quite avoidable, per 1/2/3 above.

If this strikes you as a reasonable framing, I'd like to write a followup blog post in which I note that you graciously took the time to discuss this with me and that we both agree with the above. Please let me know if that's OK with you.

(In my post, I will also likely say that I don't expect any serious natural constraint a la #1 above to arise in practice, and that I have concerns regarding income / wealth inequality in some scenarios, but of course I won't put any words in your mouth on those topics.)

Cheers,

Steve

----------
From: Steve Newman
Date: Wed, Mar 27, 2024 at 5:40 PM

Just pinging to check whether you'd be OK with me writing a followup blog post in which I note that you graciously took the time to discuss this with me and that we both broadly agree with the summary in my previous email.

Thanks again for engaging with me on this!

Cheers,

Steve

----------
From: Noah Smith
Date: Thu, Mar 28, 2024 at 10:27 AM

> Cool – so I think we both agree that:
>
> 1. If there is some natural constraint on our ability to efficiently devote resources to the creation and operation of ever-more AIs, then comparative advantage applies and there should be plenty of work for people.
>
> 2. Alternatively, we can get the same result by imposing an artificial constraint, such as limiting the amount of energy that goes to data centers.
>
> 3. Or we could distribute wealth in some fashion, such as nationalizing natural resources. In this world, people won't necessarily find jobs in the conventional sense, but nor will they need them.
>
> 4. If none of the above transpire, there would be a problem (immiseration of some, potentially large, portion of the population). But this should be quite avoidable, per 1/2/3 above.

Sounds right to me! But keep in mind that (4) also only happens if AI fully replaces human skills across the board. If there are still skills only humans can do, then we don't even need 1/2/3 to keep being valuable.

> If this strikes you as a reasonable framing, I'd like to write a followup blog post in which I note that you graciously took the time to discuss this with me and that we both agree with the above. Please let me know if that's OK with you.

Of course! Feel free to include any of the text of this email exchange, or my comments in the Google doc, if you think that would be at all helpful!

Best,

Noah 

----------
From: Steve Newman
Date: Thu, Mar 28, 2024 at 10:33 AM

> But keep in mind that (4) also only happens if AI fully replaces human skills across the board. If there are still skills only humans can do, then we don't even need 1/2/3 to keep being valuable.

Thanks, important call-out.

>> If this strikes you as a reasonable framing, I'd like to write a followup blog post in which I note that you graciously took the time to discuss this with me and that we both agree with the above. Please let me know if that's OK with you.

> Of course! Feel free to include any of the text of this email exchange, or my comments in the Google doc, if you think that would be at all helpful!

Much appreciated (and, should it ever be relevant, feel free to quote me as well).

I really enjoyed getting to chat with you on this. I'm going to put together a blog post on the value of Actually Talking To People.

Cheers,

Steve

----------
From: Noah Smith
Date: Thu, Mar 28, 2024 at 6:03 PM

Sounds great! Thanks! :-)

Best,

Noah