5 Comments

You have written “it’s impossible to grasp the sheer number of edge cases until you’re operating in the real world.” Won’t edge cases be enough to keep us humans occupied?

author

The hypothetical situation we're discussing, as framed by Noah Smith, is that AIs are "better than humans at every conceivable task". That implies that AIs have already tackled all of the edge cases, as well.

It's perfectly legitimate to envision a scenario where AIs are mostly great at most things, but fall down on some edge cases. But that's a *different* scenario, based on competitive advantage (being better at edge cases), rather than comparative advantage.

Of course in practice, for some time to come we are likely to be on a complicated, bumpy ride where AIs are straight-out superior at some things, hopeless at other things, and a messy middle set of tasks or jobs where AIs are mostly superior – sometimes astronomically so – but also suffer from edge cases.
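The distinction between comparative and straight-out superiority is the classical Ricardian one, and a toy numeric sketch (all numbers invented for illustration) shows how a human can retain a comparative advantage even when an AI is absolutely better at every task:

```python
# Toy Ricardian example (hypothetical numbers): the AI is absolutely
# better at both tasks, yet the human still has a comparative advantage.
ai = {"widgets_per_hour": 100, "essays_per_hour": 100}
human = {"widgets_per_hour": 1, "essays_per_hour": 5}

# Opportunity cost of one essay, measured in widgets forgone.
ai_cost = ai["widgets_per_hour"] / ai["essays_per_hour"]          # 1.0 widgets
human_cost = human["widgets_per_hour"] / human["essays_per_hour"]  # 0.2 widgets

# The human's opportunity cost for essays is lower, so under classical
# comparative advantage the human specializes in essays even though the
# AI writes them 20x faster.
assert human_cost < ai_cost
print(f"AI essay cost: {ai_cost} widgets; human essay cost: {human_cost} widgets")
```

The scenario in the post is precisely the one where this logic is claimed to break down, which is what the rest of the thread argues over.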


Our comparative advantage, while it lasts, is our manual dexterity.


"It took me like two minutes to come up with some obvious holes in this idea:"

I'm not sure these are holes, at least the way Smith argues it.

"Lower quality work": Yes, humans will do lower-quality work, but will also be much cheaper. Sure, if you want AIs that are better, you can pay for them, but the lost opportunity cost makes them prohibitively expensive, as I will expand on below.

"Transactional overhead": Yes, AIs are better here, but the cost puts them out of reach.

Why are AIs so expensive? The comparative advantage argument finds that all the available AIs are doing R&D work in fusion, quantum computing, neural linking, curing disease, climate control, etc. To pull them away from that requires paying their proprietors enormous amounts of money, equivalent to the opportunity cost that cheaper energy and compute would represent to a future economy -- trillions of dollars? AIs become astronomically expensive exactly because they're so smart.

So the likely scenario seems to be that an energy and compute budget is mandated by law for humans at a level just below super-intelligence -- so plenty of energy and compute for everything we need. This doesn't constrain the AIs because they are constantly discovering new energy sources and building faster compute, so what's left to humans shrinks as a percentage of the total year by year.

"Adaptation": Yes, this is a challenge, but Smith also grants from the beginning that continuous diversification is a historical given. There will be farmers and horse trainers out of work as an economy changes. But it is not possible for everyone to be out of work, because there is always something humans can do to free up an AI for more important work.

But the more important counter to adaptation problems is that the AI world is fabulously wealthy, with cheap energy and compute, so there is no need to work: the basics of life are free, and people who don't want to work won't need to.

However, like you, I have similar concerns about humans becoming irrelevant. Very rapidly in the scenario above (only a few decades?), we are no longer in control of AI, and AI is effortlessly in charge. Smith seems to recognize that his scenario only takes us to that point. Once AI calls the shots, it's anyone's guess what happens next.

"If anyone reading this is connected with Noah Smith, I’d love to get his take."

Uh, you saw his response to Zvi. Your post isn't even close to the level of deference and humility required for a Smith response! (smile)

author

As things turned out, I did connect with Noah, he was gracious enough to have a discussion with me, and we came to a meeting of the minds. I've been on vacation for the last couple of weeks, but will be writing a blog post about this shortly.

> "Lower quality work": Yes, humans will do lower-quality work, but will also be much cheaper. Sure, if you want AIs that are better, you can pay for them, but the lost opportunity cost makes them prohibitively expensive, as I will expand on below.

The problem with this model is that if there is some finite resource needed by both humans and AIs, then quickly humans become unable to pay for that resource. And it seems inevitable to me that such resources will exist: for instance, energy, raw materials, land. So, absent policy interventions, the salary that a human could command won't be sufficient to keep them fed, housed, etc.
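To make the finite-resource point concrete, here's a toy sketch (all numbers invented for illustration) of how bidding against AIs for a shared resource like energy could price a human's labor below subsistence:

```python
# Hypothetical numbers: AIs and humans both bid for the same energy.
# The market price of energy is bid up toward what the marginal AI user
# would pay, and the human's output can no longer cover survival needs.
ai_output_per_kwh = 1_000_000   # value units an AI produces per kWh
human_output_per_kwh = 10       # value units a human produces per kWh
human_survival_kwh = 20         # kWh/day a human needs (food, housing, etc.)

# Energy price settles near the AI's marginal product.
energy_price = ai_output_per_kwh  # value units per kWh

# A full 24-hour day of human work buys this much energy:
human_daily_earnings = human_output_per_kwh * 24
affordable_kwh = human_daily_earnings / energy_price

print(f"A human's daily earnings buy {affordable_kwh:.6f} kWh "
      f"of the {human_survival_kwh} kWh they need")
```

Under these (invented) numbers the human can afford a tiny fraction of a kilowatt-hour per day, which is the sense in which comparative advantage alone fails to keep people fed absent policy interventions.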

> So the likely scenario seems to be that an energy and compute budget is mandated by law for humans at a level just below super-intelligence

Now we're not talking about comparative advantage, we're talking about policy interventions to maintain human welfare in a world where comparative advantage is insufficient for people to support themselves.

> But the more important counter to adaptation problems is that the AI world is fabulously wealthy, with cheap energy and compute, so there is no need to work: the basics of life are free, and people who don't want to work won't need to.

Yes, assuming various things go well, in principle there are policy choices we could make that would lead to this outcome – but, again, this is no longer an argument about comparative advantage, which is all I was addressing in this post.
