4 Comments

Humans are pretty limited creatures. We only have so much brain power. But there are individuals who make disproportionate contributions in their fields. They largely do so (I suspect) by building mental models that better match the world, GIVEN THE SAME DATA AS EVERYONE ELSE. An AI that can read every astrophysics paper and meaningfully integrate that data is (I'm guessing) better able to identify errors in our existing knowledge, and better able to synthesize all of it.

And likewise, I suspect, it will be possible to build better AI models that need less data. The existing training method is fairly ridiculous. We may or may not get to human-level AI with our existing methods, but a very early thing for a marginally super-human AI AI-researcher to do would be to fix that. There is no inherent reason we need to scrape the entire internet for data to train an LLM; that's just a current limitation.

Between these two factors alone, it seems like there is a LOT of room for recursive improvement and exponential growth.

I can't speak for Eliezer, but if an AI is able to recursively self-improve to 20x or 100x human level (whatever exactly that means) over the sort of timescales he means (days to a few months), then even if the longer-term curve is still a sigmoid, that should count as FOOM, with all the attendant x-risks.
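To make the "still a sigmoid, still a FOOM" point concrete, here is a rough sketch (the growth rate and the 100x cap are arbitrary assumptions, not anything from the thread): a logistic curve with the same initial growth rate as a pure exponential still races through the 20x-100x range over a few weeks before it flattens out.

```python
# Rough sketch (arbitrary parameters): early logistic growth is nearly
# indistinguishable from pure exponential growth, so an eventual ceiling
# doesn't rule out a foom-like phase along the way.
import math

def exponential(t, r=0.1):
    return math.exp(r * t)

def logistic(t, r=0.1, cap=100.0):
    # Same initial growth rate r, but saturating at `cap` (e.g. "100x human level").
    return cap / (1 + (cap - 1) * math.exp(-r * t))

for t in (0, 30, 60, 90, 120):
    print(f"day {t:3d}: exponential {exponential(t):9.2f}   logistic {logistic(t):7.2f}")
```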

Sep 30, 2023 · Liked by Steve Newman

I can't say whether "AI" in general will foom, but I'm pretty sure LLMs can't.

My (maybe flawed?) reasoning is this:

LLMs are trained using a loss function. The lower the loss, the closer the model is to the "perfect" function that represents the true average over the many-dimensional space of the training set.

Once it's close to zero, how is it going to foom? It can't. Diminishing returns.
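As a toy illustration of that diminishing-returns intuition (a sketch with made-up numbers, not anything from the thread): the cross-entropy loss LLMs are trained on is bounded below by the entropy of the data itself, so as the model approaches that floor, each further improvement buys less and less.

```python
# Toy sketch (illustrative numbers only): cross-entropy loss can't go below
# the entropy of the data distribution, so gains shrink as a model approaches
# that floor -- diminishing returns rather than runaway improvement.
import math

def cross_entropy(p, q):
    """Cross-entropy H(p, q) of model distribution q against true distribution p."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

true_dist = [0.7, 0.2, 0.1]                  # hypothetical next-token distribution
floor = cross_entropy(true_dist, true_dist)  # entropy of the data = irreducible loss

# Models that put progressively more probability mass in the right places:
models = {
    "weak":   [0.40, 0.35, 0.25],
    "better": [0.60, 0.25, 0.15],
    "strong": [0.69, 0.20, 0.11],
}
for name, q in models.items():
    gap = cross_entropy(true_dist, q) - floor
    print(f"{name:7s} loss gap above the floor: {gap:.4f} nats")
```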

Author

Agreed, if we ever get a foom, it will be from something that is not just learning to imitate humans.


You might find some of the figures in the relevant part of this episode useful:

https://www.dwarkeshpatel.com/p/carl-shulman
