3 Comments

As a non-technical reader, my only feedback is to compliment you on your engaging, rigorous and methodical analysis of the AGI landscape. Most commentators are vibe-based, which gets increasingly irritating each week as the field matures and the landscape becomes more granular.

Great analysis as always.

1) Given that frontier lab insiders likely know more about this than the rest of us, what do you think of some of their comments that imply a shorter timeline? Do we assume they are drinking the Kool-Aid?

https://x.com/McaleerStephen/status/1875380842157178994

https://x.com/shuchaobi/status/1874988923564564737

https://x.com/OfficialLoganK/status/1873768960975671296

https://x.com/polynoamial/status/1855037689533178289

2) Another possible topic for a future post: your thoughts on an idea similar to Leopold's shortcut of "we just need to build AI ML researchers." The idea is that we just need AI to help us create Make/N8N flow charts (rather than raw code, so less technical humans can easily review them) built from as many deterministic parts as possible (e.g. code), with AI reserved for the non-deterministic parts, broken into small steps that an AI is likely to get right. If we can get workflows that are reliable and can automate many white-collar jobs, that alone might be a major societal disruption. Basically, "we just need AI to help us write automation scripts that also call AI as needed."
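
For concreteness, here's a minimal sketch of the pattern I have in mind, in Python rather than a visual flow-chart tool. Everything in it is hypothetical (the `call_llm` stub and the invoice example aren't the API of Make, n8n, or any real model provider): the fuzzy extraction step is handed to a model, and the routing decision stays as plain, reviewable code.

```python
# Hybrid workflow sketch: deterministic code for everything auditable,
# with the model called only for a small, well-scoped non-deterministic step.
# All names here (call_llm, the invoice example) are hypothetical.
import json

def call_llm(prompt: str) -> str:
    # Stand-in for whatever model client the workflow tool wires in.
    # A canned reply keeps the sketch self-contained and runnable.
    return '{"total": 1234.56}'

def extract_invoice_total(raw_email: str) -> float:
    # Non-deterministic step, kept narrow so the model is likely to get it right.
    reply = call_llm(
        'Return only a JSON object {"total": <number>} with the invoice total '
        "mentioned in this email:\n" + raw_email
    )
    return float(json.loads(reply)["total"])

def route_invoice(total: float) -> str:
    # Deterministic step: plain code that a non-technical reviewer can audit.
    return "needs_manager_approval" if total > 1000 else "auto_approve"

if __name__ == "__main__":
    decision = route_invoice(extract_invoice_total("Invoice attached, total due $1,234.56"))
    print(decision)  # -> needs_manager_approval
```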

1) It's a good question. I'll note that none of the four tweets you linked mention a specific timeline. The general vibe from insiders seems to be that AGI is locked in, but they *mostly* don't commit to a concrete short timeline, and I'm not sure I've ever seen someone credible from inside a big lab say, like, "high probability of AGI in <3 years" while being clear that they're using a definition of AGI similar to mine. I'm sure it's been said somewhere, but mostly people are vague on the timeline, the definition of AGI, or both. So the inside view may not be that different from my "fast path" scenario.

The other thing I'll note is that while insiders know things we don't, they are also a non-representative sample: believing in short timelines is likely to be highly correlated with working at a frontier lab. And then they find themselves in a bubble with other people who have short timelines. See https://www.aisnakeoil.com/i/153318545/lets-stop-deferring-to-insiders.

2) My intuition is that this sort of thing won't work: in the statement "we just need to build AI ML researchers", the word "just" is hiding the entire problem. I think a lot of jobs (including but not limited to AI research) will turn out to be "AGI complete", meaning that you can't solve them with a combination of deterministic code and 2025-era LLMs; you need a more general intelligence. This is essentially what I was getting at in my previous post (https://amistrongeryet.substack.com/p/defining-agi).
