The Important Thing About AGI is the Impact, Not the Name
Reality Doesn't Care How We Interpret the Words "General Intelligence"
I’m working on a post about the timeline until AGI, and one piece of feedback I received is that I need to explain what I mean by “AGI”.
Debates about terminology are usually boring, but the definition of AGI actually gets at something important. As AI capabilities advance, there will be an inflection point where the impact increases steeply, and most of the things people refer to as “AGI” will occur somewhere close to that inflection point. In other words: there’s no need to argue about how to define “AGI”, because most useful definitions amount to the same thing.
In this post, I’m going to propose a specific definition of AGI which I think is particularly useful when estimating timelines. I’ll argue that it’s roughly equivalent to most of the alternative definitions in common usage. It also tells us when AI will start to have transformative effects.
There Are Many Ways in Which AI Might Change the World
People have speculated about many different ways in which AI could have a transformative effect on the world. Readers of this blog are almost certainly familiar, but for clarity (and newcomers) I’ll briefly describe the sorts of things I’m referring to.
One idea is that AI may lead to incomprehensible economic growth. It could unlock rapid progress in science and engineering, do jobs more cheaply than human workers, and allow companies and governments to make more effective decisions. In recent history, world per-capita GDP has been doubling roughly every 40 years[1]. Some people argue that advanced AI will enable GDP to double (or more) in a single year, which I’ll term “hypergrowth”. Ten years of hypergrowth would result in per-capita GDP increasing by a factor of 1000. A family that currently survives on $2 per day would be taking home $730,000 per year[2].
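To spell out the arithmetic (a quick sanity check of the figures above, nothing more):

```python
# A family surviving on $2/day earns about 2 * 365 = $730 per year.
family_income = 2 * 365

# Ten years of hypergrowth: GDP per capita doubles every year,
# so incomes rise by a factor of 2**10 = 1024 (roughly 1000x).
factor = 2 ** 10
print(family_income * factor)  # 747520 -- in line with the ~$730,000 above

# For contrast, at the historical rate (one doubling per ~40 years),
# ten years of growth raises incomes by only about 19%:
print(2 ** (10 / 40))  # ~1.19
```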
Another idea is that AI may lead to catastrophic risks. It might help someone stage a devastating cyberattack or engineer an ultra-deadly pandemic; it might allow a despot to achieve unbreakable power over a country or even the entire world; an AI might escape our control and destroy all human life.
People speculate that AI might render humans obsolete, at least as economic workers. It might create an end to scarcity, allowing everyone to live in luxury (if we choose to distribute the spoils widely). It might enable science-fiction technologies such as a cure for aging, rapidly expanding space colonies, interstellar travel, and nanotech. Some envision a Singularity, in which progress becomes so rapid that it’s impossible to even speculate as to what might ensue.
I assert that all of these things will become possible at about the same time. The implication is that it’s useful to have a term which refers to that moment. But first, let me clarify what I mean by “become possible at about the same time”.
I Said “Become Possible”, not “Definitely Happen”
I am not saying that all of the proposed implications of advanced AI will definitely come to pass. Hopefully we’ll manage to avert the catastrophic risks. The increasing difficulty of finding further technological advances might rule out a singularity. A “cure for aging” might turn out to be impossible. People might prefer interacting with other people, such that humans do not become economically “obsolete”.
When I say that the things I listed will “become possible at about the same time”, I mean that the level of AI capability needed to unlock incomprehensible economic growth is roughly the same as the level needed to engineer a truly devastating pandemic, allow AI to take over the world, or enable rapid space colonization. Other factors will then determine which consequences actually ensue, and on what schedule. For instance, hopefully we won’t immediately experience an engineered pandemic. But if we avoid one, it will be because no one tried to engineer it, or they were caught in the act, or we’ve improved our biodefenses. It won’t be because another 20 years of AI progress were necessary first.
I’m making a strong claim here. Why do I believe that all of these disparate scenarios require roughly the same level of AI capability?
There’s a Reason We Talk About Artificial General Intelligence
People argue as to whether economic hypergrowth is possible even in principle. But it’s almost certainly not possible unless AI is able to automate essentially all economic activity. Automating half of the jobs won’t have a profound effect; demand for the other half will expand until we reach a new, fairly conventional equilibrium. (After all, this has happened many times before; it wasn’t all that long ago that most people worked in agriculture or simple crafts.)
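One way to see why partial automation caps out: this is essentially Amdahl’s law from computing applied to the economy (a toy analogy I’m supplying here, not a formal economic model). If the automated and un-automated tasks are complements, the un-automated remainder bottlenecks total output:

```python
def max_speedup(automated_fraction: float) -> float:
    """Amdahl's-law-style bound: even if the automated fraction of work
    becomes infinitely cheap and fast, overall output is limited by the
    work that still has to be done the old way."""
    remaining = 1 - automated_fraction
    return 1 / remaining if remaining > 0 else float("inf")

print(max_speedup(0.5))   # 2.0  -- automating half of all work at best doubles output
print(max_speedup(0.95))  # ~20.0
print(max_speedup(1.0))   # inf  -- only near-total automation removes the bottleneck
```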
So, hypergrowth requires AI that can do “essentially everything”. It also requires that AI be able to adapt to the job, rather than requiring jobs and workflows to be restructured around AI. Otherwise, AI will diffuse into the economy at a rate similar to past technologies – too slowly to lead to sustained hypergrowth. Hypergrowth requires AIs to be general enough to do more or less everything people can do, and flexible enough to fit into the circumstances in which people were doing it.
What about space colonization? Establishing a self-sustaining colony will be an incredibly massive and difficult undertaking. Modern human society relies on a vast array of products and services, from flatscreens to advanced pharmaceuticals. Replicating the global supply chain would be difficult enough; doing it while keeping people alive and comfortable in the harsh environment of outer space (or the surface of another planet) will be even harder. The challenge is reduced considerably if we can pave the way with a robotic workforce (simpler needs, more environmental tolerance, no need for safety margins) and AI-driven advances in technology. Highly general AI unlocks space colonization; narrower progress – say, robots which can undertake some of the more dangerous work under human supervision – probably won’t be enough.
What about that engineered pandemic? A “conventional” pandemic, on the level of COVID-19 or the 1918 “Spanish” flu, could plausibly be achieved with the assistance of an AI not much more capable than what we have today. (Some human experts might manage this without any help from AI at all; fortunately, if unsurprisingly, none of them seem to want to.) But I mentioned an “ultra-deadly pandemic”. By this, I mean something with a high fatality rate (much higher than 50%), that spreads so rapidly and undetectably as to outrun or overwhelm even a stringent lockdown.
It’s easy to imagine such a virus in concept: one that has a fatality rate like Ebola or untreated AIDS, spreads like Omicron, and perhaps has a longer incubation period (so as to delay initial detection). However, successfully designing, engineering, refining, and testing such a virus – without being detected – would be incredibly difficult. The feedback loops that determine how a virus affects its hosts and spreads through the population are unfathomably complex; for instance, a modification which makes it easier for the virus to bind to human cells might make victims feel sicker (and thus more likely to stay home). The real-world impact of a virus depends on everything from cellular processes to the politics of pandemic response. AI models that “merely” advance our understanding of protein interactions and metabolic pathways are unlikely to be sufficient to enable an engineered hyper-pandemic. If such a thing is possible at all, it will require a team of evil geniuses, presumably consisting of the same sort of broadly-competent AIs that would be needed for space colonization or 100% GDP growth.
I’m not going to bother talking through the other potential impacts of AI that I listed earlier, such as an end to scarcity or the possibility of AI takeover. My claim is that each of these scenarios is “AGI complete”: sufficiently complex as to require a breadth and depth of AI capability that would also enable economic hypergrowth. I also claim that an AI which can perform virtually all economic tasks would be sufficient to enable all of these scenarios, unless they turn out not to be possible for reasons unrelated to how capable our AIs are.
Why should it be that all of these very different possibilities are enabled by roughly the same level of AI capability?
Threshold Effects
Last month, I mentioned Dean Ball’s post about “threshold effects” – the idea that gradual progress in a technology can result in a sudden change in impact, when a particular threshold of capability is reached:
Dean Ball recently wrote about threshold effects: new technologies don’t take over the world when they first appear; adoption only takes off once some hard-to-anticipate threshold of usefulness is crossed. Cell phones were a clunky and expensive niche product, and then they were everywhere. Self-driving cars were a research curiosity for decades, and now suddenly Google’s Waymo service is doubling every three months.
AI will start to be widely adopted for a given task only when it crosses the threshold of usefulness for that task. This can happen fairly suddenly; the final step from “not good enough” to “good enough” isn’t necessarily very large.
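To make the threshold dynamic concrete, here’s a toy illustration (the numbers are invented for the example; this is my sketch, not a model from Ball’s post). If each task has a usefulness bar that AI must clear before adoption makes sense, and many bars cluster near the same level, then a small capability gain near that cluster flips many tasks at once:

```python
# Hypothetical usefulness thresholds for seven tasks: AI is adopted for a
# task only once its capability clears that task's bar.
task_thresholds = [0.80, 0.90, 0.95, 0.96, 0.97, 0.98, 0.99]

def adoption(capability: float) -> float:
    """Fraction of tasks for which AI has crossed the usefulness threshold."""
    return sum(t <= capability for t in task_thresholds) / len(task_thresholds)

for c in (0.85, 0.94, 0.99):
    print(c, round(adoption(c), 2))
# 0.85 -> 0.14, 0.94 -> 0.29, 0.99 -> 1.0: a small final step in capability
# flips most tasks from "not good enough" to "good enough".
```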
For all of the truly transformational impacts of AI, I argue that the threshold is one I described earlier:
So, hypergrowth requires AI that can do “essentially everything”. It also requires that AI be able to adapt to the job, rather than requiring jobs and workflows to be restructured around AI.
When AI can perform more or less all economically valuable tasks, and doesn’t require the task to be adapted to suit automation, it will be ready to undertake all of the scenarios I’ve mentioned. Until those conditions are met, the need for expert human assistance will make all of those scenarios infeasible.
Some Details
I’ve glossed over the question of AI undertaking physical work – i.e. robotics. Most of the scenarios I’ve described would require highly capable robots, but one or two (e.g. advanced cyberattacks) probably wouldn’t. That said, the distinction may not be important. The pace of progress in robotics – both physical capabilities and the software to control them – has recently picked up speed. This is not entirely coincidental: the same modern “deep learning” techniques that underlie the current wave of AI are proving useful for controlling robots, and excitement has spurred a wave of new work on physical robot bodies. When AI becomes capable enough to spur economic hypergrowth, within a few years it will also likely plow through any remaining barriers to building capable robots.
Real-world impact will unfurl over a period of at least a few years. Some tasks will become feasible sooner than others. Even once AI can perform most economically valuable tasks, not all companies and governments will react immediately. It will take time to build out enough data centers for AIs to collectively do more work than people, and ramping up production of physical robots may take even longer. When I talk about AGI being achieved, I’ll mean the point where the basic capabilities are there, not when the rollout is complete.
When I refer to AI that can “perform more or less all economically valuable tasks”, I don’t necessarily mean that a single AI system can do all of those tasks. We might wind up creating models that are specialized for different tasks, just as people specialize in different kinds of work. But it will be necessary that creating those specialized models is about as easy as training specialist workers.
Proposed AI timelines often hinge on the onset of recursive self-improvement (RSI): the critical moment at which progress begins to advance rapidly because AIs are accelerating their own development. In an upcoming post, I’ll argue that for RSI to have a major impact, AIs will already need to have highly general capabilities; anything less will only have a limited impact on the rate of AI progress. In other words, I argue that RSI will not be a precursor that leads to AGI; instead, it will unfold alongside other transformational impacts of AGI. (In practice, RSI will probably start to kick in a little bit sooner. But as I’ll argue below, I think this is far enough off that for the moment, we can continue to lump the onset of RSI together with the onset of world-transforming AGI.)
This Is The Useful Definition Of “AGI”
Arguments about AGI often get hung up on exactly what the words “general” and “intelligent” mean. (Remember that AGI stands for Artificial General Intelligence.) Also, AGI is often assumed to mean human-level intelligence, which leads to further debates – the average human? A mid-level expert at the task in question? von Neumann?
All of this might make for very interesting debates, but in the only debates that matter, our opponent and the judge are both reality, and reality doesn’t give a shit about terminology. The question is not what “human-level artificial general intelligence” ought to mean based on a literal reading of the phrase, the question is what concepts are useful for us to assign names to. I argue that the useful concept that lies in the general vicinity of human-level AGI is the one I’ve articulated here: AI that can cost-effectively replace humans at virtually all economic activity, implying that they can primarily adapt themselves to the task rather than requiring the task to be adapted to them.
Because transformational impacts all require AI to have essentially the same broad range of capabilities, we can use a single term to describe the moment at which many different impacts of AI become possible. It won’t really be a single moment: AI advances messily, and the future is unevenly distributed. As we get closer to world-changing AI, we’ll need to differentiate its impact more carefully. But I argue that the scenarios I’ve covered here come close enough together, and that point is still far enough in the future, that it’s reasonable to lump them together for purposes of discussion. (My next post will argue that AGI, as I’m defining it, is not imminent.)
Appendix: My Definition of AGI
Here’s a brief recap that I will link to in the future when referencing the term “AGI”.
I define AGI as AI that can cost-effectively replace humans at more than 95% of economic activity, including any new jobs that are created in the future.
I believe that most of the hypothesized transformational impacts of AI cluster around this point. Hence, this definition of “AGI” captures the point where the world starts to look very different, where everyone will be “feeling the AGI”. In particular, I believe that:
- This definition implies AI systems that can primarily adapt themselves to the work required for most economic activity, rather than requiring that jobs be adapted to them. AIs must be able to handle entire jobs, not just isolated tasks.
- Once AI can handle most knowledge work, highly capable physical robots will follow within a few years at most.
- This level of capability enables a broad range of world-transforming scenarios, from economic hypergrowth to the potential of an AI takeover.
- World-transforming scenarios require this level of AI (specialized AIs generally won’t transform the world).
- Recursive self-improvement will become a major force only slightly before AGI is reached.
- AGI refers to the point where AIs of the necessary capability (and economically viable efficiency) have been invented, not the point where they are actually deployed throughout the economy.
[1] Per ChatGPT o1, in response to the question “What has been the rate of increase in world GDP per capita in recent decades?”: “Estimates vary by data source and precise time window, but a useful benchmark is that over roughly the last three to four decades (e.g., from the early 1980s or 1990s to about 2019), global real GDP per capita has grown by around 1.5–2% per year on average. This figure combines faster-growing emerging economies with slower-growing developed ones, so the rate for specific regions can be substantially higher or lower.”
From the given range of 1.5 to 2% per-capita growth per year, I took a slightly above-midpoint figure of 1.8%. I then applied the Rule of 72 to get a doubling period of 40 years.
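The Rule of 72 is a close approximation here; a quick check:

```python
import math

rate = 0.018  # the 1.8% annual growth figure above

# Rule of 72 approximation: doubling time ~= 72 / (growth rate in percent)
print(72 / 1.8)  # 40.0 years

# Exact doubling time: solve (1 + rate)**t == 2 for t
print(math.log(2) / math.log(1 + rate))  # ~38.9 years
```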
[2] Yes, I am glossing over all sorts of questions, such as how the gains from AI would be distributed. The point is simply that an economy which doubles in size each year leads to a world that is unrecognizably different from today.