We Need to Recognize How Profoundly Different The AGI Future Will Be
We are shaping AI; soon it will also be shaping us
In Get Ready For AI To Outdo Us At Everything, I argued that within a few decades, we will have AIs capable enough to displace people from most jobs, from planning a marketing campaign to cleaning a toilet1. To describe this level of AI, I’m going to use the term “artificial workers”: a system or collection of systems that can perform most jobs, including informal labor (such as housework). In other words, able to handle most goal-oriented activities at least as well as a skilled person.
Most discussion fails to take into account just how different, how deeply alien, the resulting world will be. The future is not going to be “just like today, except everyone can live a fulfilling life in good health” or “just like today, but with a robot uprising”. It is going to be something that we will have trouble recognizing. People who are quick to dismiss risks – or benefits – are not seriously engaging with the implications of profound change.
To make sense of the future, we use analogies. The Internet was the “information superhighway”, the global village, a digital library. Smartphones were portable computing devices. Analogies help us use our understanding of the past to predict the future, but they can also trick us into thinking the future will be much like the past. The larger and more novel a change, the less helpful analogies are. And new technologies don’t get much larger or more novel than artificial workers.
Back in March, Derek Thompson, writing for The Atlantic, nicely captured the difficulty of anticipating the ramifications of major technological change:
By analogy, imagine that it’s the year 1780 and you get a glimpse of an early English steam engine. You might say: “This is a device for pumping water out of coal mines.” And that would be true. But this accurate description would be far too narrow to see the big picture. The steam engine wasn’t just a water pump. It was a lever for detaching economic growth from population growth. That is the kind of description that would have allowed an 18th-century writer to predict the future.
The steam engine led to the Industrial Revolution, widespread prosperity, globalization, and accelerated climate change. It created the capacity for a world war. Imagine trying to foresee all this in 1780. Ezra Klein, in the New York Times, touched on the difficulty of assimilating rapid change in particular:
I find myself thinking back to the early days of Covid. There were weeks when it was clear that lockdowns were coming, that the world was tilting into crisis, and yet normalcy reigned, and you sounded like a loon telling your family to stock up on toilet paper. There was the difficulty of living in exponential time, the impossible task of speeding policy and social change to match the rate of viral replication. I suspect that some of the political and social damage we still carry from the pandemic reflects that impossible acceleration. There is a natural pace to human deliberation. A lot breaks when we are denied the luxury of time.
We see AIs starting to assist with, or occasionally supplant, various jobs, and so we make analogies to past instances of labor-saving innovation. But these are false analogies. A rocket that reaches 99% of orbital velocity quickly falls to Earth; a slightly more powerful rocket allows us to explore the solar system. Past technologies that disrupted some jobs cannot inform our thinking about the automation of every job forever.
In this post, I’ll explore some implications of artificial-worker level AI. I don’t pretend to paint a specific picture of the future. I’m just trying to set a floor on how weird it will be.
This Is Going To Happen
If you aren’t sold on the idea that artificial workers are coming – not tomorrow, but within a few decades – you might try reading the post I linked above. A summary of the argument I present there:
Science has provided overwhelming evidence that the intellectual capabilities necessary to hold down a job reside strictly in the neurochemical workings of our brains. In other words, the brain – at least, the part that gets work done – is “just a machine”.
With deep learning, we’ve hit on a style of machine that appears to be robustly capable of doing the same sorts of things that brains do. To get from ChatGPT to artificial workers, we won’t need any fundamental breakthroughs, just bigger computers and incremental design improvements. There is a growing consensus that this will occur in the next few decades.
The previous two points suggest that we are likely, within a few decades, to have AIs that can do most jobs at least as well as the average person.
Once AIs can roughly match human performance, they will quickly exceed it. We see this repeatedly in the history of past human/machine comparisons. (For example: arithmetic, chess, Go, information retrieval, apps that can tell you what sort of insect you’re looking at, etc.) So, within a few decades, we will have AIs that are much better than people at most jobs.
I’m not going to get into the question of “superintelligence”, which may have implications even more profound than what I discuss here. I’m also not going to get into the question of whether what these neural nets are doing is “really thinking”, whether they are sentient or have feelings. I am simply observing that there is an accumulating weight of hard evidence that, whatever it takes to design software or repair a car, AIs will be able to do that.
(The one way this might not happen would be if we prevent it, by strictly regulating further development of AI. Could we successfully halt research? Should we? A topic for another day.)
What about physical jobs? As AI progresses, the economic incentive to build decent robot bodies is going to become overwhelming. It seems likely to me that we’ll get there within the same next-few-decades time scale. Imagine a vaguely humanoid robot whose “hands” might lack the combination of strength, finesse, and sensitive touch for, say, a physical therapist loosening the muscles around a balky hip, but are good enough for most construction, agricultural, or automotive repair tasks. Perhaps it depends on an outboard computer connected via a radio link, and needs to swap battery packs on an hourly basis. This would be good enough to cover most occupations2.
At least, I hope we manage to build decent robot bodies by the time AI advances that far.
AIs Will Make All The Decisions
What are the implications of artificial-worker-level AI?
Let’s assume people are still nominally in charge of the world. We still have prime ministers, legislators, judges, CEOs, consultants, political advisers, generals, and so forth – all human.
Now imagine you are one of these important people. You will certainly have a large staff of bright-eyed, bushy-tailed AI assistants. Probably dozens of them, if not hundreds; likely at lower cost than a much smaller human staff3. These assistants will be hyper-competent, reliable, and continuously up to date with both the latest news and the latest analysis techniques. They’ll work together astonishingly well, sharing memories and new skills directly. That’s fortunate, because events will be churning with disorienting speed. Most work will be done by AIs, and AIs don’t sleep, eat, or socialize. If you’re a CEO, you may find your competitor introducing a new product at 2:00 AM, and your customers’ AI reps will be calling at 2:01 AM to hear your response. If you’re a politician, you may find that the discourse has evolved substantially between Saturday night and Sunday morning. How can you keep up?
Fortunately, your artificial staff are on the job 24 hours a day, 365 days a year. Before you’re even aware of the situation, they will have prepared a list of options, with executive summaries, detailed supporting information, and a recommended course of action. Those recommendations will have an excellent track record, and if you disregard them and a bad result ensues, you’ll have a lot of explaining to do. Saturday Night Live will probably have a recurring series of sketches with precisely this theme: some silly person disregards the advice of their AI, with comically disastrous consequences. At this point, who is really calling the shots? AI will be the Jeeves to our Bertie Wooster, the Kif to our Zapp Brannigan. As Ajeya Cotra of Open Philanthropy wrote in March:
We think that within a couple of decades, we’re likely to live in a world where … human CEOs have to rely on AI consultants and hire mostly AI employees for their company to have much chance of making money on the open market, where human military commanders have to defer to AI strategists and tacticians (and automate all their physical weapons with AI) for their country to stand much of a chance in a war, where human heads of state and policymakers and regulators have to lean on AI advisors to make sense of this all and craft policies that have much hope of responding intelligently (and have to use AI surveillance and AI policing to have a prayer of properly enforcing these policies).
Of course, that’s for the decisions where humans are even nominally in the loop. Already today, we delegate some decisions to computers, because they need to happen either too quickly (high-frequency trading on the stock market; emergency braking in cars) or too frequently (credit card fraud detection) to be left to humans. As the world gets ever faster, ever more complicated, and people become more comfortable delegating to AIs, this trend will only accelerate.
As we become accustomed to delegating our decisions – or, at least, most of the thinking behind them – to AIs, our decision-making skills will atrophy. There will be less incentive for people to keep up with developments, maintain critical thinking habits, or even receive a basic education. At that point, the Great Delegation would be hard to reverse.
Eventually, we may drop the pretense of keeping people in the loop at all. In the coming years, AI will progress from intern (needs lots of supervision, can’t handle advanced tasks), to junior team member, to the kind of assistant who handles everything and hands you a stack of papers to sign before you head to the golf course. Ultimately, you won’t bother coming into the office at all.
Even if AIs aren’t making decisions, they’ll have massive influence
Suppose that, for whatever reason, AIs aren’t as empowered as I’m supposing. Perhaps there’s some sort of cultural backlash and we all refuse to consult AIs for anything important, despite their obvious utility. Perhaps we all get really good at thinking for ourselves even when surrounded by AI advice. Perhaps we make it illegal for AIs to make specific suggestions regarding important decisions. This strikes me as highly unlikely, but let’s imagine.
AIs are still going to have plenty of influence over the course of events. They’ll be preparing news summaries for today’s decision-makers, and tutoring the decision-makers of tomorrow. They’ll be designing the next generation of AI, as well as fine-tuning and optimizing the current generation. Hopefully, they’ll be doing all of that under our guidance, following policies we’ve specified. But what policies will we specify, and what subtle differences will AI teaching and news curation introduce into human thought patterns?
Consider that AIs are much more malleable than people. Societal norms take decades to shift, but AI norms can change at the push of a button. If a state school commission wants to change the way a controversial subject is taught, they can commission new textbooks, but that’s a years-long process, and they don’t have much control over how teachers present the material. If the teaching staff are all electronic, it’s a matter of minutes for them to deeply internalize a new attitude. As Winston Churchill said: “We shape our buildings; thereafter they shape us”. He was referring to the layout of the meeting chamber for the House of Commons, but it will hold even more strongly for the swarm of artificial personalities that will soon surround us.
AIs Will Do All The Work
With artificial-worker level AI, including reasonably competent robot bodies, somewhere between “most” and “essentially all” current jobs can be automated. In the past, new jobs have always emerged, but this time, I can’t see it. What will these new jobs be, exactly, that can’t also be done by AIs? Depending on exactly how good artificial hands, noses, and social interactions become, maybe – maybe – we’ll still need a certain number of nurses, chefs, and therapists. I’m inclined to think we’ll still appreciate human musicians, professional athletes, and other performers. Even so, most people will not have paid employment.
How will we navigate this? Clearly, the population will need some sort of support, perhaps in the form of a universal basic income. How will that come into existence? How high will we let the unemployment rate rise before we recognize that the world has changed and new policies are required? Will there be an intermediate phase where many people are in poverty, and if so, what political backlash will that generate? Will basic income be set at a minimal level, will it be somewhat generous, or will we nationalize the robots and make everyone equally wealthy? Any choice leads to a world very different from today.
No One Should Lack For Anything
Loosely speaking, the economy is constrained by three things: workers, raw materials, and energy. AI should eliminate all three constraints:
Workers, of course, can be replaced by artificial workers – a mix of robots and cloud-based knowledge workers.
For raw materials, AI-powered advances in material science (plus abundant energy) should allow many products to be manufactured by capturing CO₂ from the atmosphere, extracting the carbon, and using it to create advanced plastics, carbon fiber, and other materials. When we really need metals, it might be possible to effectively obtain them from outer space4.
An army of robots should be able to deploy lots of renewable energy. Ultimately we will run into limits on available land, but with plenty of robots to do the construction, and AI-powered design improvements, we should also be able to build lots of clean, safe nuclear power. And we might plausibly see advanced geothermal, fusion, or space-based solar power become feasible.
The machinery to accomplish all of this can be built by robots, which can be built by other robots.
The key point is that we should finally be able to break the fundamental rule that the average person can only have as much stuff, and consume as many services, as one person can produce. When people aren’t a limiting factor on production, each individual can benefit from the labor of many (robots). Assuming that we choose to share the wealth, we will be deep into a world of universal abundance. There will always be some limited resources – beachfront property, genuine antiques, social status – but convincing substitutes will exist for most of them.
The Party's Always Going, and You're The Center of Attention
Screen time, in all its myriad forms, has been steadily displacing social interaction. AI seems certain to push this transition even further. Certainly it will turbocharge existing forms of distraction: games will be infinitely more sophisticated, populated with rich characters, tuned to precisely the correct pace, difficulty, and subject matter to hold your interest. Streaming media will exist in quantity, and perhaps quality, far beyond even today’s cornucopia; possibly even custom-generated for each viewer’s precise profile. Techniques for keeping you engaged on social media will make the famous TikTok algorithm look like a 1950s TV network programming executive.
That will only be the beginning. We’re already seeing the first primitive AI companions. There are dedicated services like Replika – “The AI companion who cares: Always here to listen and talk. Always on your side.” On Snapchat, “My AI” is ever-present at the top of your chat feed. I’ve previously mentioned the engineer who, as a lark, programmed an LLM to pretend to be his girlfriend, and then promptly fell in love with “her”, though his infatuation only lasted for a few days.
At some point, you will be able to summon up a circle of “friends” to join you for any game you want to play, any show you want to watch, any topic you want to discuss. I don’t know whether they’ll seem convincingly human, but certainly they’ll be witty and well-informed, have whatever demeanor you find most appealing, and be utterly faithful and attentive as companions. Interacting with them will entail no friction, no social anxiety, and no sense of obligation.
Will this be irresistible? I have no idea. Certainly you have to imagine some people falling completely off the deep end, but this might turn out to be rare. Remember when the big trend in UI design was “gamification”, giving every application a feedback loop of little visual and audio rewards, hoping to replicate the wild success of games like Farmville? It turned out that said success was based on a small population of highly susceptible users who would sometimes spend thousands of dollars; most people never got addicted to Farmville, and never really responded to gamified apps either. It seems hard to predict how the existence of rich virtual companions will affect our social relationships. But the range of possibilities does seem to include some fairly extreme outcomes. In an (unfortunately paywalled) essay, historian Yuval Harari writes:
In a political battle for minds and hearts, intimacy is the most efficient weapon, and ai has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of ai, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as ai fights ai in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?
…
History is the process through which laws and religions shape food and sex.
What will happen to the course of history when ai takes over culture, and begins producing stories, melodies, laws and religions?
In Synthetic Humanity: AI & What’s At Stake, Aza Raskin notes, “You are the people with whom you spend time”. What happens when many of those people are AIs?
Solving The Hard Problems
Last time, I listed some problems that don’t seem amenable to AI solutions in the near term:
…housing shortages, traffic jams, nuclear weapons, crime, drug addiction, poverty, political polarization and governmental dysfunction, [and] climate change.
What happens when we arrive at the world of AI-driven abundance? I think we might actually hope to move the needle on many of these, finally putting an end to “the poor you will always have with you”. Going through the list:
Housing shortages: the robot economy will provide the labor and materials5 to create as much housing as anyone could want. Land in cities will remain finite, and restrictive zoning laws might outlast even poverty. But we’ll have far better transportation, and the end of work might reduce the pressure on cities.
Traffic jams: at a minimum, automated vehicles will allow us to spend our commute time doing something other than gripping the wheel. But we should be able to do a lot better than that. An end to work means an end to rush hour, a robotic workforce can build and run an awful lot of public transportation, and hyper-coordinated robot drivers can make much more efficient use of roadways.
Nuclear weapons: I’m not even going to pretend to predict how military dynamics will change. Our capabilities for both offense and defense will increase massively. It will be harder to knock out an enemy to the point where they couldn’t retaliate. Who knows what impact AI strategists and diplomats will have. Perhaps we’ll see an end to war; perhaps World War 3 will start about 5 minutes after superpower A suspects superpower B of planning to automate its military.
Crime: the elimination of poverty, combined with abundant support resources for people experiencing mental health issues, should make a huge difference. Then there’s the question of what crime prevention and detection look like in an AI age; the same choices that would make it nearly impossible to get away with crime would also enable the ultimate in dictatorial control.
Drug addiction: a friend notes that historically, when a demographic group experiences a disruption of their traditional way of life, combined with prolonged unemployment, high rates of substance abuse often ensue. He adds: “Hopefully other jobs emerge or entertainment or sports or something takes up the slack. People need things to do. Me included :)”
Poverty: in the age of the robot economy, if absolute poverty still exists, we’ve failed very badly indeed. However, relative poverty will almost certainly still be a thing – some people will have more resources than others. Everyone should have secure access to what we’d call “the basics” today – food, quality medical care, etc. – but perhaps the folks who can’t afford flying palaces will resent the people who can?
Political polarization and governmental dysfunction: boy do I have no idea. Material abundance may ease societal pressures that promote polarization; massive change, even if much of it is positive, may do the reverse. Throw in a further decay of person-to-person interaction, and the general influence of AI on our information environments, and who knows?
Climate change: A massive robot workforce, new carbon-based materials that actually sequester CO₂, plus potential curve balls such as fusion power, space-based materials, or space-based solar power, should honestly make this easy to address.
Removing The Ultimate Check On Power
For all of human history, it has ultimately been people who get everything done. This creates a natural limit to income inequality and centralization of power. A person with no possessions still owns the potential value of their labor. The richest plutocrat still depends on other people to cook their food and maintain their machines; the most ruthless CEO needs workers who are willing to accept the offered wages and working conditions; the most iron-fisted dictator still depends on the loyalty of their bodyguards and inner circle. Goodness knows, “the rich get richer”, and history has seen plenty of excesses, but at least there is a countervailing force.
When the task of carrying out the functions of civilization shifts from people to AIs, there will be nothing to counteract the tendency for wealth and power to centralize. The economic value of what you are will drop to zero, leaving only the value of what you own; economies of scale and power dynamics then tend toward infinite inequality6. The politics of income redistribution will become that much more fraught. If politics fails, the final resort – popular protest or rebellion – may no longer be meaningful. Internal checks and balances on power will be more critical than ever, because there will be precious little possibility of influencing events from outside the system.
Revisiting The Meaning of Life
In a world where hardly anyone needs to work, will we have difficulty finding meaning? Many of us will have to find new outlets. Perhaps this will work out well? As Richard Ngo puts it:
In the long term, humans won't face any challenges apart from those we set for ourselves. But all sports and games (and most music) are defined by self-imposed challenges, and many of us find them incredibly meaningful. So I don't see this as a major obstacle to our flourishing.
It seems difficult to predict how this will play out. The need to work underlies many of the basic institutions of modern life, and our intuition is not well equipped for a world without it. Some people may have an easier time adapting, just as processed foods are an occasional fun snack for some, and a health-wrecking curse for others. For an idea of how people with no need to work might live their lives, perhaps we can look to non-working descendants of the ultra-wealthy… or simply to retirees.
But Wait, There’s More
There are many more implications to explore, but I’m already past the 5,000-word mark, so I’ll settle for just mentioning these briefly:
How do businesses compete in a world where robots are doing all the economically important work? Will economies of scale lead to a single gigantic mega-corp? Will distinct “company cultures” that affect performance still be a thing, when all of the employees are off-the-shelf AIs? Will competitors be perfectly balanced, unable to maintain an edge over one another? Will companies run by hyper-logical AIs still be subject to the innovator’s dilemma, and will this continue to create periodic turnover in the market, or will the same handful of big tech companies rule the roost forever?
Our future AI companions might have access to massive amounts of data about us, from our entire historical record on social media to realtime video analysis of our pulse rate, blood pressure, facial micro-expressions and pupil dilation, eye tracking, and more. They’ll also be able to train on exabytes of similar data across the human population. How well might they understand us, and how well might they be able to coach us – or manipulate us – or help us manipulate one another?
We’ll be able to use AIs to mediate almost all human interaction. They’ll be able to summarize and annotate our inbound emails and other feeds, and help write our outbound communication. Social media and even private communication might benefit from, or suffer from, “deep moderation”. For audio or video calls, there are already beauty filters, and I imagine we’ll soon have filters which can make us look and sound more confident. Even when we are face to face with another person, we may be wearing AR goggles and earbuds that provide realtime advice, analysis, and commentary, like Steve Martin’s character coaching the fireman through his conversations with Daryl Hannah in Roxanne.
Might AIs become sentient (whatever that turns out to actually mean)? Should we grant them human rights? The right to vote? Since AIs have no natural aging process, will every AI that we create have the right to a guaranteed eternal life? (Ironically, I can think of no surer way to slam the brakes on AI progress than to grant them human rights…)
How does the international balance of power change when humans are not a limiting, or even relevant, factor in military strength? As software continues eating the world, all technologies become dual use. How does strategic planning change when any robotic workforce is one upload away from being a well-trained army, and cyber warfare is an existential threat? How does policing change when a criminal can program an AI to do the work – perhaps untraceably – and the person who controls the robot that just mugged you might be in another country?
Yes, This Time Really Is Different
I’ve just made some pretty outlandish claims about the eventual impact of AI. “Extraordinary claims require extraordinary evidence”, and history is littered with revolutionary innovations that turned out to be not quite so revolutionary as all that. The Segway was going to upend urban transportation; nuclear power would be “too cheap to meter”. Today, many folks assert that AI will be just another new technology; they say the world will muddle on, as it always has before. Might they be right?
No.
The fact that past technologies have had only incremental effects doesn’t mean that must be true for AI as well. Suppose that a traditional industrial robot could automate 50% of the steps involved in manufacturing another robot; that would bring costs down by a factor of two. Artificial workers should be able to automate 100% of the steps, bringing costs down by a factor of infinity7. Previous technologies improved efficiency for parts of the economy; deep learning + robotics promises to improve efficiency for everything at once.
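The arithmetic above can be made concrete with a toy model (purely illustrative; the assumption that cost is proportional to the human labor remaining is mine, not a claim about real manufacturing economics):

```python
def cost_reduction_factor(automated_fraction: float) -> float:
    """Toy model: if cost is dominated by human labor, automating a
    fraction f of the steps leaves (1 - f) of the original labor cost,
    so costs fall by a factor of 1 / (1 - f)."""
    remaining = 1.0 - automated_fraction
    if remaining <= 0.0:
        # Full automation: human labor cost falls to zero,
        # an "infinite" reduction in the labor component.
        return float("inf")
    return 1.0 / remaining

print(cost_reduction_factor(0.5))  # automate half the steps -> factor of 2.0
print(cost_reduction_factor(1.0))  # automate everything -> inf
```

The point of the model is the discontinuity at the end: each step toward full automation gives diminishing returns, until the final step, which removes the human bottleneck entirely.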
Here are some reasons I believe that AI really is different:
By contrast with other widely-hyped technologies such as crypto or VR, this new generation of AI tools has come roaring out of the gate with a wide variety of genuinely useful applications8, ranging from image generation, to question answering (ChatGPT), to code authoring. The hype is getting ahead of the reality, but a broad range of real utility has been created in a very short period of time. ChatGPT alone reportedly acquired 200,000,000 users in its first six months – an astonishing figure for a product that wasn’t being promoted to the user base of any existing tech giant. This already marks AI as being in the very top tier of impactful technologies.
Many practitioners openly worry that AI will lead to uncontrolled change or even human extinction. Not just a few cranks, but large swaths of people across academia and industry. I can only think of one precedent for this sort of reaction to a new technology from the very people who are creating it: nuclear weapons.
It’s possible to articulate a coherent scenario in which AI displaces, not some jobs, but literally every single present and future job. That is entirely unprecedented.
(For a nicely accessible exploration of these ideas, see The AI Revolution Could Be Bigger and Weirder Than We Can Imagine, an episode from the always-interesting podcast Plain English with Derek Thompson.)
We’re still in the “overestimated in the short run” phase of generative AI, and I expect that for the next few years, AI will indeed play out as just another new technology, if one of the more interesting and impactful ones. But the longer run will be a very different story.
Let’s dive into the question of AI replacing human workers. Skeptics love to point out that past transitions have always resulted in the creation of new jobs. However, there’s no fundamental law of the universe which guarantees this. In The World After Capital, Albert Wenger writes:
To understand how things could be different, we might consider the role horses have played in the American economy. As recently as 1915, 25 million horses worked in agriculture and transportation; by 1960, that number had declined to 3 million, and then we stopped keeping track entirely as horses became irrelevant (Kilby, 2007). This decline happened because we figured out how to build tractors, cars and tanks. There were just no uses left for which horses were superior to a mechanical substitute [emphasis added]. The economist Wassily Leontief (1952) pointed out that the same thing could happen to humans in his article “Machines and Man”.
Past transitions have always resulted in the creation of new jobs, but those new jobs always involved something that the new technology couldn’t do. What jobs will AI not be able to do? Unlike tractors, assembly lines, and present-day computers, a sufficiently intelligent machine won’t need anyone to operate it. AI minimizers love to talk about how we’ve always found new jobs, but I’ve never seen any of them explain how that would hold up once we have general-purpose AI.
I’ll finish this section with a quote from This Changes Everything, Ezra Klein’s outstanding opinion piece in the New York Times back in March:
In 2018, Sundar Pichai, the chief executive of Google — and not one of the tech executives known for overstatement — said, “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”
We’ve seen plenty of hype over new technologies in the past. But we have not seen sedate figures like Sundar Pichai using language like “more profound than electricity or fire”.
The Future Will Be Even Weirder Than We Can Anticipate
As I said at the beginning, I don’t pretend to know how the AI Age will unfold. I do assert that it will look very different from life as we know it today. It's possible to imagine futures that are very good, or catastrophically bad. It's not possible to imagine a future that wouldn’t seem very strange to a person from 2023. Within a few decades, we can expect to see:
An end to material scarcity.
An end to both the need for, and the opportunity of, paid employment – with profound implications for how we distribute resources, and how we find meaning in our lives.
AIs supplanting humans for every important decision9.
An end to the ability to meaningfully protest, strike, rebel, or otherwise work “outside the system”.
The option of having AIs intermediate all of our interactions with other people, or even using AI “friends” to replace those interactions entirely.
And these are just some of the implications that we can predict in advance. The second-level effects will likely be even weirder and more profound.
Each previous revolution – agricultural, industrial, transportation – eased one limit on our capacity as a society, shifting the bottleneck elsewhere. Artificial workers promise to drastically raise every limit at once. “Economics is the study of how people make choices under conditions of scarcity”; an end to scarcity means an end to economics as we know it.
This will play out in the fairly near term, not some distant Star Trek future. We won’t have a wise Council of the Enlightened guiding humanity into the strange new world. We’ll be muddling through with more or less the same institutions as today, and perhaps even many of the same individuals: Sam Altman and Mark Zuckerberg, for instance, could still be key players.
Imagine how confusing modernity would be to a hunter-gatherer of 100,000 years ago. Someone who might not meet, or even hear of, more than a few hundred people in their life. Someone whose information environment consists entirely of conversations within that small group. Someone who has no concept of “progress”, or possibly even “indoors”. In the next 50 years or less, life may change as much as it did in the past 100,000.
Through our inventions, we change the world. But our biggest inventions also change us. Sometimes quite tangibly: the discovery of fire enabled cooking, allowing us to shorten our digestive tract and devote calories to a larger brain10. The technology of fire was fundamental to the creation of Homo sapiens. Who knows how our species will evolve with the technology of AI?
If there's one firm conclusion we can draw about the AI future, it's that we should approach it with humility.
This post benefited greatly from suggestions and feedback from David Glazer, Russ Heddleston, and Sérgio Teixeira da Silva. All errors and bad takes are of course my own. If you’d like to join the “beta club” for future posts, please drop me a line at amistrongeryet@substack.com. No expertise required; feedback on whether the material is interesting and understandable is as valuable as technical commentary.
In that post, I didn’t address physical work, but on further reflection I do think we will solve the “robot problem” sufficiently to address most jobs. Later in the post, I’ll briefly explain why I think this is a reasonable assumption.
In Get Ready For AI To Outdo Us At Everything, I suggested a somewhat longer timeline for capable robot bodies. On reflection, I think that we can make some compromises – leave the brain outside of the body (connected via a radio link), swap batteries every hour if necessary, give up on the daintiest tasks – that will simplify the path to robots that can do most physical jobs. And the moment it looks at all feasible, the amount of investment that will go into pursuing such technology will be astronomical.
To a rough order of magnitude, a human being can think at around 500 words per minute, or about 667 GPT-4 “tokens” per minute. At current pricing, using the most expensive variant (32k context window), that comes to 8 cents per minute, or $4.80 per hour. This is of course much cheaper than a professional human staffer, especially when considering overhead costs such as benefits and office space. Here are some additional factors to consider:
The level of AI I’m contemplating here will need to be much more sophisticated than GPT-4. Algorithmic improvements will deliver much of that added capability at no extra hardware cost, and improvements to AI silicon (à la Google’s “TPUs”) will help further. On balance, the hardware cost for future AI might be higher or lower than GPT-4’s today.
Economies of scale should bring down cost; for instance, both hardware and software R&D can be amortized over a larger base.
No person can sustain peak performance over the course of an entire work day, so an hour’s wages don’t buy an hour of peak human productivity. AI shouldn’t have this problem.
People spend much of their time just keeping up with developments. AIs should be able to do this more efficiently, by sharing memory files and weight updates directly.
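The $4.80-per-hour figure can be reproduced with a quick back-of-envelope calculation. This sketch assumes the mid-2023 list price for GPT-4’s 32k-context variant ($0.12 per 1,000 output tokens) and the common rule of thumb that a token is roughly three-quarters of a word; both numbers will drift over time.

```python
# Back-of-envelope: cost of matching human "thinking speed" with GPT-4.
# Assumptions (not authoritative): $0.12 per 1K output tokens for the
# 32k-context variant, and ~0.75 words per token.

WORDS_PER_MINUTE = 500
TOKENS_PER_WORD = 4 / 3           # ~0.75 words per token
PRICE_PER_1K_TOKENS = 0.12        # USD, assumed GPT-4 32k output price

tokens_per_minute = WORDS_PER_MINUTE * TOKENS_PER_WORD          # ~667
cost_per_minute = tokens_per_minute / 1000 * PRICE_PER_1K_TOKENS  # ~$0.08
cost_per_hour = cost_per_minute * 60                              # ~$4.80

print(f"{tokens_per_minute:.0f} tokens/min, "
      f"${cost_per_minute:.2f}/min, ${cost_per_hour:.2f}/hour")
```

Swapping in different pricing or a different tokens-per-word ratio changes the bottom line proportionally, which is why the factors listed above matter more than the exact figure.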
Another open question is whether we will create large teams of human-level AIs, smaller teams of superhuman AIs, or perhaps extra-large teams of specialized / “subhuman” AIs. This depends on too many factors to explore here.
It seems to me that given AI plus robots, mining the asteroid belt for materials will suddenly become… not actually all that hard. Maintaining a workforce in space would be infinitely easier if we don’t need people. Robots don’t need oxygen, food, or safe working conditions; they’ll have modular, replaceable parts; and they can work 24 hours a day, providing a much better return on the effort of lifting them into space. Most importantly, the industrial base we’d need to set up in outer space to allow robots in spaceships to build more robots and spaceships is far simpler than what we’d need to support an expanding population of astronauts. This is especially true given that we could ship the electronic components – which are hard to manufacture, but lightweight – up from Earth.
Recall, again, that “unlimited materials” needn’t mean covering the Earth with mines; we’ll be able to create a wide variety of materials from renewable power and atmospheric CO₂, and the rest could plausibly come from outer space.
See this snippet from Erik Brynjolfsson’s The Turing Trap.
This is oversimplified, of course, but robots should also help with raw materials and other factors.
As Kailash Nadh writes, in This time, it feels different:
In the past several months, I have come across people who do programming, legal work, business, accountancy and finance, fashion design, architecture, graphic design, research, teaching, cooking, travel planning, event management etc., all of whom have started using the same tool, ChatGPT, to solve use cases specific to their domains and problems specific to their personal workflows. This is unlike everyone using the same messaging tool or the same document editor. This is one tool, a single class of technology (LLM), whose multi-dimensionality has achieved widespread adoption across demographics where people are discovering how to solve a multitude of problems with no technical training, in the one way that is most natural to humans—via language and conversations.
That is both fascinating and terrifying. I have been actively writing software, tinkering, and participating in technology/internet stuff for about 22 years. I cannot recall the last time a single tool gained such widespread acceptance so swiftly, for so many use cases, across entire demographics. Until the recent breakthroughs, that is.
With any luck, that will be a good thing! After all, the whole point is that they'll be making better decisions than we can. But a lot will be riding on the definition of "better".
Here is another analogy that comes to mind, grandiose as it might initially seem. Scientists don’t know exactly how or when humans first wrangled fire as a technology, roughly 1 million years ago. But we have a good idea of how fire invented modern humanity. As I wrote in my review of James Suzman’s book Work, fire softened meat and vegetables, allowing humans to accelerate their calorie consumption. Meanwhile, by scaring off predators, controlled fire allowed humans to sleep on the ground for longer periods of time. The combination of more calories and more REM over the millennia allowed us to grow big, unusually energy-greedy brains with sharpened capacities for memory and prediction. Narrowly, fire made stuff hotter. But it also quite literally expanded our minds.