AI Could Break Things; Let's Use It As a Wake-Up Call To Make Them Stronger
Stop Treating AI Policy Tradeoffs As Zero-Sum
AI policy involves difficult tradeoffs. Too often, we treat these as a tug-of-war, fighting to privilege one goal at the expense of another.
We’ll accomplish more if we redirect some of that energy toward actions that alleviate the tradeoffs. There are many constructive steps we could take to address concerns around AI – without slowing progress. In fact, many policies would directly benefit society even as they reduce AI risks!
To begin with, let’s consider concerns where AI might exacerbate problems that already exist in the world.
AI Is New, The Potential Harms Mostly Aren’t
Many concerns people express regarding AI are not novel. The worry is that AI might make an existing problem worse. AIs may assist in cyberattacks, but attacks already occur every day. Models trained on Internet discussion boards might make biased parole recommendations, but flawed parole decisions have been around for a long time. AI will certainly be used to generate spam, but spam is nothing new.
These are real concerns, and I do not mean to minimize them. But precisely because these problems are not new, there are known measures that would help. Not complete solutions, but worthwhile actions that would make a significant difference. Often these actions are neglected – which represents both a problem and an opportunity.
Consider the concern that someone might eventually use an AI to create a new pandemic virus. The mechanism of creation might be new, but the most likely scenarios would result in a familiar sort of virus, one that spreads much like Covid or the flu[1]. Infamously, we spent trillions of dollars coping with Covid, yet we are failing to prioritize relatively cheap measures to reduce the threat of a future pandemic. Improvements to air ventilation and filtration in public spaces would make it more difficult for respiratory viruses to travel from one person to another. Broad-spectrum vaccines would reduce the impact of common viral families such as flu and coronaviruses. Wastewater monitoring would help us quickly identify a new virus, and standby manufacturing capacity would allow us to rapidly deploy tests and vaccines[2]. In combination, these measures could greatly reduce the threat of a future pandemic, whether natural or engineered.
In short, we worry because we live in a world where a respiratory virus can rapidly cross the globe and kill millions of people. But we shouldn’t resign ourselves to living in that world! Rather than arm-wrestling over whether and precisely how to regulate the biological capabilities of AI models, we could push for measures that attack respiratory viruses directly. Not only would this help to loosen one knot of the AI policy tangle, it would address the very real impact of Covid, the flu, and other diseases that kill hundreds of thousands every year.
The idea that we have the power to combat long-standing issues like respiratory viruses is exhilarating. Once you start looking at the world through this lens, you see opportunities everywhere.
Let’s Use AI as a Wake-Up Call
It is an underappreciated fact that we have many viable paths for reducing the burden of respiratory viruses. The same is true for other problems that AI might exacerbate. Action in all of these areas has languished, but the specter of AI may provide an opportunity to change that.
Phone and text spam and fraud are abetted by the ease of spoofing caller ID. Technical measures such as the STIR/SHAKEN caller-ID authentication protocols would make it harder to use a fake phone number, but institutional inertia has delayed full deployment.
Bias in institutional decisions that affect people’s lives is exacerbated by a lack of transparency and by the absence of effective, prompt channels for appealing flawed decisions.
A successful cyberattack sometimes involves “SIM jacking” – tricking a mobile carrier into transferring an employee’s phone number to a SIM card controlled by the attacker, so that the attacker can receive authentication codes intended for the target employee. This could be addressed through tighter procedures at mobile carriers, or by moving away from phone messages as an authentication factor entirely (one alternative is sketched below).
Cybersecurity in general suffers from policies and standards that encourage box-checking over effective security.
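To make that alternative concrete, here is a minimal sketch of app-based one-time codes (TOTP, RFC 6238) in Python, using the open-source pyotp library; the account name and issuer shown are hypothetical placeholders, not anything from this article. Because the shared secret lives on the user’s device and the codes never cross the phone network, hijacking the victim’s phone number gives an attacker nothing to intercept.

```python
# A minimal sketch of app-based one-time codes (TOTP, RFC 6238) as a
# replacement for SMS codes, using the pyotp library (pip install pyotp).
# The account name and issuer below are hypothetical placeholders.
import pyotp

# Enrollment: the server generates a shared secret and hands it to the
# user's authenticator app, typically rendered as a QR code of this URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: the user types the six-digit code currently displayed in their
# app, and the server checks it against the same shared secret. The code
# never transits the phone network, so a hijacked number reveals nothing.
code = input("Enter the code from your authenticator app: ")
print("accepted" if totp.verify(code) else "rejected")
```

Hardware security keys go a step further, but even this simple switch removes the phone number from the authentication path.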
In most cases, what we can hope for are partial solutions. We are not going to eliminate spam, cyberattacks, biased decision making, or (probably) the flu. But perhaps we can harness the renewed attention that AI is bringing to these problems to spur constructive action.
What about the potential for AI to introduce genuinely new problems into the world?
New Problems Have Constructive Solutions, Too
It’s surprisingly difficult to identify genuinely novel concerns raised by AI. Deepfakes are new, but doctored (or simply out-of-context) photographs have been around since long before Photoshop. AI might centralize power in a handful of mega-corporations, but centralized power structures go back to the dawn of history. AI companions can be viewed as a continuation of the trend toward doomscrolling, information bubbles, and online interactions replacing real-life friendships.
One candidate for a genuinely novel problem is the potential end of employment. Past advances have produced special-purpose technologies, each capable of doing some jobs but leaving people to find others. If we eventually develop true AGI (and capable robot bodies), it would by definition be a general-purpose technology, able to subsume all jobs. This would result in a world where most people have no realistic prospect of finding work. Permanent mass unemployment in today’s society would not be a pretty sight, but that is not our only option. We should encourage discussion of ideas such as universal basic income, an automation dividend, or collective ownership of various resources.
Arguably the most frightening concern is the (controversial) possibility of “loss of control”, where a superintelligent AI achieves unchallenged control over the entire Earth. Even this could be viewed as merely an extension of the ancient problem of totalitarian rulers, but no dictator has ever been immortal, been able to conquer the entire world, or had the capacity to directly monitor every one of his subjects for signs of treachery. We have yet to fully understand the circumstances under which a loss of control could take place, let alone find reliable solutions. But there are many productive avenues for research.
It is not clear whether we can fully address (or rule out) novel concerns like loss-of-control. Nor can we count on fully eliminating pandemics, biased decision making, cyberattacks, or other problems that could be exacerbated by AI. But for every one of these issues, there are at least constructive steps that we can be taking.
The Goal Isn't AI Progress or AI Safety; The Goal Is a Better World
There are many debates around AI policy. Positions are often justified by appeals to principles such as progress, safety, or equity. However, none of these principles are absolute. Without progress, safety is stifling; without safety, we might not be around to enjoy progress.
We should not think in terms of “winning” the debate over progress, safety, or equity. A desirable future must satisfy many criteria. If we argue for sacrificing one goal in favor of another, we’ll just be robbing Peter to pay Paul – even as someone else is robbing Paul to pay Peter.
The constructive course is to fight for win-win actions. Every time someone expresses concern that AI might enable bioterrorism, I would love to see them call for improved air circulation or other steps that reduce the threat. Whenever an AI proponent argues that fears of AI-enabled cyberattacks are overblown, I wish they would also help cement the case by pushing the software industry to address long-standing issues with cybersecurity. If you are worried about biased AIs, restrictions on AI capabilities are not the only tool you should be reaching for. If you fear that safety concerns will stifle AI progress, you can help your cause by working to alleviate the real-world problems that give those concerns salience.
We can’t eliminate the tensions over biased AIs, or highly capable AI agents, or other topics of debate. But we can try to reduce those tensions to the point where constructive compromise is possible. There are few courses of action more likely to promote a positive future.
Thanks to Grant Mulligan, Julius Simonelli, Kevin Kohler, Rob Tracinski, Sean Fleming, and Shreeda Segan for invaluable feedback and suggestions.
[1] In many scenarios, it would actually *be* a variant of the flu or Covid viruses.
[2] Summarized from Biosecurity and AI: Risks and Opportunities, which links to further material describing promising avenues for reducing pandemic risk.