One Conversation is Worth a Thousand Angry Takes
Make The Internet Better Using This One Weird Trick
The ocean is wet, the sun is bright, and online discourse heavily features people criticizing one another's viewpoints.
I recently got pulled into such a situation, when econ blogger Noah Smith wrote a post arguing that fears of advanced AI causing mass unemployment are overblown. This seemed obviously wrong to me: when AIs can do everything faster, cheaper, and better than a person... won't we let them? How does that not result in the collapse of employment?
I was, to put it simply, confused and sad. I respect Noah, and didn't understand how we could see things so differently. Is he not as smart or trustworthy as I thought? Should I drop my subscription to his blog? Or maybe I'm not as smart as I thought? Should I stop blogging?
I was also worried, because Noah is a prominent blogger, and was promoting an idea that struck me as dangerously incorrect.
I wrote a rebuttal, but that didn’t really accomplish anything. Well, it let me feel like I'd "done something", but that’s always a suspect motivation.
Then I did what we somehow never do in these situations: I talked to Noah about it.
Our Conversation
I asked my readership whether anyone could connect me with him. It turns out that we know a few people in common, and he offered to chat. I hadn't really expected this to pan out – he's a Famous Blogger, and I'm... an infamous blogger? Not-famous blogger? Anyway, someone he's probably never heard of. But he was gracious enough to engage.
I wrote a summary of my thoughts, he added a bunch of comments, and then we went back and forth a few times over email1. In the end, we determined that we both agree with the following high-level model of job prospects in an age of strong AI2:
1. If there is some natural constraint on our ability to efficiently devote resources to the creation and operation of ever-more AIs [for instance, due to a shortage of AI chips], then comparative advantage applies and people should have plenty of work.
2. Alternatively, we can get the same result by imposing an artificial constraint, such as limiting the amount of energy used for data centers.
3. Or we could distribute wealth in some fashion, such as by nationalizing natural resources. In this world, people won't necessarily find jobs in the conventional sense, but nor will they need them.
4. If none of the above transpire, there would be a problem (immiseration of some, potentially large, portion of the population). But this should be quite avoidable, per 1/2/3 above.
Noah added:
Sounds right to me! But keep in mind that (4) also only happens if AI fully replaces human skills across the board. If there are still skills only humans can do, then we don't even need 1/2/3 to keep being valuable.
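To make the comparative-advantage point concrete, here's a toy numerical sketch. All of the figures are invented for illustration (none of this is from our conversation): even if AI is absolutely better at every task, a cap on total AI hours means output is highest when the AI specializes where its edge is largest and humans handle the rest.

```python
# Toy illustration of comparative advantage under a constrained AI supply.
# All numbers are made up for the example.

# Output per hour of effort. The AI is absolutely better at both tasks,
# but its edge is 50x on "code" and only 5x on "support".
ai_output = {"code": 100, "support": 50}
human_output = {"code": 2, "support": 10}

ai_hours = 1_000      # capped, e.g. by chip supply
human_hours = 10_000  # there are a lot of humans

# Scenario A: humans sit idle, the AI splits its limited hours evenly.
solo = {task: ai_output[task] * ai_hours / 2 for task in ai_output}

# Scenario B: the AI specializes in code (its largest advantage) and
# humans do all the support work.
together = {
    "code": ai_output["code"] * ai_hours,
    "support": human_output["support"] * human_hours,
}

print(solo)      # {'code': 50000.0, 'support': 25000.0}
print(together)  # {'code': 100000, 'support': 100000}
```

Output of both goods is higher in the second scenario, so human labor remains valuable: the AI's opportunity cost of doing support work (forgone code) is too high. The constraint on AI hours is what makes this work; remove it, and scenario A stops being a trade-off at all.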
(A complete transcript of our conversation can be found here – thanks Noah for inviting me to publish it.)
To be clear, this doesn't mean we agree on everything. I still think we will eventually get to a point where AI destroys the conventional job market, and I don’t get the sense Noah expects that. But look at all the good things which came out of our conversation:
I've moved from baffled disagreement on a question of logic3, to a legitimate difference of expectation regarding concrete future developments, such as potential constraints on chip supply.
I understand where and why we diverge, such that I am still able to trust Noah’s opinions in general.
I've received a useful reality check on my own ideas, and uncovered some assumptions I hadn’t realized were important to my thinking.
If I find myself in another discussion of this topic, and someone cites Noah's blog post, I can point out that Noah agrees that his argument depends on 1/2/3 above and we can then dive into those questions.
If I hope to shift the broader public discourse on this question, I have a better idea how to go about it.
Perhaps most important, I made a connection with Noah. It's not like we're buddies; I don't know whether we'll communicate again. But if we do, we'll have an increment of mutual context, and hopefully respect and trust.
Why This Worked
When two Internet strangers come together to discuss an important topic on which they disagree, success is far from guaranteed. It can easily end in frustration, acrimony, and disdain, and with no forward progress on any informational point. Especially over a medium like email, lacking in nonverbal cues or reminders of one another's humanity, where it's so easy for misunderstandings to fester.
So why did our conversation go well? I credit the following factors:
1. I respect Noah, having read countless thousands of words on his blog. As a result, I trusted that the conversation was worth investing in, so I read his words carefully and took care to express my ideas clearly.
2. I don't know whether Noah knew me from Adam, but he also put in the effort to make things work. He read and engaged with what I had to say, he stuck with the conversation through multiple rounds, he stayed constructive and friendly throughout.
3. We're both good communicators in written form. He does this professionally, and I've honed my craft over years of consensus-seeking in workplace discussions among distributed teams.
4. We were communicating in private, with no audience to play to or get distracted by, and no need to score quick points.
5. We had achievable goals. Instead of trying to change one another's minds, we simply analyzed our disagreement until we teased out the differences in our underlying assumptions. In particular, we have different intuitions as to the likelihood of constraints on the amount of computing capacity that can be manufactured.
6. We stayed focused; neither of us introduced new topics or clung to unnecessary supporting points.
7. We had a good starting point. Noah's blog post did an excellent job of explaining his argument, which allowed me to understand where he was coming from and start our conversation at a point fairly close to the key difference in our assumptions.
Not every conversation will have these advantages, but in an upcoming post, I’ll be presenting some ideas for how to foster more good discussions.
Conversation is More Efficient Than Posting Rebuttals
Apparently economics professor Robin Hanson has published some strong views on the value of modern medicine. Scott Alexander, a blogger I greatly admire, decided that Hanson “more or less believes medicine doesn’t work”, and to address this, he posted a nearly 7,000-word critique. Hanson wrote a 2,200-word response, and on the day I’m writing this, Alexander followed up with another 5,400 words.
Much of the disagreement seems to be about what Hanson’s views actually are. In his response, Hanson states that Alexander mischaracterized his views. Alexander replied:
I acknowledge he’s the expert on his own opinion, so I guess I must be misrepresenting him, and I apologize. But I can’t figure out how these claims fit together coherently with what he’s said in the past. So I’ll lay out my thoughts on why that is, and he can decide if this is worth another post where he clarifies his position.
(I think this is a polite way of saying “Hanson has been all over the place on this topic, and I’d appreciate it if he would acknowledge that.” Note that I haven’t looked into any of this myself, I am just echoing Alexander here.)
So basically, we’re now 14,000 words in, and readers of Alexander’s blog are left in confusion as to the position Hanson would defend4. Possibly, if the back-and-forth continues, we’ll eventually get somewhere. But it would be so much faster and easier if these two would just talk directly.
The big problem, I think, is that because Alexander is addressing his audience in a static blog post instead of engaging directly with Hanson, he feels the need to be systematic. He can’t wait to see whether the reader has grasped his point, so he throws in everything he could possibly say up front. When there’s no opportunity for back-and-forth, sometimes this is the best you can do. But Hanson is responding! It would be much more efficient to talk to him directly, fast-forward to agreement on what their respective views are (and how they might differ), and only then start presenting public evidence to debate their actual differences.
(It’s worth noting that Scott Alexander does often engage in direct conversation with folks whose opinions he questions; I always appreciate reading about those conversations.)
Direct Engagement Doesn't Always Work
I’ve recently come across some examples where disagreeing parties engaged in direct conversation, and failed to arrive at a shared understanding. These strike me as exceptions that prove the rule5.
First, the February debate between Beff Jezos and Connor Leahy, prominent figures with opposing views on AI safety. The three-hour session didn’t seem to do much to advance anyone’s understanding of anything. The participants (Leahy in particular) routinely interrupted one another, the conversation constantly jumped around, and there was heavy reliance on abstractions and hypotheticals that were easily misconstrued. As a result, while some interesting ideas came out, nothing was ever really settled and it’s not clear to me that the participants (let alone the audience) properly understood one another’s ideas. This might have gone better with a more clearly defined structure, supported by a moderator.
Second, the Rootclaim $100,000 Lab Leak Debate, intended to resolve the question of whether COVID originated in a lab leak. The process was incredibly rigorous, with three separate debate sessions totaling 15 hours (!), supported by massive research and preparation by both participants, and two judges putting in roughly 100 hours each6. This failed to produce consensus, in the sense that the loser of the debate disagrees with the outcome. However, consensus on the origins of COVID was a very ambitious goal, given the extent to which primary evidence went uncollected or was actively concealed. And the thorough debate did succeed in massively advancing the public understanding of the topic. I consider this to be a noteworthy achievement.
Finally, forecasting existential risk from AI: the Forecasting Research Institute paired eleven expert forecasters with eleven AI safety experts. Each group was chosen for holding strongly opposed views on AI risk; the selected forecasters on average put the probability of AI doom at 0.1%, and the safety experts put it at 25%. After 80 hours of research and discussion, the two groups barely budged. My sense – take this with a grain of salt – is that the AI safety folks in question were deep into a very specific worldview, and did not do a great job of communicating that worldview to folks outside their circle7.
The upshot is that for a conversation to go well, you need some combination of: a tractable topic (all three of these examples involved very difficult topics), participants who are skilled communicators, and a highly engaged and skilled moderator. For complex topics, a large time commitment will also be needed.
Talking: It's Good
I don't know how much time I invested in my conversation with Noah Smith. A couple of hours, all told? Certainly less than I spent writing my original rebuttal post. In return, I understand where Noah and I diverge; I reaffirmed my overall trust in his writing; and I better understand my own position. If I hope to shift the broader public discourse on this question, I have a better idea how to go about it. Last but not least, I made a connection with someone I respect. Not a bad payoff!
I’m cooking up an initiative to generate more productive conversations about AI. Getting to see this work in practice was validating, and I’m looking forward to doing a lot more of it, in an environment that’s set up for success: with well-defined topics, active moderation, and committed participants.
The next time you're tempted to dunk on (what strikes you as) a bad take, consider whether you could instead reach out and start a conversation. You might accomplish something; you might learn something; you might make a connection. The world doesn't really need more dunks, but it desperately needs more connections.
1. I gather that, as this was going on, Noah was also responding to various other responses to his post. For instance, from Zvi Mowshowitz’s blog:

Before I get to this week’s paper, I will note that Noah Smith reacted to my comments on his post in this Twitter thread indicating that he felt my tone missed the mark and was too aggressive (I don’t agree, but it’s not about me), after which I responded attempting to clarify my positions, for those interested.

There was also a New York Times op-ed about this, in which Smith clarified his thoughts:

I asked Smith by email what he thought of the comments by Autor, Acemoglu and Mollick. He wrote that the future of human work hinges on whether A.I. is or isn't allowed to consume all the energy that's available. If it isn't, "then humans will have some energy to consume, and then the logic of comparative advantage is in full effect."

He added: "From this line of reasoning we can see that if we want government to protect human jobs, we don't need a thicket of job-specific regulations. All we need is ONE regulation – a limit on the fraction of energy that can go to data centers.”

Matt Reardon: Assuming super-human AGI, every economist interviewed for this NYT piece agrees that you'll need to cap the resources available to AI to avoid impoverishing most humans.

2. I’ve lightly edited this for clarity.

3. That is, whether the economic concept of comparative advantage somehow renders it mathematically impossible for AIs to permanently disrupt the job market.

4. Source: I am a reader of Alexander’s blog, I am in confusion, and pride prevents me from believing that it’s just me.

5. In the original sense of that phrase, in which “prove” means “test”, not “support”. That is, these are exceptions which test the rule and show its limits.

6. In addition to the 15 hours of debate, the judges spent time fact-checking participants’ claims, assessing the merits, and writing up their verdicts.

7. Appendix 8 of the report on an earlier stage of this project, beginning on page 113, contains raw samples of the actual discussions between participants and makes for interesting reading. The conversation wandered into all sorts of strange places, such as the probability that advanced AI encloses the Sun in a Dyson sphere within the next 77 years.