11 Comments

I think you've got it exactly backwards. The Board did fire Sam, and for a while they had the support of people presuming it was because Sam had done something bad. It was only when it became clear that there was no actual reason that the entire world turned against them and brought Sam back.

The answer clearly is that switching off the AI would work, but you'd better have an actual reason to do it.

Author · Dec 16, 2023 (edited)

Agreed that the outcome significantly depended on the board failing to articulate any compelling reason for having fired Sam. They also failed to execute well in other important ways, such as not getting key stakeholders on board ahead of time.

I don't think we know that they didn't have valid reasons for firing him. The board failed to publicly *articulate* reasons, which was inane, but subsequently of course there have been extensive leaks. Did the evidence the board was looking at add up to valid grounds for dismissal? I don't know. Given that they felt he had to go, did they handle it well? Of course not, it was horribly mishandled. Apparently in part because they were acting in haste, out of fear that once he knew what was up, he would outmaneuver them – which of course is exactly what happened.

Regardless of whether the board had valid reasons for firing Sam, I think this incident clearly illustrates that the people in control of the switch are subject to pressure. Their ability to resist that pressure is hopefully influenced by the actual facts of the situation (was there a good reason to fire Sam, is the AI behaving dangerously), but is also affected by their ability to manage communications with key stakeholders and the public, and the profit motives of those around them.

A hypothetical Button Pushing Committee might not handle communications any better than OpenAI's board just did, and they might be up against profit motives that are even larger than for OpenAI today. They might also be dealing with grounds for concern that are even more subtle and difficult to explain than those in play here. Under such circumstances, can we be confident that, given they have valid reasons for concern, they will be able to successfully resist pressure not to shut down a stupendously profitable AI?


> I don't think we know that they didn't have valid reasons for firing him.

I mean ... at this point we know that they haven't said why, either publicly or privately to anyone, including Satya Nadella or Emmett Shear. To think they still had a good reason is epistemic malpractice IMO.

A hypothetical Button Pushing Committee would hopefully be slightly less incompetent and be able to actually *say something* about why they're pushing the button. I think the correct update here is that if you're going to do something drastic you should be able to explain why. Which we can all agree is a basic standard to hold an oversight committee to. "Trust me" is not enough.

Author

I didn't say they had good reason. I said I don't think we know that they don't. In the subsequent weeks, there have been a lot of leaks that point toward at least a plausible (I do not say confirmed) picture. In particular, the New York Times reporting that Altman was lying to board members as part of a push to get Helen Toner off the board (https://www.nytimes.com/2023/12/09/technology/openai-altman-inside-crisis.html). But also see, for instance, https://www.washingtonpost.com/technology/2023/12/08/open-ai-sam-altman-complaints/. And for background, https://twitter.com/geoffreyirving/status/1726754277618491416?s=46 and https://www.washingtonpost.com/technology/2023/11/22/sam-altman-fired-y-combinator-paul-graham/.

Again, I don't claim to know the precise reasons the board acted, nor whether they were valid reasons. I do think the information that has been leaking out leaves plausible room for the idea that the board may have had valid reasons. I'm avoiding a stronger statement because I don't want to get bogged down on the details here.

I also think we need to remain humble regarding our assumptions as to the behavior of future oversight committees. The job is not set up for stellar execution. It's not a good avenue to advance one's career; the best you can hope for is not to piss too many people off during your tenure. You don't really get to practice. You'll often be acting under conditions of uncertainty, "fog of war". Things may be very boring for a long time and then suddenly develop very rapidly. There may be ambiguous signals, or signals that you can't talk about for competitive or security reasons. The people you rely on for information will have incentives to manipulate you. And so forth.


I think the defining requirement for any oversight committee to be considered effective at its job is, at a minimum, being able to explain why it takes consequential decisions. If they do this repeatedly and are proven correct, they can earn a measure of trust. That's true of every position of authority we select in the world, from doctors to presidents to judges to company boards. Until that bar is met, the presumption that they might have had a reason we just don't know about, while mathematically valid, is grossly insufficient.

Author

I think you're simply arguing that the OpenAI board did not execute their function well? I agree with you on that: either Altman should not have been fired, in which case they erred by firing him; or he should have been fired, in which case they erred by failing to explain why. So I'm not sure what you're disagreeing with.


That the lesson I draw is the opposite of yours: this weird structure worked as intended, and the board did exercise their right to fire Sam with no legal or operational repercussions. It was only their incompetence that made people mad and got it reversed.


I find it bizarre how highly specific and localized the arguments tend to be about how we would lose control of AGI. The story you wrote is, to my mind, much more to the point: there are a zillion ways things can play out poorly, and pretending we could anticipate them all is foolish.

Essentially, e/acc and these companies have decided that if they release successive sub-AGI versions of increasing quality, society will adjust, if bumpily. In every interview, right after saying how AGI will usher in a near-utopia, Altman takes care to emphasize that terrible things will also be done with it. He just thinks we will then course-correct. I do think he is basically sincere that this is the best path, but who really knows.

However, Sutskever recently said he hadn't ruled out eventually merging with an AGI himself, and this is from someone in that community who seems among the most worried about the pace at which things are evolving. I don't think most people would be comfortable with what I perceive as the percentage of powerful people in the small AI community who flirt with man/machine integration (a la Neuralink). It makes one think they would ultimately be fine with an AI takeover, as long as it is "controlled" and, implicitly, as long as THEY WOULD ALREADY BE UPLOADED INTO THE AI AND THUS BE PART OF THE TAKEOVER. This is creepy as hell, and more of these tech guys should be grilled about it. I get that until a year ago asking such questions seemed absurdly theoretical, but it doesn't seem quite so far off now.

I can't look away, but increasingly the AI stuff feels like watching the Indianapolis 500: continually exciting, but also with the fear/excitement of a horrible crash that could happen at any moment.
