Date: 2023-11-21T00:27:33.000Z
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
Casey, how was your weekend?
[CHUCKLES]: Well, there was no weekend. There was only work.
That was a trick question.
There will only ever be work. Yeah.
(LAUGHING) Yes.
What is happening, Kevin?
(LAUGHING) I don’t know, man. I am on two hours of sleep. I’ve been working all weekend, and I’m increasingly certain that we are, in fact, living in a simulation.
I mean, it would be nice if we were living in a simulation, because that would suggest that there is at least some sort of plan for what might be about to happen next. But I think recent events would suggest that actually there is not.
Yeah, I had a moment this morning where I woke up and I looked at my phone from my two-hour nap, and I was like, I’m huffing fumes. This can’t be real.
I mean, let’s just say, like, over the course of a weekend, OpenAI as we know it ceased to exist. And by the time this podcast gets put on the air, I would believe anything you told me about the future of OpenAI, up to and including that it had been purchased by Etsy and was becoming a maker of handcrafted coffee mugs.
[LAUGHS]: Honestly, it would not be the strangest thing that’s happened this weekend.
Not remotely! It wouldn’t be in the top five!
[MUSIC PLAYING]
I’m Kevin Roose, a tech columnist for “The New York Times.”
I’m Casey Newton from “Platformer.”
And this is “Hard Fork.”
This week on the show, one of the wildest weekends in recent memory — we’ll tell you everything that happened at OpenAI and what’s going on with Sam Altman. And then, later in the show, we will present to you our interview with Sam Altman from last week. So before he was fired, we asked him about the future of AI, and we’re going to share that conversation with you.
So this episode is going to have two parts. The first part, we’re going to talk about the news that happened at OpenAI over the weekend and run down all of the latest drama and talk about where we think things are headed from here. And then, we are going to play that Sam Altman interview that we discussed on our last emergency podcast, the one that we conducted last week and planned to run this week, but that has since become fascinating for very different reasons.
So let’s just run down what has happened so far, because there’s been so much. It’s, like, enough to fill one of those epic Russian novels or something. So on Friday —
And the good news, by the way, is that the events are all very easy to understand. There’s no way you’ll mess up while trying to describe what happened over the past three days.
[LAUGHS]: Yeah, let’s try this on a couple hours of sleep. OK. So Friday, when we recorded our last emergency podcast episode, Sam Altman had just been fired by the board of OpenAI. He was fired for what were essentially vague and unspecified reasons.
The board put out a statement, sort of saying that he had not been candid with them, but they didn’t say more about what exactly had led them to decide that he was no longer fit to run the company. So he’s fired. It’s this huge deal, huge shock to all of the people at OpenAI and in the tech industry. And then, it just keeps getting weirder.
So Greg Brockman, the president and co-founder of OpenAI, announces that he, too, is quitting. Some other senior researchers resign as well. And then, Saturday rolls around, and we still don’t really know what happened.
Brad Lightcap, who is OpenAI’s COO, sent out a memo to employees explaining that they know that Sam was not fired for any kind of malfeasance, right? This wasn’t like a financial crime or anything related to a big data leak or anything. He says, quote, “This was a breakdown in communication between Sam and the board.”
And we should say that by the time Brad put that letter out, there had been reporting that at an all-hands, Ilya Sutskever, the chief scientist at the company and a member of the board, had told employees that getting rid of Sam was the only way to ensure that OpenAI could safely build AI, which led to a lot of speculation and commentary that this was an AI safety issue driven by Effective Altruists on the board.
So it was very significant when we then got a letter from Brad saying explicitly, this was not an AI safety issue. And of course, that only served to make us even more confused. But lucky for us, there was plenty more confusion on the way.
(LAUGHING) Yeah, this was actually the clearest that things would be for the rest of the next 48 hours. So OpenAI — its executives are saying this isn’t about safety or anything related to our practices. But what we know from reporting that I and my colleagues did over the weekend is that this actually was at least partially about AI safety and that one of the big fault lines between Sam Altman and the board was over the safety issue, was over whether he was moving too aggressively without taking the proper precautions.
Yeah.
After this memo from the COO went out, there were reports that investors in OpenAI, including Sequoia Capital, Thrive Capital, and also Microsoft, which is the biggest investor in OpenAI, were exerting pressure on the board to reverse their decision and to reinstate Sam as CEO, and then for the entire board to resign. They had sort of a deadline for figuring some of this stuff out, which is 5 PM on Saturday.
That came and went with no resolution. And then Sunday, a bunch of senior OpenAI people, including Sam Altman, who is, by the way, now no longer the CEO of this company, officially gather at the offices of OpenAI in San Francisco to try to work through this all.
That’s right. There is some reporting that all of a sudden, at least some people on the board are open to the idea of Sam returning, which was one of those moments that was both shocking and not at all surprising. Shocking, because they had just gotten rid of him. Not at all surprising, because I think by that point, it had started to dawn on the world, and on OpenAI in particular, what it would mean for Altman to no longer be associated with this company where he had recruited most of the star talent.
Totally. And the employees of OpenAI were sort of making their feelings known as well. They did this sort of campaign on Saturday, where they were posting heart emojis in quote posts of Sam, sort of indicating that they stood by him and that they would follow him if he decided to leave and start another company or something like that.
Yeah, it was something to behold. It was essentially a labor action aimed at the board. And what I will say is that in this moment, you realized the degree to which the odds were weirdly stacked against the board. Because on one hand, the board had all of the power when it came to firing Sam.
But beyond that, there is still a company to run. There is still technology to build. And so now, you had many employees of the company being very public in saying, hey, we do not have your back. We did not sign up for this, and you’re in trouble.
Yeah, and so on Sunday, there was a moment where it sort of looked like Sam Altman was going to return and sort of take his spot back as the CEO of this company. He posted a photo of himself in the OpenAI office wearing a guest badge, like one that you would give to a visitor to your office.
I will say, I have worn that exact badge at OpenAI headquarters before.
[LAUGHS]: Yeah. And so the caption on the photo was something like, this is the first and last time I’ll ever wear one of these. So it kind of sounded like he was setting the scene for a return as well. And I would say there was just a feeling among especially the company’s investors, but also a lot of employees and just people who work in the industry, that this wasn’t going to stand, that there were too many mad employees, that the stakes of blowing this company up over this disagreement were too high. And if there really wasn’t a smoking gun, if there was really nothing concrete that the board was going to hold up and say, this is why we fired Sam Altman, there was this sense that that just wasn’t going to work, that there was no way that the board could actually go through with this firing.
Yeah. And I think one way that the employees and the former executives were very effective was in using social media to create this picture of the overwhelming support that was behind them, right? So if you were an observer to this situation, you’re only seeing one side of the story, right? Because the board is not out there posting. They haven’t issued a statement that lays out any details about what Sam allegedly did.
And so instead, you just have a bunch of people saying, like, hey, Sam was a great CEO. I love working for the guy. OpenAI is nothing without him. All these posts are getting massively reshared. It’s easy to look at that and think, oh, yeah, he’s probably going to be back in power by the end of the day.
Totally. So that was the scene as of Sunday afternoon. But then, Sunday evening, a new deadline of 5 PM Pacific Time had been set for some kind of resolution. That also comes and goes, and there is no word from OpenAI’s headquarters about what the heck is going on.
It sort of feels like there’s, like, a papal conclave and everyone is waiting for the white smoke to emerge from the chimney. And then, we get word that the board of directors of OpenAI has sent a note to employees announcing that Sam Altman will not return as CEO after all, and sort of standing by its decision.
They still didn’t give a firm reason or a specific reason why they pushed him out. But they said that, quote, “Put simply, Sam’s behavior and lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company in the manner it was mandated to do.” And they announced that they have appointed a new interim CEO.
Now, remember, this company already had an interim CEO, Mira Murati, the former chief technology officer of OpenAI who had been appointed on Friday. She also signaled her support for Sam and Greg, and reporting suggests that she actually tried to have them brought back.
And because of that, the board decided to replace her as well. So Mira Murati’s reign as the temporary CEO of OpenAI lasted about 48 hours before she was replaced by Emmett Shear, who is the former CEO of Twitch and who was the board’s choice to take over on an interim basis.
The board found an alternative man — or Altman — to lead the company.
[LAUGHS]: So that was already mind-blowing. This happened at night on Sunday. And I thought, well, clearly, things cannot get any crazier than this.
That’s when I went to bed, by the way. I was like, whatever’s happening with these people can wait till the morning. And then, of course, I wake up, and an additional four years’ worth of news has happened.
Yes. So after this announcement about Sam Altman not returning and Emmett Shear being appointed as the interim CEO, there is a full-on staff revolt at OpenAI. The employees are outraged. They start threatening to quit. And then, just a couple of hours after this note from the board of directors comes out, Microsoft announces that it is hiring Sam Altman and Greg Brockman to lead an advanced research lab at the company.
An advanced research lab, I assume, means that Satya has just given those two a fiefdom, and they will be given an unlimited budget to do whatever the heck they want. But of course, because Microsoft owns 49 percent of OpenAI, at this advanced research unit, Sam and Greg and all their old friends from OpenAI will have access to all of the APIs, everything that they were doing before. They will just get to pick up where they left off and build everything that they were going to do, but now, firmly under the auspices of a for-profit corporation and, by the way, one of the very biggest giants in the world.
Yeah. So I think it’s worth just pausing a beat on this move, because it is truly a wild twist in this saga. So just to explain why this is so crazy, so Microsoft is the biggest investor in OpenAI. They’ve put $13 billion into the company.
They’re also sort of highly dependent on OpenAI, because they’ve now built OpenAI’s models into a bunch of their products that they are kind of betting the future of Microsoft on, in some sense. And this was a big bet for them that, over the course of a weekend, was threatening to fall apart, right?
Sam Altman and Greg Brockman were the leaders of OpenAI. They were the people that Microsoft was most interested in having run the company. Microsoft did not like this new plan to have Emmett Shear take over as CEO. And —
They said it’s “shear” madness, Kevin!
[LAUGHS]: And so they did kind of the next best thing, which was to poach the leaders of OpenAI, the deposed leaders, and bring them into Microsoft, along with, presumably, many of their colleagues who will be leaving OpenAI in protest if the board sticks to this decision.
Yeah, man. So this one threw me for a loop. Because if you have spoken with Sam or Greg or many of the people who work at OpenAI, you get the strong impression these people like working at a startup, OK? Working at OpenAI is, in many ways, the opposite of working at a company like Microsoft, which is this massive bureaucracy with so much process for doing anything.
I think they really liked working at this nimble thing, at a new thing, being able to chart their own destiny. Keep in mind, OpenAI was about to become the sort of only big, new consumer technology company that we have seen in a long time in Silicon Valley. And so initially, it’s like, OK, they’re going to work at Microsoft? What the heck?
Because Kevin, one thing you didn’t mention — which is fine, because we did have to get through that timeline — but it’s like, the instant that Sam was fired, reporting started leaking out he was starting a new company with Greg, right? So my assumption had been, these guys are going to go off back into startup land. They’re going to raise an unlimited amount of money and do whatever they want.
At the same time, you think about where they were in their mission when all of this happened on Friday. And they had a very clear roadmap, I think, for the next year. And if they had to go out, raise money, build a new team, and train a large language model, think about how much time it would take them just to get back to where they were before, right?
They would probably lose a year, if not more, of development. So — and this is pure speculation, but my guess is that part of their calculus was, look, if we deal with the devil we know and we go to Microsoft, we get to play with all of our old toys, we will have an unlimited budget, and we can skip the fundraising and team-building stage and just get back to work. So I have to believe that was the calculus. But that said, it still was a very unexpected outcome, at least to me.
It’s a crazy outcome. And it means that Microsoft now has a hand in two, essentially, warring AI companies, right? They have what remains of OpenAI. And they have this long-term deal with OpenAI. And they also control, by the way, the computing power that OpenAI uses to run its models, which gives them some leverage there. So it is a fascinating position that Microsoft is now in and really makes them look even more dominant in AI than they already did.
That’s right. But listen. All of that said, everything that you just said is true, as we record. However, Kevin, by the end of the day, I would believe any of the following scenarios. Greg and Sam have quit Microsoft. Greg and Sam are starting their own company.
Greg and Sam have returned to OpenAI. Greg and Sam have retired from public life. Greg and Sam have opened an Etsy store. This is all within the realm of possibility to me, OK?
So if we’re back doing another one of these emergency pods tomorrow, I just want to say that while I accept that everything that Kevin just said is true, I’m only 5 percent confident that any of it lasts to the end of the week.
Yes. We are still in the zone where anything can happen. In fact, there have been some things that have happened even since the Microsoft announcement. So super early on Monday morning, like 1 AM Pacific Time, when I was still up — but I guess you were asleep, because some of us aren’t built for the grindset —
Emmett Shear, the new interim CEO of OpenAI, put out a statement saying that he would basically be digging into what happened over the past weekend, speaking to employees and customers, and then trying to kind of restore stability at the company. And my read of this letter was that he was basically telling OpenAI employees, you know, please don’t quit. I am not the doomer that you think I am, and you can continue to work here.
Because one other thing that we should say about Emmett Shear is that while we don’t know a ton about his views on AI and AI progress, he has done some interviews where he’s indicated that he is something of an AI pessimist, that he doesn’t think AI should be moving ahead so quickly, that he wants to actually slow it down, which is a position that is at odds with what we know Sam Altman believes.
Yeah. As soon as he was named, people found a recent interview he gave where he said that his P doom, his probability that AI will cause doom, was between 5 percent and 50 percent. But if you listen to that interview, it sure sounds like the P doom is closer to 50 than it is to 5, I would say.
The other interesting thing in that statement is that Emmett said, before he took the job, he checked on the reasoning behind firing Sam. And he said, quote, “The board did not remove Sam over any specific disagreement on safety. Their reasoning was completely different from that.” So once again, we have someone talking about the firing without telling us anything, and making it even more confusing.
Totally. But that is not even the end of the timeline. We are still going. Because after this 1 AM memo from Emmett Shear, OpenAI employees start collecting signatures on what amounts to an ultimatum, saying that they will quit if the board does not resign and bring Sam Altman back as CEO.
This letter starts going around OpenAI and eventually collects the signatures of the vast majority of the company’s roughly 700 employees — almost all of its senior leadership and the rank-and-file — saying that if the board does not resign and bring back Sam Altman, they will go work for Microsoft or just leave OpenAI.
And do you know how much you have to hate your job to go work for Microsoft? These people are pissed, Kevin.
[LAUGHS]: And then, as if it couldn’t get any crazier, just Monday morning, Ilya Sutskever, the OpenAI co-founder and chief scientist and board member who started all of this, who led the coup against Sam Altman and rallied the board to force him out, posted on X, saying that he, quote, “deeply regrets his participation in the board’s actions.” He said, quote, “I never intended to harm OpenAI. I love everything we’ve built together, and I will do everything I can to reunite the company.”
So that is it. That is the entire timeline of the weekend up to the point that we are recording this episode. Casey, are you OK? Do you need to lie down?
I — well, I do need to lie down. But you know, sometimes, Kevin, when you’re watching a TV show or a movie and the central antagonist has a sudden change of heart that’s completely unexplained, there’s no obvious motivation, I always feel like, wow, the writers really copped out on this one. At least give us some sort of arc.
That was the moment Ilya Sutskever had, where, as you say, after leading the charge to get rid of Sam for reasons that the board did not specify but that Ilya strongly hinted had something to do with AI safety, he now spins around and says, hey, it’s time to get the band back together. I mean, just a tremendously undermining moment for the board, generally, and for Ilya in particular.
Totally. So right now, as things stand, there are a lot of different factions who have different feelings and emotions about what’s going on. There’s the people at OpenAI, the vast majority of whom are opposed to the board’s actions here and are threatening to walk out if they’re not reversed. There are the investors in OpenAI who are furious about how all of this is playing out. So a lot of people with a lot of high emotions and a lot of uncertainty, yelling at these — what used to be four and are now three OpenAI board members who have decided to just stand their ground and stick it out.
So let’s pause there. Because I think that while all of us agree that the board badly mishandled this situation, it is worth taking a beat on what this board’s role is. When I listen back to the episode that we did on Friday, this is a place where I wish I had drilled down a little bit deeper. The mission of this board is to safely develop a superintelligence, absent any commercial motive.
That is the goal of this board. This board was put together with the idea that if you have a big company — like, let’s say, a Microsoft — that is in charge of a superintelligence, that — and what is that? Something that is smarter than us, right? Something that will outcompete us in natural selection.
They didn’t want that to be owned by a for-profit corporation, right? And something happened, where three of — at one point, at least four of the people on this board, and now, it’s down to three, but three of those people thought, we are not achieving this mission.
Sam did something, or he didn’t do something, or he behaved in some way that made us feel like we cannot safely build a superintelligence, and so we need to find somebody else to run that company. And until we know why they felt that way, there is part of me that just feels like, we just can’t fully process our feelings on this, right? Like, I think it was actually really depressing to see how quickly polarizing this became on social media as it sort of turned into Team Sam versus Team Safety.
That’s actually a really bad outcome for society, right? Because I think we do want — if we’re going to build a superintelligence, I would like to see it built safely. I’m not sure that it is a for-profit corporation that is going to do the best job with that, having watched for-profit corporations create a lot of social harm in my lifetime. Right?
So I just want to say that, that I’m sure, before the end of this podcast, we will continue to criticize the board for the way that it handled this. But at the same time, it’s important to remember what their mission was and to assume that they had at least some reasons for doing what they did.
Yeah. I mean, I was talking to people all day yesterday who thought that the money would win here, basically, that these investors and Microsoft — they were powerful enough, and they had enough stake in the outcome here that they would, by any means necessary, get Sam Altman and Greg Brockman back to OpenAI. And I was very surprised when that didn’t happen. But maybe I shouldn’t have been.
Because as someone who was sort of involved with the situation pointed out to me when I talked to them yesterday, the board has the ultimate leverage here. This structure, this convoluted governance structure, where there’s a non-profit that controls a for-profit, and the non-profit can vote to fire the CEO at any time — like, it was set up for this purpose. I mean, you can argue with how they executed it. And I would say they executed it very badly. But it was meant to give the board the power to shut this all down if they determined that what was happening at OpenAI was unsafe or was not going to lead to broadly beneficial AGI. And it sounds like that’s what happened.
That’s right. Another piece that I would point to — my friend, Eric Newcomer, wrote a good column about this, just pointing out that Sam has had abrupt breaks with folks in the past, right? He had an abrupt break with Y Combinator, where he used to lead the startup incubator. He had an abrupt break with Elon Musk, who co-founded OpenAI with him.
He had an abrupt break with the folks who left OpenAI to go start Anthropic for what they described as AI safety reasons, right? So there is a history there that suggests that — right now, a lot of people think that the board is crazy. But these are not the first people to say Sam Altman is not building AI safely.
Right. Here’s the thing. Like, I still think there has to have been some inciting incident, right? This does not feel to me like it was kind of a slow accumulation of worry by Ilya Sutskever and the more safety-minded board members, where they just woke up one day and said, you know what? Like, it’s just gotten a little too aggressive over there, so let’s shut this thing down.
I still think there had to have been some incident, something that Ilya Sutskever and the board saw, that made them think that they had to act now. So, so much is changing. We have to keep going back to this caveat of, like, we still don’t know what is going to happen in the next hour, to say nothing of the next day or week or month.
But that is the state of play right now. And I think this is — I mean, Casey, I don’t know how you feel about this, but I would say this is the most fascinating and crazy story that I have ever covered in my career as a tech journalist. I cannot remember anything that made my head spin as much as this.
Yeah, certainly, in terms of the number of unexplained and unexpected twists, it’s hard for me to think of another story that comes close. But I think we should look forward a little bit and talk about what this might mean for OpenAI in particular. OpenAI was described to me over the weekend by a former employee as a money incinerator.
ChatGPT does not make money. Training these models is incredibly expensive. The whole reason OpenAI became a for-profit company was because it cost so much money to build and maintain and run these models.
When Sam was fired, it has been reported that he was out there raising money to put back into the incinerator. So think about the position that leaves the OpenAI board in. Let’s say that they’re able to staunch the bleeding and retain a couple hundred people who are closely associated with the mission, and the board thinks that these are the right people.
Who is going to give them the money to continue their work after what has just happened? Right? Now, look, Emmett Shear is very well regarded in Silicon Valley. I was texting with sources last night who were sort of very excited that he was the guy that they chose. And so no disrespect to him.
But this board has shown that it is serious when it says it does not have a profit incentive. So unless it’s going to go out there and start raising money from foundations and philanthropists and kindly billionaires, I do not see how they get the money to keep maintaining the status quo. And so in a very real sense, over the weekend, OpenAI truly may have died.
It truly may have. I mean, you’re right. Like, we are going to take a bunch of money and incinerate it. And by the way, “we’re also going to move very slowly and not accelerate progress” is not a compelling pitch to investors.
And so I don’t think that the sort of new OpenAI is going to have a good time when it goes out to raise its next round of funding or — by the way — and this is another factor that we haven’t talked about — to close this tender offer, this round of secondary investment that was going to give OpenAI employees a chance to cash out some of their shares — that, I would say, is doomed.
Yeah. And that — I’m sure that motivated a lot of the signatures on the letter demanding that Sam and Greg come back, right? Because those people were about to get paid, and not anymore.
Totally. So that’s some of what lies ahead for Microsoft and OpenAI, although anything could change. And that brings us to the interview that we had with Sam Altman last week. So last Wednesday, before any of this happened, two days before he was fired —
It was a simpler, more innocent time.
(LAUGHING) It’s true. I actually do feel like that was about a year and a half ago. So we sat down with Sam Altman, and we asked him all kinds of questions, both about the year since ChatGPT was launched and what had happened since then, and also about the future and his thoughts about where AI was headed and where OpenAI was headed.
So then, all this news broke, and we thought, well, what do we do with this interview now? And we thought about, should we even run it? Should we chop it up and just play the most relevant bits? But we ultimately decided, like, we should just put the whole thing out.
Put it out there.
Yeah. So I would just say to listeners, like, as you listen to this interview, you may be thinking, like, why are these guys asking about ChatGPT? Who cares about ChatGPT? We’ve got bigger fish to fry here, people.
But just keep in mind that when we recorded this, none of this drama had happened yet. The biggest news in Sam Altman’s world was that the one-year anniversary of ChatGPT was coming up, and we wanted to ask him to reflect on that. So just keep in mind these are questions from Earth One, and we are now on Earth Two, and just bear that in mind as you listen.
But I would say that the issues that we talked about with Sam — some of the things around the acceleration of progress at OpenAI and his view of the future and his optimism about what building powerful AI could do — those are some of the key issues that seem to have motivated this coup by the board. So I think it’s still very relevant, even though the specific facts on the ground have changed so much since we recorded with him.
So in this interview, you’ll hear us talk about existential risk, AI safety. If that’s a subject you haven’t been paying much attention to, the fear here is that as these systems grow more powerful, and they are already growing exponentially more powerful year by year, at some point, they may become smarter than us. Their goals may diverge from ours. And so for folks who follow this stuff closely, there’s a big debate about how seriously we should take that risk.
Right, and there’s also a big debate in the tech world more broadly about whether AI and technology in general should be progressing faster or whether things are already going too fast and they should be slowed down. And so when we ask him about being an accelerationist, that’s what we’re talking about.
And I should say, I texted Sam this morning to see if there was anything that he wanted to say or add, and as we record, have not heard back from him yet.
When we come back, our interview from last week with Sam Altman.
[MUSIC PLAYING]
Sam Altman, welcome back to “Hard Fork.”
Thank you.
Sam, it has been just about a year since ChatGPT was released. And I wonder if you have been doing some reflecting over the past year and, kind of, where it has brought us in the development of AI.
Frankly, it has been such a busy year, there has not been a ton of time for reflection.
Well, that’s why we brought you in. We want you to reflect here.
Great, I can do it now. I mean, I definitely think this was the year, so far — there will be maybe more in the future — but the year, so far, where the general average tech person went from taking AI not that seriously to taking it pretty seriously.
Yeah.
And the recompiling of expectations, given that. So I think in some sense, that’s like the most significant update of the year.
I would imagine that for you, a lot of the past year has been watching the world catch up to things that you have been thinking about for some time. Does it feel that way?
Yeah, it does. You know, we kind of always thought, on the inside of OpenAI, that it was strange that the rest of the world didn’t take this more seriously. Like, it wasn’t more excited about it.
I mean, I think if, five years ago, you had explained what ChatGPT was going to be, I would have thought, wow, that — like, that sounds pretty cool. But — and presumably, I could have just looked into it more, and I would have smartened myself up. But I think until I actually used it, as is often the case, it was just hard to know what it was.
Yeah, I actually think we could have explained it, and it wouldn’t have made that much of a difference. We tried. Like, people are busy with their lives. They don’t have a lot of time to sit there and listen to some tech people prognosticate about something that may or may not happen. But you ship a product that people use, get real value out of, and then it’s — and then, it’s different.
Yeah. I remember reading about the early days of the run-up to the launch of ChatGPT, and I think you all have said that you did not expect it to be a hit when it launched.
No, we thought it would be a hit. We didn’t think it’d be like this. We did it because we thought it was going to be a hit. We didn’t think it was going to be, like, this big of a hit.
Right. As we’re sitting here today, I believe it’s the case that you can’t actually sign up for ChatGPT Plus right now. Is that right?
Correct.
Yeah. So what’s that all about?
We never have enough capacity. But at some point, it gets really bad. So over the last 10 days or so, we have done — we’ve, like, done everything we can. We’ve rolled out new optimizations. We’ve disabled some features.
And then, people just keep signing up. It keeps getting slower and slower. And there’s, like, a limit at some point to what you can do there, and you can’t — we just don’t want to offer a bad quality of service.
And so it gets slow enough that we just say, you know what? Until we can make more progress, either with more GPUs or more optimizations, we’re going to put this on hold. Not a great place to be in, to be honest, but it was like the least of several bad options.
Sure. And I feel like in the history of tech development, there often is a moment with really popular products where you just have to close signups for a little while, right?
The thing that’s different about this than others is it’s just — it’s so much more compute-intensive than the world is used to for internet services. So you don’t usually have to do this. Like, usually, by the time you’re at this scale, you’ve, like, solved your scaling bottlenecks.
Yeah. One of the interesting things, for me, about covering all the AI changes over the past year is that it often feels like journalists and researchers and companies are discovering properties of these systems sort of at the same time, all together. I mean, I remember when we had you and Kevin Scott from Microsoft on the show earlier this year around the Bing relaunch.
And you both said something to the effect of, well, to discover what these models are or what they’re capable of, you kind of have to put them out into the world and have millions of people using them. Then, we saw all kinds of crazy but also inspiring things. You had Bing Sydney, but you also had people starting to use these things in their lives. So I guess I’m curious what you feel like you have learned about language models and your language models specifically from putting them out into the world.
So what we don’t want to be surprised by is the capabilities of the model. That would be bad. And we’re not — with GPT 4, for example, we took a long time between finishing the model and releasing it. Red-teamed it heavily, really studied it, did all of the work, internally, externally.
And there’s — I’d say there’s, at least so far — and maybe now, it’s been long enough that we would have — we have not been surprised by any capabilities the model had that we just didn’t know about at all in a way that we were for GPT 3, frankly, sometimes. People found stuff. But what I think you can’t do in the lab is understand how technology and society are going to co-evolve.
So you can say, here’s what the model can do and not do. But you can’t say, like, and here’s exactly how society is going to progress, given that. And that’s where you just have to see what people are doing, how they’re using it.
And that — well, one thing is, they use it a lot. Like, that’s one takeaway that we did not — clearly, we did not appropriately plan for. But more interesting than that is the way in which this is transforming people’s productivity, personal lives, how they’re learning, and how — like, one example that I think is instructive, because it was the first and the loudest, is what happened with ChatGPT and education.
Days — at least weeks, but I think days — after the release of ChatGPT, school districts were falling all over themselves to ban ChatGPT. And that didn’t really surprise us. Like, that, we could have predicted and did predict.
The thing that happened after that, quickly — like, weeks to months — was school districts and teachers saying, hey, actually, we made a mistake. And this is a really important part of the future of education, and the benefits far outweigh the downside. And not only are we unbanning it, we’re encouraging our teachers to make use of it in the classroom.
We’re encouraging our students to get really good at this tool, because it’s going to be part of the way people live. And then, there was a big discussion about what the kind of path forward should be. And that is just not something that could have happened without releasing.
Yeah.
And part — can I say one more thing?
Yeah.
Part of the decision that we made with the ChatGPT release — the original plan had been to do the chat interface and GPT 4 together in March. And we really believe in this idea of iterative deployment. And we had realized that the chat interface plus GPT 4 was a lot. I don’t think we realized quite how much —
Like, too much for society to take in.
So we split it, and we put out — we put it out with GPT 3.5 first, which we thought was a much weaker model. It turned out to still be powerful enough for a lot of use cases. But I think that, in retrospect, was a really good decision and helped with that process of gradual adaptation for society.
Looking back, do you wish that you had done more to, I don’t know, give people a manual to say, here’s how you can use this at school or at work?
Two things. One, I wish we had done something intermediate between the release of 3.5 and the API and ChatGPT. Now, I don’t know how well that would have worked, because I think there was just going to be some moment where it went, like, viral in the mind of society.
And I don’t know how incremental that could have been. That’s sort of a “either it goes like this or it doesn’t” kind of thing. And I think — I have reflected on this question a lot.
I think the world was going to have to have that moment. It was better sooner than later. It was good we did it when we did. Maybe we should have tried to push it even a little earlier. But it’s a little chancey about when it hits, and I think only a consumer product could have done what happened there.
Now, the second thing is, should we have released more of a how-to manual? And honestly, I don’t think so. I think we could have done some things there that would have been helpful, but I really believe that it’s not optimal for tech companies to tell people, like, here is how to use this technology and here’s how to do whatever. And the organic thing that happened there actually was pretty good.
Yeah. I’m curious about the thing that you just said about, we thought it was important to get this stuff into folks’ hands sooner rather than later. Say more about why that is.
More time for our institutions and leaders to adapt and understand, for people to think about what the next version of the model should do, what they’d like, what would be useful, what would not be useful, what would be really bad, how society and the economy need to co-evolve.
Like, the thing that many people in the field or adjacent to the field have advocated or used to advocate for, which I always thought was super bad, was, this is so disruptive, such a big deal. It’s got to be done in secret by the small group of us that can understand it. And then, we will fully build the AGI and push a button all at once when it’s ready. And I think that’d be quite, quite bad.
Yeah, because it would just be way too much change too fast.
Yeah. Again, society and technology have to co-evolve, and people have to decide what’s going to work for them and how they want to use it. And you can criticize OpenAI about many, many things, but we do try to really listen to people and adapt it in ways that make it better or more useful. And I think we’re able to do that. But we wouldn’t get it right without that feedback.
Yeah.
I want to talk about AGI and the path to AGI later on. But first, I want to just define AGI and have you talk about where we are on the continuum. So —
I think it’s a ridiculous and meaningless term.
Yeah?
So I apologize that I keep using it. It’s, like, deep in the muscle memory.
I mean, I just never know what people are talking about when they’re talking —
No one else does either. They mean, like, really smart AI.
Yeah, so it stands for Artificial General Intelligence, and you could probably ask 100 different AI researchers, and they would give you 100 different definitions of what AGI is. Researchers at Google DeepMind just released a paper this month that sort of offers a framework. They have five levels.
Level — I guess they have levels ranging from level 0, which is no AI, all the way up to level 5, which is superhuman. And they suggest that currently, ChatGPT, Bard, LLaMA 2, are all at level 1, which is equal to or slightly better than an unskilled human. Would you agree with that? Where are we — if you say this is a term that means something and you define it that way, how close are we?
Um, I think the thing that matters is the curve and the rate of progress. And there’s not going to be some milestone that we all agree, like, OK, we’ve passed it, and now, it’s called AGI. Like, what I would say is, we currently have systems that are — like, there will be researchers who will write papers like that.
You know, academics will debate it, and people in industry will debate it. And I think most of the world just cares, like, is this thing useful to me or not? And we currently have systems that are somewhat useful, clearly.
Like, and you know, whether we want to say, like, it’s a level 1 or 2, I don’t know, but people use it a lot, and they really love it. There’s huge weaknesses in the current systems. But it doesn’t mean that — I’m a little embarrassed by GPTs, but people still like ‘em.
And that’s good. Like, it’s nice to do useful stuff for people. So yeah, call it a level 1. Doesn’t bother me at all. I am embarrassed by it. We will make them much better. But at their current state, they are still, like, delighting people and being useful to people.
Yeah, I also think it underrates them slightly to say that they’re just better than unskilled humans. Like, when I use ChatGPT, it is better than skilled humans for some —
At some things, and worse than unskilled — worse than any human in many other things.
But I guess this is one of the questions that people ask me the most — and, I imagine, ask you — is like, what are today’s AI systems useful and not useful for doing?
I would say the main thing that they’re bad at — well, many things, but one that is on my mind a lot is they’re bad at reasoning. And a lot of the valuable human things require some degree of complex reasoning. But they’re good at a lot of other things.
Like, GPT 4 is vastly superhuman in terms of its world knowledge. Like, it knows — there’s a lot of things in there. And it’s just, it’s very different than how we think about evaluating human intelligence.
So it can’t do these basic reasoning tasks. On the other hand, it knows more than any human has ever known. On the other hand, again, sometimes it totally makes stuff up in a way that a human would not.
But if you are using it to be a coder, for example, it can hugely increase your productivity. And there’s value there, even though it has all of these other weak points. If you are a student, you can learn a lot more than you could without using this tool in some ways. Value there, too.
Let’s talk about GPTs, which you announced at your recent developer conference. For those who haven’t had a chance to use one yet, Sam, what’s a GPT?
It’s like a custom version of ChatGPT that you can get to behave in a certain way. You can give it limited ability to do actions. You can give it knowledge to refer to. You can say, like, act this way. It’s super easy to make, and it’s a first step towards more powerful AI systems and agents.
We’ve had some fun with them on the show. There’s a “Hard Fork” bot that you can ask about anything that’s happened on any episode of the show. It works pretty well, we found, when we did some testing. But I want to talk about where this is going. What is the GPTs that you’ve released a first step toward?
AIs that can accomplish useful tasks. Like, the — I think we need to move towards this with great care. We don’t — I think it would be a bad idea to just, like, turn powerful agents loose on the internet to act for you.
But AIs that can act on your behalf to do something with a company that can access your data, that can help you be good at a task — I think that’s going to be an exciting way we use computers. Like, we have this belief that we’re heading towards a vision where there are new interfaces, new user experiences possible, because finally, the computer can understand you and think. And so the sci-fi vision of a computer that you just, like, tell what you want and it figures out how to do it — this is a step towards that.
Right now, I think what’s holding a lot of people back — a lot of companies and organizations — from using this kind of AI in their work is that it can be unreliable, it can make up things, it can give wrong answers, which is fine if you’re doing creative writing assignments, but not if you’re a hospital or a law firm or something else with big stakes.
How do we solve this problem of reliability? And do you think we’ll ever get to the low fault tolerance that is needed for these really high-stakes applications?
So first of all, I think this is, like, a great example of people understanding the technology, making smart decisions with it. Society and the technology co-evolving together. Like, what you see is that people are using it where appropriate and where it’s helpful, and not using it where you shouldn’t.
And for all of the fear that people have had, both users and companies seem to really understand the limitations and are making appropriate decisions about where to roll it out. It — the kind of controllability, reliability, whatever you want to call it — that is going to get much better. I think we’ll see a big step forward there over the coming years.
And — and I think that there will be a time. I don’t know if it’s, like, 2026, 2028, 2030, whatever, but there will be a time where we just don’t talk about this anymore.
Yeah? It seems to me, though, that that is something that becomes very important to get right in the — as you build these more powerful GPTs, right? Once I tell — like, I would love to have a GPT be my assistant, go through my emails, hey, don’t forget to respond to this before the end of the day —
The reliability has got to be way up before that happens.
Yeah, yeah. That makes sense. You mentioned, as we started to talk about GPTs, that you have to do this carefully. For folks who haven’t spent as much time reading about this, explain what are some things that could go wrong. You guys are obviously going to be very careful with this. Other people who are going to build GPT-like things might not put the same kind of controls in place. So what can you imagine other people doing that you, as the CEO, would say to your folks, hey, it’s not going to be able to do that?
Well, that example that you just gave, like, if you let it act as your assistant and go, like, send emails, do financial transfers for you, like, it’s very easy to imagine how that could go wrong. But I think most people who would use this don’t want that to happen on their behalf either. And so there’s more resilience to this sort of stuff than people think.
I think that’s right. I mean, for what it’s worth, on the whole — on the hallucination thing, which it does feel like has maybe been the longest conversation that we’ve had about ChatGPT in general since it launched — I just always think about Wikipedia as a resource I use all the time.
And I don’t want Wikipedia to be wrong, but it doesn’t matter all that much if it is. I am not relying on it for life-saving information, right? ChatGPT, for me, is the same, right? It’s like, hey, I mean, it’s great for just kind of bar trivia, like, hey, you know, what’s the history of this conflict in the world?
Yeah, I mean, we want to get that a lot better, and we will. Like, I think the next model will just hallucinate much less.
Is there an optimal level of hallucination in an AI model? Because I’ve heard researchers say, well, you actually don’t want it to never hallucinate, because that would mean making it not creative — that new ideas come from making stuff up. That it’s not necessarily tethered to —
This is why I tend to use the word, “controllability,” and not “reliability.” You want it to be reliable when you want. You want it to — either you instruct it, or it just knows based off of the context, that you are asking a factual query, and you want the 100 percent black-and-white answer.
But you also want it to know when you want it to hallucinate or you want it to make stuff up. As you just said, like, new discovery happens because you come up with new ideas, most of which are wrong, and you discard those and keep the good ones and sort of add those to your understanding of reality. Or if you’re telling a creative story, you want that.
So if these models — like, if these models didn’t hallucinate at all, ever, they wouldn’t be so exciting. They wouldn’t do a lot of the things that they can do. But you only want them to do that when you want them to do that.
And so like, the way I think about it is, like, model capability, personalization, and controllability. And those are, like, the three axes we have to push on. And controllability means no hallucinations when you don’t want them, lots of them when you’re trying to invent something new.
Let’s maybe start moving into some of the debates that we’ve been having about AI over the past year. And actually, I want to start with something that I haven’t heard as much, but that I do bump into when I use your products, which is like, they can be quite restrictive in how you use them. I think, mostly for great reasons, right?
Like, I think you guys have learned a lot of lessons from the past era of tech development. At the same time, I feel like — like, I’ve tried to ask ChatGPT a question about sexual health. I feel like it’s going to call the police on me, right? So I’m just curious how you’ve approached that subject.
Yeah. Look. One thing — no one wants to be scolded by a computer, ever. Like, that is not a good feeling. And so you should never feel like you’re going to have the police called on you. That’s more, like, horrible, horrible, horrible.
We have started very conservative, which I think is a defensible choice. Other people may have made a different one. But again, that principle of controllability — what we’d like to get to is a world where, if you want some of the guardrails relaxed a lot, and that’s — like, you’re not like a child or something — then fine, we’ll relax the guardrails. It should be up to you.
But I think starting super conservative here, although annoying, is a defensible decision. And I wouldn’t have gone back and made it differently. We have relaxed it already. We will relax it much more, but we want to do it in a way where it’s user-controlled.
Yeah. Are there certain red lines you won’t cross, things that you will never let your models be used for, other than things that are obviously illegal or dangerous?
Yeah, certainly things that are illegal and dangerous, we won’t. There’s a lot of other things that I could say, but where those red lines will be depends so much on how the technology evolves that it’s hard to say right now, like, here’s the exhaustive set. We really try to just study the models and predict capabilities as we go, but if we learn something new, we change our plans.
Yeah. One other area where things have been shifting a lot over the past year is in AI regulation and governance. I think a year ago, if you had asked the average congressperson, what do you think of AI, they would have said, what’s that?
“Get out of my office!”
(LAUGHING) Right. We just recently saw the Biden White House put out an executive order about AI. You’ve obviously been meeting a lot with lawmakers and regulators, not just in the US, but around the world. What’s your view of how AI regulation is shaping up?
It’s a really tricky point to get across. What we believe is that on the frontier systems, there does need to be proactive regulation there. But heading into overreach and regulatory capture would be really bad.
And there’s a lot of amazing work that’s going to happen with smaller models, smaller companies, open-source efforts. And it’s really important that regulation not strangle that. So it’s like I’ve sort of become a villain for this, but I think —
You have?
Yeah.
How do you feel about this?
Like, annoyed, but I have bigger problems in my life right now.
But this message of, like, regulate us, regulate the really capable models that can have significant consequences, but leave the rest of the industry alone, is just — it’s a hard message to get across. Sure.
Here is an argument that was made to me by a high-ranking executive at a major tech company as some of this debate was playing out. This person said to me that there are, essentially, no harms that these models can cause that the internet itself doesn’t enable, right?
And that to do the sort of reporting that is proposed in this executive order, to have to inform the Biden administration, is just essentially pulling up the ladder behind you and ensuring that the folks who’ve already raised the money can reap all of the profits of this new world and leave the little people behind. So I’m curious what you make of that argument.
I disagree with it on a bunch of levels. First of all, I wish the threshold for when you do have to report was set differently and based off of evals and capability thresholds.
Not FLOPS?
Not FLOPS.
OK.
But there’s no small company training with that many FLOPS anyway, so that’s a little bit —
Yeah.
For the listener who maybe didn’t listen to our last episode about —
Listen to your FLOPS episode!
— the FLOPS are the sort of measure of the amount of computing that is used to train these models. The executive order says if you’re above a certain computing threshold, you have to tell the government that you’re training a model that big.
Yeah. But no small effort is training at 10-to-the-26th FLOPS. Currently, no big effort is either. So that’s a dishonest comment. Second of all, the burden of just saying, like, here’s what we’re doing, is not that great.
But — third of all, the underlying thing there — there’s nothing you can do here that you couldn’t already do on the internet. That’s the real — either dishonesty or lack of understanding. You could maybe say, with GPT 4, that you can’t do anything you can’t do on the internet.
But I don’t think that’s really true, even at GPT 4. Like, there are some new things. And GPT 5 and 6, there will be very new things. And saying that we’re going to be cautious and responsible and have some testing around that — I think that’s going to look more prudent in retrospect than it maybe sounds right now.
I’d say, for me, these seem like the absolute gentlest regulations you could imagine. It’s like, tell the government and report on any safety testing you did?
Seems reasonable.
Yeah.
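To put that 10-to-the-26th FLOPS reporting threshold in rough perspective, here is a minimal back-of-the-envelope sketch. It relies on the common heuristic that training a dense transformer costs roughly 6 times parameters times training tokens in FLOPs; that heuristic, and the illustrative model sizes and token counts below, are assumptions for the sake of the example, not figures from the episode or from OpenAI.

```python
# Back-of-the-envelope estimate of training compute versus the executive
# order's reporting threshold. The 6 * N * D rule of thumb and the example
# model sizes below are illustrative assumptions, not OpenAI figures.

EO_REPORTING_THRESHOLD = 1e26  # FLOPs threshold discussed in the episode


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer (~6 * N * D)."""
    return 6 * params * tokens


examples = [
    ("hypothetical 7B-parameter model, 2T tokens", 7e9, 2e12),
    ("hypothetical 70B-parameter model, 2T tokens", 70e9, 2e12),
    ("hypothetical 1T-parameter model, 10T tokens", 1e12, 1e13),
]

for name, params, tokens in examples:
    flops = training_flops(params, tokens)
    side = "above" if flops > EO_REPORTING_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e26 threshold)")
```

Under these assumed numbers, even the largest example lands just below the threshold, which is consistent with Sam's point that no current training run, big or small, sits at 10-to-the-26th FLOPS yet.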
I mean, people are not just saying that these fears about AI and sort of existential risk are unjustified. Some people, some of the more vocal critics of OpenAI, have said that OpenAI — that you specifically are lying about the risks of human extinction from AI, creating fear so that regulators will come in and make laws or issue executive orders that prevent smaller competitors from being able to compete with you.
Andrew Ng, who is, I think, one of your professors at Stanford, recently said something to this effect. What’s your response to that? I’m curious if you have thoughts about that.
Yeah. Like, I actually don’t think we’re all going to go extinct. I think it’s going to be great. I think we’re, like, heading towards the best world ever. But when we deal with a dangerous technology as a society, we often say that we have to confront and successfully navigate the risks to get to enjoy the benefits.
And that’s a pretty consensus thing. I don’t — I don’t think that’s a radical position. I can imagine that if this technology stays on the same curve, there are systems that are capable of significant harm in the future. And you know, like, Andrew also said not that long ago that he thought it was, like, totally irresponsible to talk about AGI, because it was just never happening.
I think he compared it to worrying about overpopulation on Mars.
And I think now, he might say something different. So, like, it’s —
humans are very bad at having intuition for exponentials. Again, I think it’s going to be great. Like, I wouldn’t work on this if I didn’t think it was going to be great.
People love it already, and I think they’re going to love it a lot more. But that doesn’t mean we don’t need to be responsible and accountable and thoughtful about what the downsides could be. And in fact, I think the tech industry often has only talked about the good and not the bad. And that doesn’t go well either.
The exponential thing is real. I have dealt with this. I’ve talked about the fact that I was only using GPT 3.5 until a few months ago, and finally, at the urging of a friend, upgraded. And I thought —
I would have given you a free upgrade. I’m sorry you waited.
(LAUGHING) I should have asked.
But it’s a real improvement.
It is a real improvement, and not just in the sense of, oh, the copy that it generates is better. It actually transformed my sense of how quickly the industry was moving. It made me think, oh, like, the next generation of this is going to be radically better. And so I think that part of what we’re dealing with is just that it has not been widely distributed enough to get people to reckon with the implications.
I disagree with that. I mean, I think that maybe the tech experts say, like, oh, this is not a big deal, whatever. But most of the world — anyone who has used even the free version — is like, oh, man, they got real AI.
Yeah. Yeah, and you went around the world this year, talking to people in a lot of different countries. I’d be curious to what extent that informed what you just said.
Significantly. I mean, I had a little bit of a sample bias, right? Because the people that wanted to meet me were probably pretty excited. But you do get a sense. And there’s quite a lot of excitement — maybe more excitement in the rest of the world than the US.
Sam, I want to ask you about something else that people are not happy about when it comes to these language and image models, which is this issue of copyright. I think a lot of people look at what OpenAI and other companies did — hoovering up work from across the internet, using it to train these models that can, in some cases, output things that are similar to the work of living authors or writers or artists — and they just think, like, this is the original sin of the AI industry, and we are never going to forgive them for doing this.
What do you think about that? And what would you say to artists or writers who just think that this was a moral lapse? Forget about the legal, whether you’re allowed to do it or not. That it was just unethical for you and other companies to do that in the first place.
Well, we block that stuff. Like, you can’t go to DALL-E and generate some — I mean, you get — speaking of being annoyed, like, we may be too aggressive on that. But I think — I think it’s the right thing to do until we figure out some sort of economic model that works for people.
And we’re doing some things there now, but we’ve got more to do. Other people in industry do allow quite a lot of that. And I get why artists are annoyed.
I guess I’m talking less about the output question than just the act of taking all of this work, much of it copyrighted, without the explicit permission of the people who created it and using it to train these models. Do you think that — what would you say to the people who just say, Sam, that was the wrong move, you should have asked, and we will never forgive you for it?
Well, first of all, I always have empathy for people who are like, hey, you did this thing, and it’s affecting me, and we can talk about it first, or it was just a new thing. Like, the — I do think that in the same way humans can read the internet and learn, AI should be allowed to read the internet and learn. It shouldn’t be regurgitating, shouldn’t be violating any copyright laws.
But if we’re really going to say that AI doesn’t get to read the internet and learn, and if you read a physics textbook and learn how to do a physics calculation, now, every time you do that in the rest of your life, like, you got to figure out how to —
that seems not a good solution to me.
But on individuals’ private work, under — yeah, we try not to train on that stuff. We really don’t want to be here upsetting people. Again, I think other people in the industry have taken different approaches. And we’ve also done some things that I think, now that we understand more, we will do differently in the future.
Like, what we’d do differently — we want to figure out new economic models, so that, say, if you’re an artist, we don’t just totally block you. We don’t just not train on your data — a lot of artists also say, no, I want this in here, I want whatever.
But we have a way to help share revenue with you. GPTs are maybe going to be an interesting first example of this. Because people will be able to put private data in there and say, hey, use this version, and there can be a revenue share around it.
I feel like that might be a good place to take a break, and then come back and talk about the future.
Yes. Let’s take a break.
[MUSIC PLAYING]
Well, I had one question about the future that kind of came out of what we were talking about before the break — and it’s so big, but I truly need to hear your thoughts on this — which is, what is the future of the internet as ChatGPT rises? And the reason I ask is, I now have a hotkey on my computer that I type when I want to know something. And it accesses ChatGPT directly through software called Raycast.
And because of this, I am using Google search not nearly as much. I am visiting websites not nearly as much. That has implications for all the publishers and for, frankly, just the model itself.
Because presumably, if the economics change, there will be fewer web pages created. There’s less data for ChatGPT to access. So I’m just curious what you have thought about the internet in a world where your product succeeds in the way you want it to.
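As an aside for readers curious what “accessing ChatGPT directly” looks like under the hood: a hotkey tool like Raycast ultimately just sends the query to the model’s API. Here is a minimal sketch using OpenAI’s Python SDK; the model name and prompt are placeholders, and this is not how Raycast itself is implemented.

```python
# Minimal sketch of querying a chat model directly instead of doing a web
# search. Assumes the OpenAI Python SDK (v1) and an OPENAI_API_KEY set in the
# environment; the model name and question are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any available chat model
    messages=[
        {"role": "user", "content": "What is retrieval-augmented generation, in one paragraph?"}
    ],
)

print(response.choices[0].message.content)
```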
I do think, if this all works, it should really change how we use the internet. There are a lot of things that the current interface is, like, perfect for. If you want to mindlessly watch TikTok videos, perfect.
But if you’re trying to get information or get a task accomplished, it’s actually quite bad, relative to what we should all aspire for. And you can totally imagine a world where you have a task that, right now, takes, like, hours of stuff, clicking around the internet, and bringing stuff together.
And you just ask ChatGPT to do one thing, and it goes off and computes, and you get the answer back. And I’ll be disappointed if we don’t use the internet differently.
Yeah. Do you think that the economics of the internet as it is today are robust enough to withstand the challenge that AI poses?
Probably.
OK.
What do you think?
Well, I worry in particular about the publishers. The publishers have been having a hard time already for a million other reasons. But to the extent that they’re driven by advertising and visits to web pages, and to the extent that the visits to the web pages are driven by Google search in particular, a world where web search is just no longer the front page to most of the internet, I think, does require a different kind of web economics.
I think it does require a shift, but I think the value is — so what I thought you were asking about was like, is there going to be enough value there for some economic model to work. And I think that’s definitely going to be the case. Yeah, the model may have to shift.
I would love it if ads become less a part of the internet. Like, I was thinking the other day, like — I just had this, like, for whatever reason, this thought in my head as I was browsing around the internet, being like, there’s more ads than content, everywhere.
I was reading a story today, scrolling on my phone, and I managed to get it to a point where between all of the ads on my relatively large phone screen, there was one line of text from the article visible.
You know, one of the reasons I think people like ChatGPT, even if they can’t articulate it, is we don’t do ads —
Yeah.
— like, as an intentional choice. Because there’s plenty of ways you could imagine us putting ads.
Totally.
But we made the choice that ads-plus-AI can get a little dystopic. We’re not saying never. Like, we do want to offer a free service. But a big part of our mission fulfillment, I think, is if we can continue to offer ChatGPT for free at a high quality of service to anybody who wants it and just say, like, hey, here’s free AI, and good free AI — and no ads. Because I think that really does — especially as the AI gets really smart, that really does get a little strange.
Yeah. Yeah, yeah.
I know we talked about AGI and it not being your favorite term, but it is a term that people in the industry use as sort of a benchmark or a milestone or something that they’re aiming for. And I’m curious what you think the barriers between here and AGI are. Maybe, let’s define AGI as sort of a computer that can do any cognitive task that a human can.
If it — let’s say we make an AI that is really good, but it can’t go discover novel physics. Would you call that an AGI?
I probably would. Yeah.
You would. OK.
Would you?
Uh, well, again, I don’t like the term, but I wouldn’t call that, we’re done with the mission. I’d say we still got a lot more work to do.
The vision is to create something that is better than humans at doing original science — something that can invent, can discover —
Well, I am a believer that all real sustainable human progress comes from scientific and technological progress. And if we can have a lot more of that, I think it’s great. And if the system can do things that we, unaided on our own, can’t, just even as a tool that helps us go do that, then I will consider that a massive triumph and happily be — I can happily retire at that point.
Mm-hmm.
But before that, I can imagine that we do something that creates incredible economic value but is not the kind of AGI, superintelligence, whatever-you-want-to-call-it thing that we should aspire to.
Right. What are some of the barriers to getting to that place where we’re doing novel physics research?
Um —
And keep in mind, Kevin and I don’t know anything about technology and —
That seems unlikely to be true.
(LAUGHING) Well, if you start talking about retrieval augmented generation or anything, like, I might —
I’ll follow, but you’ll lose Casey.
You’ll follow. Yeah.
We talked earlier about just the model’s limited ability to reason. And I think that’s one thing that needs to be better. The model needs to be better at reasoning.
Like, GPT 4 — an example of this that my co-founder, Ilya, uses sometimes, which has really stuck in my mind, is that there was a time in Newton’s life when the right thing for him to do —
You’re talking, of course, about Isaac Newton, not my life.
Isaac Newton.
OK.
Well, maybe you, too.
But maybe my life. We’ll find out. Stay tuned.
When the right thing for him to do was to read every math textbook he could get his hands on. He should, like, talk to every smart professor or talk to his peers, do problem sets, whatever. And that’s kind of what our models do today.
And at some point, he was never going to invent calculus doing that — calculus didn’t exist in any textbook. At some point, he had to go think of new ideas, and then test them out and build them and whatever else. And that phase, that second phase, we don’t do yet. And I think you need that before it’s something we want to call an AGI.
Yeah. One thing that I hear from AI researchers is that a lot of the progress that has been made over the past, call it five years, in this type of AI has been just the result of things getting bigger, right? Bigger models, more compute. Obviously, there’s work around the edges in how you build these things that makes them more useful.
But there hasn’t really been a shift on the architectural level of the systems that these models are built on. Do you think that is going to remain true? Or do you think that we need to invent some new process or new mode or new technique to get through some of these barriers?
We will need new research ideas, and we have needed them. I don’t think it’s fair to say there haven’t been any here. I think a lot of the people who say that are not the people building GPT 4, but they’re the people sort of opining from the sidelines.
But — but there is some kernel of truth to it. And the answer is, we have — OpenAI has a philosophy of, we will just do whatever works. Like, if it’s time to scale the models and work on the engineering challenges, we’ll go do that.
If now, we need a new algorithm breakthrough, we’ll go work on that. If now, we need a different kind of data mix, we’ll go work on that. So, like, we just do the thing in front of us. And then, the next one, and then the next one, then the next one.
And there are a lot of other people who want to write papers about level 1, 2, 3, and whatever, and there are a lot of other people who want to say, well, it’s not real progress. They just made this incredible thing that people are using and loving. And it’s not real sci — like, but our belief is like, we will just do whatever we can to usefully drive the progress forward. And we’re kind of open-minded about how we do that.
What is superalignment? You all just recently announced that you are devoting a lot of resources and time and computing power to superalignment, and I don’t know what it is. So can you help me understand?
It’s alignment that comes with sour cream and guacamole.
There you go.
San Francisco taco shop. That’s a very San Francisco-specific joke, but it’s pretty good. I’m sorry. Go ahead, Sam.
I don’t — can I leave it at that? I don’t really wanna follow — I mean, that was such a good answer.
No, so alignment is how you get these models to behave in accordance with what the human who’s using them wants. And superalignment is how you do that for supercapable systems. So we know how to align GPT 4 pretty well — like, better than people thought we were going to be able to do.
There’s this — when we put out GPT 2 and 3, people were like, oh, it’s irresponsible research, because this is always going to just, like, spew toxic shit. You’re never going to get it. And it actually turns out, like, we’re able to align GPT 4 reasonably well. Maybe too well.
Yeah. It’s — I mean, good luck getting it to talk about sex, is my official comment about ChatGPT 4. [LAUGHS]
But that’s — in some sense, that’s an alignment failure, because that’s — it’s not doing what you want it to do.
Yeah.
So — but now, we have that — now, we have the social part of the problem. We can technically do it.
Right.
But we don’t yet know what the new challenges will be for much more capable systems. And so that’s what that team research is.
So, like, what kinds of questions are they investigating, or what research are they doing? Because I confess, I sort of — I lose my grounding in reality when you start talking about supercapable systems and the problems that can emerge with them. Is this sort of a theoretical future forecasting team?
Well, they try to do work that is useful today, but for the theoretical systems of the future. So they all have their first result coming out, I think, pretty soon. But yeah, they’re interested in these questions of, as the systems get more capable than humans, what is it going to take to reliably solve the alignment challenge?
And I mean, this is the stuff where my brain does feel like it starts to melt as I ponder the implications, right? Because you’ve made something that is smarter than every human, but you, the human, have to be smart enough to ensure that it always acts in your interests, even though, by definition, it is way smarter.
We need some help there.
Yeah, I do want to stick on this issue of alignment or superalignment, because I think there’s an unspoken assumption in there that — well, you just put it as, alignment is what the user wants it to behave like. And obviously, there are a lot of users with good intentions.
No, no, yeah, it has to be like what society and the user can intersect on. There are going to have to be some rules here.
And I guess, where do you derive those rules? Because if you’re Anthropic, you use the UN Declaration of Human Rights and the Apple terms of service, and that becomes —
The two most important documents —
— in rights governance.
But if you’re not just going to borrow someone else’s rules, how do you decide which values these things should align themselves to?
So we’re doing this thing — we’ve been doing these democratic-input-to-governance grants, where we’re giving different research teams money to go off and study different proposals. And there are some very interesting ideas in there about how to fairly decide that.
The naive approach to this that I have always been interested in — maybe we’ll try it at some point — is, what if you had hundreds of millions of ChatGPT users spend an hour, a few hours a year, answering questions about what they thought the default settings should be, what the wide bounds should be? Eventually, you need more than just ChatGPT users. You need the whole world represented in some way, because even if you’re not using it, you’re still impacted by it.
But to start, what if you literally just had ChatGPT chat with its users? I think it would be very important in this case to let the users make the final decisions, of course. But you could imagine it saying, like, hey, you answered this question this way.
Here’s how this would impact other users in a way you might not have thought of. If you want to stick with your answer, that’s totally up to you, but are you sure, given this new data? And then, you could imagine GPT 5 or whatever just learning that collective preference set. And I think that’s interesting to consider.
Yeah. I want to —
Better than the Apple terms of service, let’s say.
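To make the “collective preference set” idea slightly more concrete, here is a toy sketch of turning user answers into a default setting. The question, the answer options, and the simple majority-vote rule are all hypothetical simplifications of the process Altman describes, which would involve deliberation and letting users revise their answers.

```python
from collections import Counter

# Toy sketch: each user answers a question about how the model should handle a
# category of requests by default, and the most common answer becomes the
# default. A deliberate oversimplification of the democratic-input idea above.
user_answers = {
    "user_001": "refuse",
    "user_002": "warn_then_answer",
    "user_003": "warn_then_answer",
    "user_004": "answer_directly",
    "user_005": "warn_then_answer",
}


def collective_default(answers: dict) -> str:
    """Return the most common answer as the collective default setting."""
    most_common_answer, _count = Counter(answers.values()).most_common(1)[0]
    return most_common_answer


print(collective_default(user_answers))  # -> "warn_then_answer"
```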
[CHUCKLES]: I want to ask you about this feeling. Kevin and I call it “AI vertigo.” Is this a widespread term that people use?
No, I think you invented this.
So there is this moment when you contemplate, even just kind of the medium AI future, you start to think about what it might mean for the job market, your own job, your daily life, or society. And there is this kind of dizziness that I find sets in.
This year, I actually had a nightmare about AGI. And then, I sort of asked around, and I feel like people who work on this stuff — like, that’s not uncommon. I wonder, for you, if you have had these moments of vertigo, if you continue to have them. Or is there at some point where you think about it long enough, that you feel like you get your legs underneath you?
I think I used to have — I mean, there were some — I can point to these moments, but there were some very strange, like, extreme vertigo moments, particularly around the launch of GPT 3. But you do get your legs under you.
Yeah. What —
And I think the future will somehow be less different than we think. Like, it’s this amazing thing to say, right? Like, we invent AGI, and it matters less than we think. It doesn’t sound like a sentence that parses. And yet, it’s what I expect to happen.
Why is that?
There’s a lot of inertia in society, and humans are remarkably adaptable to any amount of change.
Hmm. One question I get a lot, that I imagine you do, too, is from people who want to know what they can do. You mentioned adaptation as being necessary on a societal level. I think for many years, the conventional wisdom was that if you wanted to adapt to a changing world, you should learn how to code. That was like the classic advice.
It may not be such good advice anymore.
Exactly. So now, AI systems can code pretty well. For a long time, the conventional wisdom was that creative work was sort of untouchable by machines. If you were a factory worker, you might get automated out of your job. But if you were an artist or a writer, that was impossible for computers to do.
Now, we see that’s no longer safe. So where is this sort of high ground here? Like, where can people focus their energy if they want skills and abilities that AI is not going to be able to replace?
My answer is — my meta answer is, you always — it’s always the right bet to just get good at the most powerful new tools, most capable new tools. And so when computer programming was that, you did want to become a programmer. And now that AI tools totally change what one person can do, you want to get really good at using AI tools.
And so, like, having a sense for how to work with ChatGPT and other things — that is the high ground. And that’s like — that’s — we’re not going back. Like, that’s going to be part of the world. And you can use it in all sorts of ways, but getting fluent at it, I think, is really important.
I want to challenge that. Because I think you’re partially right in that I think there is an opportunity for people to embrace AI and sort of become more resilient to disruption that way. But I also think if you look back through history, it’s not like we learn how to do something new, and then the old way just goes away, right?
We still make things by hand. There’s still an artisanal market. So do you think there’s going to be people who just decide, you know what? I don’t want to use this stuff.
Totally.
And there’s going to be something valuable in their, sort of — I don’t know — non-AI-assisted work?
I expect that if we look forward to the future, things that we want to be cheap can get much cheaper, and things that we want to be expensive are going to be astronomically expensive.
Like what?
Real estate, handmade goods, art. And so, totally, there will be a huge premium on things like that. Even when machine-made products have been much better, there has always been a premium on handmade products. And I’d expect that to intensify.
This is also a bit of a curveball. Very curious to get your thoughts. Where do you come down on the idea of AI romances? Are these net good for society?
I don’t want one, personally.
You don’t want one. OK. But it’s clear that there is a huge demand for this, right? Yeah. Like, I think that — I mean, you know, Replika is building these. They seem like they’re doing very well. I would be shocked if this is not a multi-billion-dollar company, right?
Someone will —
That’s what I’m saying. Yeah, somebody will. Yeah.
For sure.
Yeah. Do you — like, I just personally think we’re going to have a big culture war. Like, I think Fox News is going to be doing segments about the generation lost to AI girlfriends and boyfriends at some point within the next few years. But at the same time, you look at all the data on loneliness, and it seems like, well, if we can give people companions that make them happy during the day, it could be a net-good thing.
It’s complicated.
Yeah.
I have misgivings, but I don’t — this is not a place where I think I get to impose what I think is good on other people.
Totally. But OK, it sounds like building the boyfriend API is not at the top of your product roadmap.
No.
All right.
You recently posted on X that you expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes. Can you expand on that? What are some things that AI might become very good at persuading us to do, and what are some of those strange outcomes you’re worried about?
The thing I was thinking about at that moment was the upcoming election. There’s a huge focus on the US 2024 election. There’s a huge focus on deepfakes and the impact of AI there.
And I think that’s reasonable to worry about, good to worry about. But we already have some societal antibodies towards people seeing, like, doctored photos or whatever. And yeah, they’re going to get more compelling. It’s going to be more. But we kind of know that those are there.
There’s a lot of discussion about that. There’s almost no discussion about what the new things are that AI tools can do to influence an election. And one of those is to carefully, one-on-one, persuade individual people.
Tailored messages.
Tailored messages. That’s a new thing that the content farms couldn’t quite do.
Right. And that’s not AGI, but that could still be pretty harmful.
I think so, yeah.
I know we are running out of time, but I do want to push us a little bit further into the future than the sort of — I don’t know — maybe five-year horizon we’ve been talking about. If you can imagine a good post-AGI world, a world in which we have reached this threshold, whatever it is, what does that world look like? Does it have a government? Does it have companies? What do people do all day?
Like, a lot of material abundance. People are — people continue to be very busy, but the way we define work always moves. Like, if you — our jobs would not have seemed like real jobs to people several hundred years ago, right?
This would have seemed like incredibly silly entertainment. It’s important to me. It’s important to you. And hopefully, it has some value to other people as well.
There will be — and the jobs of the future may seem — I hope they seem even sillier to us. But I hope the people get even more fulfillment, and I hope society gets even more fulfillment out of them. But everybody can have a really great quality of life, like, to a degree that I think we probably just can’t imagine now. Of course, we’ll still have governments. Of course, people will still squabble over whatever they squabble over. Less different in all of these ways than someone would think. And then, like, unbelievably different in terms of what you can get a computer to do for you.
One fun thing about becoming a very prominent person in the tech industry as you are is that people have all kinds of theories about you. One fun one that I heard the other day is that you have a secret Twitter account where you are way less measured and careful.
I don’t anymore. I did for a while. I decided I just couldn’t keep up with the OpSec.
It’s so hard to lead a double life.
What was your — what was your secret Twitter account?
Obviously, I can’t. I mean, I had a good alt. A lot of people have good alts. But —
Your name is literally Sam “Alt-man.” I mean, it would have been weird if you didn’t have one.
But I think I just got too — like, too well-known or something to be doing that.
Yeah. Well, and the theory that I heard attached to this was that you are secretly an accelerationist, a person who wants AI to go as fast as possible, and that all this careful diplomacy that you’re doing and asking for regulation — this is really just the sort of polite face that you put on for society, but deep down, you just think we should go all gas, no brakes, toward the future.
No, I certainly don’t think all gas, no brakes to the future, but I do think we should go to the future. And that probably is what differentiates me from, like, most of the AI companies: I think AI is good. Like, I don’t secretly hate what I do all day. I think it’s going to be awesome.
Like, I want to see this get built. I want people to benefit from this. So all gas, no brake — certainly not. And I don’t even think most people who say it mean it. But I am a believer that this is a tremendously beneficial technology and that we have got to find a way, safely and responsibly, to get it into the hands of the people, to confront the risks, so that we get to enjoy the huge rewards.
And maybe, relative to the prior of most people who work on AI, that does make me an accelerationist. But compared to those accelerationist people, I’m clearly not them. So you know, I’m somewhere — I think you want the CEO of this company to be somewhere —
You’re accelerationist-adjacent.
— in the middle, which I think I am.
You’re gas and brakes.
I believe this will be the most important and beneficial technology humanity has yet invented. And I also believe that if we’re not careful about it, it can be quite disastrous. And so we have to navigate it carefully.
Yeah.
Yeah.
Sam, thanks so much for coming on “Hard Fork.”
Thank you guys.
When we come back, we’ll have some notes on that interview, now with the benefit of hindsight.
[MUSIC PLAYING]
So Casey, now, with five days of hindsight on this interview and after everything that has transpired between the time that we originally recorded it and now, are there any things that Sam said that stuck out to you as being particularly relevant to understanding this conflict?
I keep coming back to the question that you asked him about whether he was a closet accelerationist. Right? Is he somebody who is telling the world, hey, I’m trying to do this in a very gradual, iterative way, but behind the scenes, is working to hit the accelerator? And during the interview, he gave a sort of very diplomatic answer, as you might expect, to that question.
But learning what we have learned over the past few days, I do feel like he is on the more accelerationist side of things. And certainly, all of the people rallying to his defense on social media over the weekend — a good number of them were rallying because they think he is the one who is pushing AI forward. How about you? What did you think?
Totally. I thought that was very interesting and, now, with the additional context of the last three days, explains a lot about the conflict between Sam Altman and the board. We still don’t, obviously, know exactly what happened. But I can imagine that Sam going around saying things like, I think that the future is going to be amazing, and I think that everything’s going to be great with AI — I can see why that would land poorly with board members who are much more concerned, from the looks of things, about how the future is going to look.
So it seems like he is sort of an optimist who is running a company where the board of that company is less optimistic about AI. And that just seems like a fundamental tension that it sounds like they were not able to get past. I was also struck by something else that he said.
It was interesting. When we talked about GPTs, these build-your-own chat bots that OpenAI released at Dev Day a few weeks ago, he said that he was embarrassed, because they were so simple and sort of not all that functional and pretty prosaic. And that’s just such a striking contrast, because some of the reporting that came out over the weekend suggested that the GPTs were actually one of the things that scared Ilya Sutskever and the board, that giving these AIs more agency and more autonomy and allowing them to do things on the internet was, at least if you believe the reporting, part of what made the board so anxious.
Yes. And at the same time, if it is true that the board and Ilya found out about GPTs at Developer Day, that speaks to some fundamental problems in how this company was being run. And I don’t know if that is a Sam thing or a board thing or what, but you would think that by the time the keynote was being delivered, all of those stakeholders would have been looped in.
Totally. And I guess my other reflection on that interview is that it just sounded like Sam had no idea that any of this was brewing. This did not sound like someone who was trying to carefully walk the line between being optimistic and being scared of existential risk. This did not sound like someone who thought that he was on thin ice with his board. This sounded like someone who was very confidently charging ahead with his vision for the future of AI.
That’s right.
I really hope we are not doing more emergency podcasts on this. Could the news just give us a little break for a minute?
Well, if I were you, Kevin, I would clear your Tuesday morning.
[LAUGHS]: Oh, god. Happy Thanksgiving.
Happy Thanksgiving!
“Hard Fork” is produced by Rachel Cohn and Davis Land. We’re edited by Jen Poyant. Today’s show is engineered by Rowan Niemisto, original music by Marion Lozano, Rowan Niemisto, and Dan Powell.
Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.
[MUSIC PLAYING]