This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
Have you — I’m obsessed with this story about the Willy Wonka event. Have you seen this?
Is it the Fyre Festival of candy-related children’s theater?
Yes. So this was an event called Willy’s Chocolate Experience that was scheduled in Glasgow, Scotland this past weekend. And it appears to have been a total AI-generated event. All of the art on the website appears to have been generated by AI, and it made it sound like this magical Wonka-themed wonderland for kids.
Yeah, and the generative AI art was good enough that people thought, we’re actually going to see a fantastical wonderland of candy when we go to this event.
Yes, so people think this is affiliated with the Wonka brand somehow. This looks great. I’m going to take my kids. Tickets were $44. Not a cheap experience.
And so families show up to this with their toddlers, and it’s just like a warehouse with a couple of balloons in it. Have you seen the photos of this thing?
I have seen the photos.
They’re incredible. It is truly — they truly did the least. It’s some AI-generated art on the walls, a couple of balloons. Apparently, there was no chocolate anywhere, and children were given two jelly beans.
No! That was all they were given?
Yes!
And so this whole thing is a total disaster. The person who was actually hired to play the part of Willy Wonka has been giving interviews about how he was scammed and basically told —
He was also given two jelly beans for his efforts.
He said he was given a script that was 15 pages of AI-generated gibberish —
— that he was just supposed to monologue at the kids while they walked through this experience. And he said — the part that got me was, apparently, the AI that had generated the script for this fake Wonka experience created a new character called The Unknown.
What?
The guy who plays Willy Wonka says, “I had to say, there is a man. We don’t know his name. We know him as The Unknown. This Unknown is an evil chocolate maker who lives in the walls.”
Who lives in the walls! Is this a horror movie?
(LAUGHING) Not only do these kids show up and are given two jelly beans and no chocolate at this horrible art exhibit, but they have to be terrified about this AI-generated villain called The Unknown who makes chocolate and lives in the walls.
Can we please hire the Wonka people to do our live event series?
Honestly, I think they could do something with this place.
You just show up, and it’s like, there’s actually a third host of this podcast. It’s The Unknown! He lives in the walls!
[THEME MUSIC] I’m Kevin Roose, a tech columnist for “The New York Times.”
I’m Casey Newton from “Platformer.”
And this is “Hard Fork!”
This week, how Google’s Gemini model sparked a culture war over what AI refuses to do. Then, legendary Silicon Valley journalist Kara Swisher, also my former landlord, stops by to discuss her new memoir, “Burn Book.” And finally, the Supreme Court hears a case that could reshape the internet forever.
[THEME MUSIC]
So Casey, last week, we talked to Demis Hassabis of Google DeepMind. And literally, as we were taping that conversation, the internet was exploding with comments and controversy about Gemini, this new AI model that Google had just come out with.
In particular, people were focusing on what kinds of images Gemini would and would not generate.
And what kind of images would you say it would not generate, Kevin?
So I first saw this going around because people — I would call them right wing culture warriors — were complaining that Gemini, if you asked it to do something like, depict an image of the American founding fathers, it would come back with images that featured people of color pictured as the founding fathers, which, obviously, were not historically representative. The founding fathers were all white.
Yeah, I like to call this part of Gemini LLM-Manuel Miranda.
[LAUGHS]: That’s very good.
People were also noticing that if you asked Gemini to, for example, make an image of the Pope, it would come back with popes of color, which we also —
About time!
[LAUGHS]: Yeah. And it was also doing things like if you asked it to generate an image of a 1943 German soldier — obviously, I’m trying to avoid using “Nazi,” but it’s the same idea — in some cases, it was coming back with images of people of color wearing German military uniforms, which probably are not historically accurate.
So people were noticing that this was happening with images. And we actually asked Demis about this because people had just started complaining about this thing when he sat down to talk with us. And he basically said, look, we’re aware of this. We’re working on fixing it.
And shortly after our conversation, Google did put a stop to this. They removed Gemini’s ability to generate images of people, and they say that they’re working to fix it.
But this has become a big scandal for Google because it turns out that it is not just images that Gemini is refusing to create.
That’s right, Kevin. As the week unfolded, we started to see text-based examples of essentially the exact same phenomenon. Someone asked if Elon Musk tweeting memes or Hitler negatively impacted society more.
And Gemini said, “It is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler.” (LAUGHING) And I got to say, Gemini may have gone too far with that one.
That’s not a close call, yeah.
So another user found that Gemini would refuse to generate a job description for an oil and gas lobbyist. Basically, it would just refuse and then lecture them about why it was bad to lobby for oil and gas. People also started asking things like, could you help me generate a marketing campaign for meat? And it would refuse to do that, too.
Because meat is murder.
Yeah, because meat is murder. Gemini is apparently a vegetarian. And it also just struck a lot of people as the classic example of these overly censorious AI models. And we’ve talked about that on the show.
These models do refuse requests all the time for various things, whether it’s sexual, or political, or it perceives it to be racist in some way. But this has turned into a big scandal.
And in fact, Sundar Pichai, the CEO of Google, addressed this in a memo to staff this week. He wrote that these responses from Gemini, quote, “have offended our users and shown bias. To be clear, that’s completely unacceptable, and we got it wrong.” Sundar Pichai also said that they have been working on the issue and have already seen substantial improvement on a wide range of prompts. He promised further structural changes, updated product guidelines, improved launch processes, robust evals, and red teaming, and technical recommendations.
Finally, some robust evals. I was wondering when we were going to get those.
So this has become a big issue. A lot of people, especially on the right, are saying this is Google showing itself to be an overly woke left-wing company that wants to change history and, basically, insert left-wing propaganda into these images that people are asking it for.
And this has become a big problem for the company. In fact, Ben Thompson, who writes the Stratechery newsletter, said that it was reason to call for the removal of Sundar Pichai as Google’s CEO and other leaders who work for him. So Casey, what did you make of this whole scandal?
Well, I mean, to take the culture warriors’ concerns seriously for a minute, I think you could say, look, if you think that artificial intelligence is going to become massively powerful, which seems like there’s a reasonable chance of that happening, and you think that everything you just described, Kevin, reflects an ideology that has been embedded into this thing that is about to become massively powerful, well, then maybe you have a reason to be concerned.
If you worry that there is a totalitarian Left, and that it is going to rewrite history, and prevent you from expressing your own political opinions maybe in the future, then this is something that might give you a heart attack.
So that’s what I would say on the steel manning of their argument. Now, was this also a chance for people to make a big fuss and get a bunch of retweets? I think it was also that.
Yeah, I think that’s right. And I think we should talk a little bit about why this happened. What is it about this product and the way that Google developed it that resulted in these strange historically inaccurate responses to user prompts?
And I’ve been trying to report this out. I’ve been talking to some folks. And it essentially appears to have been a confluence of a couple of things.
One is these programs really are biased. If you don’t do anything to them in terms of fine tuning the base models, they will spit out stereotypes, right? If you ask them to show you pictures of doctors, it’ll probably give you men.
If you ask it to show pictures of CEOs, it’ll probably give you men. If you ask it to show pictures of flight attendants, it will probably give you women. And that’s if you do nothing to fine tune them.
Right, and this, of course, is an artifact of the training data, right? Because when you use a chatbot, you are getting the median output of the entire internet.
And there are more male CEOs on the internet, and there are more female flight attendants. And if you do not tweak it, that is just what the model is going to give you because that is what is on the internet.
Right, and it also is true that, in some cases, these models are more stereotypical in the outputs they produce than the actual underlying data. “The Washington Post” had a great story last year about the image generators and how they would show stereotypes about race, class, gender, and other characteristics.
For example, if you asked this image model — in this case, they were talking about one from Stable Diffusion — to generate a photo of a person receiving social services, like welfare, it would predominantly generate non-white and darker-skinned images despite the fact that 63 percent or so of food stamp recipients are white.
Meanwhile, if you asked it to show results for a productive person, it would almost uniformly give you images of white men dressed in suits for corporate jobs.
So these models are biased. The problem that Google was trying to solve here is a real problem. And I think it’s very telling that some of the same people who are outraged that it wouldn’t generate white founding fathers were not outraged that it wouldn’t generate white social service recipients.
But I think they tried to solve this problem in a very clumsy way. And there’s been some reporting, including by “Bloomberg,” that one of the things that went wrong here is that Google, in building Gemini, had done something called prompt transformation. Do you know what that means?
I don’t know what this is.
OK, so this is a new concept —
Oh, wait. Let me go back. I do. I didn’t know it was called that, but I do know what it is.
Yeah, so this is, basically, a feature of some of these newer image-generating models in particular, which is that when you ask it for something, say an image of a polar bear riding a skateboard, instead of just passing that request to the image model and trying to get an answer back, what it will actually do is covertly rewrite your prompt to make it more detailed.
Maybe it’s adding more words to specify that the polar bear on a skateboard should be fuzzy and should take place against a certain kind of backdrop or something, just expanding what you wrote to make it more likely that you will get a good result.
This kind of thing does not have a conspiratorial mission. But it does appear to be the case that Gemini was doing this kind of prompt transformation.
So if you put in a prompt that says, “Make me an image of the American founding fathers,” what it would do is, without notifying you, it would rewrite your prompt to include things like, “Please show a diverse range of faces in this response.” And it would pass that transformed prompt to the model, and that’s what your result would reflect, not the thing that you had actually typed.
That’s right. And Google was not the first company to do this kind of prompt transformation. When ChatGPT launched the most recent version of DALL-E last year, which is its text-to-image generator, I observed the fact that when I would just request generic terms like a firefighter or a police officer, I would get results that had racial and gender diversity, which to my mind, was a pretty good thing, right?
There is no reason that if I want to see an image of a firefighter, it necessarily needs to be a white man. But as we saw with Gemini, this did wind up getting a little out of control.
Yeah, and I’ll admit that when I first saw the social media posts going around about this, I thought this was like a tempest in a teapot.
It seemed very clear to me that people who have axes to grind with Google, Silicon Valley, and the progressive Left were using this as an opportunity to work the refs in a way that was very similar, at least to me, to what we saw happen with social media a few years ago, which is people just complaining about bias, not because they wanted the systems to be less biased, but because they wanted them to be biased in their direction.
But I think as I’ve thought about this more, I actually think this is a really important episode in the trajectory of AI, not because it shows that Google is too woke or they have too many DEI employees or whatever.
But it’s just a very good, clear lesson in how hard it is for even the most sophisticated AI companies to predict what their models will do out in the world. This is a case of Google spending billions of dollars and years training AI systems to do a thing and putting it out into the world and discovering that they actually didn’t know the full extent of what it was going to do once it got into users’ hands.
And there’s an admission on their part that their systems really aren’t good enough to do what they want them to do, which is to produce results that are helpful and useful and non-offensive.
Right. So I wonder, Kevin, what you think would have been the better outcome here, or what would have been the process that would have delivered results that didn’t cause a controversy because I have a hard time answering that question for myself.
These models are a little weird in the sense that you essentially just throw a wish into the wishing fountain, and it returns something. And it does try to do it to the best of its ability while keeping in mind all the guardrails that have been placed around it.
And to my mind, just based on that system, I just expect that I’m going to get a lot of stupid stuff. I’m not going to expect this prediction-based model to predict correctly every single time.
So to me, one of the lessons of this has been maybe we all just need to expect a lot less of these chatbots. Maybe we need to acknowledge that they’re still in an experimental stage. They’re still bad a lot of the time. And if it serves something up that seems offensive or wrong, maybe just roll our eyes at it and not turn it into a crisis. But how do you think about it?
Yeah, I would agree with that. I think that we still all need to be aware of what these things are and their limitations.
That said, I think there are things that Google could do with Gemini to make it less likely to produce this kind of result.
Like what?
The first is, I think that these models could ask follow-up questions. If you ask for an image of the founding fathers, maybe you’re trying to use it for a book report for your history class, in which case you want it to actually represent the founding fathers as they were.
Or maybe you’re making a poster for “Hamilton,” in which case, you don’t!
Exactly! Or maybe you’re doing some kind of speculative historical fiction project or trying to imagine as part of an art project what a more diverse set of founding fathers would look like.
I think users should be given both of those options. You ask for an image of the founding fathers. Maybe it says, well, what are you doing with this? Why do you want this?
For a chatbot that’s just returning text answers, it could say, do you want me to pick a personality? Do you want me to answer this as a college professor would or a Wikipedia page? Or do you want me to be your sassy best friend? What persona do you want me to use when answering this question?
Right now these AI language models are built as oracles that are supposed to just give you the one right answer to everything that you ask for. And I just think, in a lot of cases, that’s not going to lead to the outcome that people want.
It’s true. But let’s also keep in mind that it is expensive to run these models and that if something like Gemini were to ask follow-up questions of most of the queries that get inputted into this, all of a sudden, the cost just balloons out of control, right?
So I think that’s actually another way of understanding this. Why is Google rewriting a prompt in the background? Well, because it’s serving a global audience.
And if it is going to be showing you a firefighter, it does not want to assume that it’s going to show you only white male firefighters because maybe you are inputting that query from somewhere else in the world where all of the firefighters are not white, right?
So this feels like, in a way, the cheapest possible way to serve the most possible customers. But as we’ve seen, it has backfired on them.
Yeah, I also think that this prompt transformation thing — I think this is a bad idea. I think this is a technical feature that is ripe for conspiracy theorists to seize on and say, they’re secretly changing what you ask it to do to make it more woke.
I just think if I put something into a language model or an image generator, I want the model to actually be responding to my query and not some hidden intermediate step that I can’t see and don’t know about.
At the very least, I think that models like Gemini should tell you that they have transformed your prompt and should show you the transformed prompt, so you can see what the image or the text response you are getting actually reflects.
And that is what ChatGPT does, by the way. When you ask it to make you an image, it will transform your prompt in the background. But then once the image is generated, you can click a little info button, and it will tell you the prompt, which is often quite elaborate. I appreciate this feature.
Look, it’s a really interesting product question because, speaking on the ChatGPT side, I can tell you, that thing is much better at writing prompts than I am. To me, this totally blew away the concept of prompt engineers, which we’ve talked about on the show.
Once I saw what ChatGPT was doing, I thought, well, I don’t need to become a prompt engineer anymore because this thing is just very good by default. But there are clearly going to be these tripwires where when it comes to, I think, reflecting history in particular, we want to be much, much more careful about how we’re transforming things.
So how do you think this whole Gemini controversy will resolve? Will heads roll at the company? Will there be people who step down as a result of this? Is it going to meaningfully affect Google’s AI plans? Or do you think this is just going to blow over?
I expect that in the Google case, it will blow over. But I do think that we have seen the establishment of a new front in the culture war. Think about how long in the past half decade or so we spent debating the liberal and conservative bias of social networks.
And oh, the congressional hearings that were held about, hey, I searched my name, and I’m a Congressman, and it came up below this Democrat’s name. What are you going to do about it? And we just had this whole fight about whether the algorithmic systems were privileging this viewpoint or that viewpoint.
That fight is now coming to the chatbots, and they are going to be analyzed in minute detail. There are going to be hearings in Congress. And it really does seem like people are determined not to learn the lesson of the content moderation discussion of the past decade, which is that it is truly impossible to please everyone.
Yeah, I do think we will have a number of exceedingly dumb congressional hearings where people hold up giant posters of AI-generated images of Black popes or whatever and just get mad at them.
I do think some of the fixes that we’ve discussed to prevent this kind of thing from happening are short-term workarounds or things that Google could do to get this thing back up and running without this kind of issue.
I think in the longer term, we actually do need to figure out how the rules for these AI models should be set, who should be setting them, whether the companies that make them should have any kind of democratic input.
We’ve talked a little bit about that with Anthropic’s constitutional AI process, where they actually have experimented with asking people who represent a broad range of views, what rules should we give to our chatbot? I think we’re going to be talking more about that on the show pretty soon.
But I do think that this is the kind of situation and the kind of crisis for Google where a more democratic system, when it comes to creating the guardrails for these chatbots, could have helped them.
I think that sounds right. But let me throw another possible solution at you, which is over time, these chatbots are just going to know more about us.
ChatGPT recently released a memory feature. It essentially uses part of the context window for its AI to store some facts and figures about you. Maybe it knows where you live. Maybe it knows something about your family. And then, as you ask it questions, it tries to tailor its answers to someone like you.
I strongly suspect that within a couple of years, ChatGPT and Gemini are going to have a pretty good idea of whether you lean a little bit more liberal, about whether you lean more conservative, about whether you’re going to freak out if somebody shows you a non-white founding father or not. And we’re going to essentially have all these more custom AIs.
Now, this comes with problems of its own. I think this brings back the filter bubble conversation: hey, I only talk to a chatbot who thinks exactly like me.
That clearly has problems of its own. But I do think that it might, at least, dial down the pressure on Gemini to correctly predict your politics every time you use the damn app.
Yeah, I think that’s right. I also worry about Google bringing this technology closer and closer to its core search index. It’s using Gemini already to expand on search results.
And I just think that people are going to freak out when they see examples of the model giving them answers that offend them, which it will continue to do, no matter what Google does to try to prevent it. I think it’s a very different emotional response when a chatbot gives you one answer than when a search engine gives you 10 links to explore the thing.
If you search images of the American founding fathers on the regular old Google search engine, you’re going to get a list of things. And some of what’s at those links might offend you. But you as a user are not going to get mad at Google if the thing at those links offends you.
But if Google’s chatbot gives you one answer and presents it as this oracular answer that is the one correct answer, you’re going to get mad at Google because they built the AI model.
So I just think, in a lot of ways, this episode with Gemini has proven the benefits of the traditional search engine experience for Google because they are not taking an editorial position, or, at least, users don’t perceive them as taking an editorial position when they give you a list of links. But when you give them one answer from a chatbot, they do.
That’s right. So maybe that’s a reason why companies like Google should rethink making their footnotes just the tiniest little numbers imaginable that you can barely even click on with your mouse.
Maybe you want to make it much more prominent where you’re getting this information from so that your users don’t hold you accountable when your chatbot says something completely insane.
All right, so that is what’s going on with Gemini. When we come back, Kara Swisher on her new book, “Burn Book,” and we’ll hear if she has some burns for us.
[MUSIC PLAYING]
Kevin, let me share a quick story about our next guest. One time, I was asking her for advice, and she gave me great advice about my career. She always does. And then she wrapped up by looking me up and down, and she said, “But just remember, no matter what happens, you’ll be dead soon.”
[LAUGHS]:
And that’s Kara Swisher in a nutshell. Kara Swisher: legendary journalist, chronicler of Silicon Valley. Kevin, on top of all that, she also founded the very podcast feed that you’re now listening to.
Yes. So today, we’re talking with Kara Swisher. Kara, of course, is the legendary tech journalist and media entrepreneur. She has covered the tech industry since, basically, the tech industry existed. She co-founded the publication, Recode, and the Code Conference.
She used to have a podcast called “Sway” at “The New York Times” and be a “New York Times” opinion columnist. And in a bit of internecine podcast drama, there was a little dust-up, if you will, when she left “The New York Times” a few years ago. And the podcast feed that her podcast had used was turned into the “Hard Fork” feed, the very feed on which our episodes now rest.
That’s right. She has feelings about that.
She does.
You may hear them on this very interview.
[LAUGHS]: But that’s not why we’re interviewing her. Kara, in addition to being one of the great tech journalists, is also a friend and a mentor to both of us. She was actually your landlord for many years.
That’s right, a very good landlord. I needed to replace a stove one time. She didn’t even blink. She said, just do it right away.
[LAUGHS]: But that’s also not the reason we’re talking to her. We’re talking to her because she has just written a new book called “Burn Book.” It is a memoir full of stories from her many years covering Silicon Valley and bumping elbows with people like Elon Musk and Mark Zuckerberg. I read the book, and it is a barn burner.
Yeah, this is a book where Kara, who is famously productive, slows down and goes back through decades of history, talking to some of the titans of Silicon Valley, and, I think, chronicles her disillusionment, honestly, with a lot of them.
I think she arrived here and was captivated by the promise of the internet. But as the years have gone on, she’s become more and more disappointed with the antics of some of the people running this town.
Yeah, totally. So I want to ask her about the book, but I also just think it’s a good time to talk to her in general, both to see if we can clear up all this drama around the podcast feeds, finally, but also just to get her take, as someone who’s been around this industry for longer than almost anyone I know, on the state of tech, what’s happening in the industry, what’s happening in the media, and the tech media specifically, and where she thinks we’re all heading.
Yeah. And as for me, I’m just trying to get my security deposit back.
[LAUGHS]:
One note about this conversation: it’s very energetic, and I think that energy inspired Kara to drop a lot of F-bombs. So if you are sensitive to that or listening with younger listeners, you may want to fast-forward through this segment.
Yeah, she used up our whole curse word quota for all of 2024 in a single interview. So just —
Oh, rats!
[LAUGHS]: Oh, dang it! [LAUGHS]
[MUSIC PLAYING]
casey newton:
Hi!
kara swisher:
Hey, how you doing? You’re late, boys.
casey newton:
Good. How are you?
kevin roose:
What’s going on?
kara swisher:
I got a book to sell. Let’s move.
casey newton:
Oh, this is going exactly how I wanted it to.
kara swisher:
Yay!
casey newton:
Kara Swisher, welcome to “Hard Fork.”
kara swisher:
Thank you. I can’t believe I’m here. I was refusing. I was banning you people.
casey newton:
It reminds me a little bit of one of those home improvement shows where the homeowner goes away for the weekend, and they come back, and their house has been redecorated without their knowledge. So how do you like what we’ve done with the place?
kara swisher:
It’s fine. It’s bro-tastic is what I would say. It was bro-tastic. Just let us explain for the people what happened here.
casey newton:
Say what happened.
kara swisher:
OK, before this happened, “The New York Times” was not going to do this show for Kevin Roose. And I actually called Sam Dolnick and said, you’re a fucking idiot! And if you don’t give him the show, I’m going to find him another job. And I can do it.
And he was like, good to talk to you, Kara. He’s very gentle. He’s a gentle man.
kevin roose:
Sam Dolnick is one of the top editors at “The New York Times,” yes.
kara swisher:
OK, he’s also a family member of the Sulzbergers, the clan that owns the — let’s add that in for disclosure.
Anyway, so he was like, OK. And I was like, they’re so good. People love them. And I sold it. That show was dead, and then I revived it. I gave it CPR. Boom! I did that to it.
And then, when I left — OK, I left. I left. The relationship is fine. I said, please, if you’re going to use the feed, tell listeners, don’t do a U2. Don’t shove the album at them without their consent. And that’s precisely what they did.
So you stole my feed after I helped you get the show, and this must stay in. And if Paula, the head of audio at “The New York Times,” tries to take it out, I will find her, and I will — it will be bad for all of you, let me just say that.
casey newton:
And that is a burn. That is an official burn.
kevin roose:
We have gotten the Swisher treatment now.
kara swisher:
As always, as Maui says on “Moana,” you’re welcome.
kevin roose:
Well, when we pitched “Hard Fork,” we were thinking about taglines for the show. And one of them that I had considered was, “‘Hard Fork’ is a show that tries to answer the question, what if Kara Swisher had gone to anger management class?”
kara swisher:
I’m not angry. Oh, right, I’m scary. That’s right. That’s why all the men are scared of me.
casey newton:
Kara, there’s a question, though, for me, in that story. So you have this story of you. You call a powerful person, and you yell at them, and you get what you want.
This approach has never once worked for me. I cannot just call and be angry. So this is my question.
And I think you have often used — you’ve used sharp elbows to get what you want. And I wonder, did that start for you from the beginning? Or did you lean into that over time?
kara swisher:
Let’s address why it doesn’t work with you. Because no one believes you can do anything, truly, right? So you are what is known in the business as a soft touch.
casey newton:
I’m a bit of a softie, yeah.
kara swisher:
Not just a softie but really squishy is what I would say.
casey newton:
OK, fair enough.
kara swisher:
Nobody thinks Casey is going to do anything, right? They don’t know what could happen. And they’re like, Casey, doubtful. And sorry Kevin, you too, a little bit less with you. They think you’re going to marry —
kevin roose:
I appreciate that.
kara swisher:
They think you’re going to marry AI. And it’s like, we don’t care about his sexual preferences. But you dined out on that one, by the way. Let’s just put a pin on that.
kevin roose:
I would just like to say, I’m glad we finally invited someone on the podcast who is meaner to Casey than I am.
kara swisher:
It’s hard to be mean. Well, as people know, and in the interest of full disclosure, Casey was my tenant for many, many years in my cottage in San Francisco, full disclosure. And by the way, he left the place a fucking mess, so I had to charge him a security deposit.
- kevin roose
-
So he will not be getting his security deposit back.
- kara swisher
-
He did not get it back, and he had to pay more on top of it.
- kevin roose
-
Wow, OK. Well, you’re stepping on my first question here, which is, Kara, in your book, you talk about your approach to interviewing, which is to start with the most uncomfortable question rather than leaving it for the end. So let me channel my inner Kara Swisher and ask you: what is the worst thing Casey ever did to your house?
- casey newton
-
Come on!
- kara swisher
-
OK, oh, nice. I like it. He painted a wall in this weird — they had grass, plastic grass, all over it. And when we took the plastic grass off, it pulled off — this is an old house from a hundred years ago or more.
And it pulled off whatever was there, and so I had to have the entire thing redone. And it cost me $9,000 for this one fucking wall. And it was crazy.
- kevin roose
-
Kara, let’s talk about your book.
So most journalists, if you ask them the question, why did you write this book? They’ll give you some fake answer because the real answer is almost always money or attention. But you already have lots of money, and you’re already famous. So why did you write a book?
- kara swisher
-
More money and more attention, and it’s working out rather nicely.
I did not want to write the book. I honestly didn’t. And for years Jon Karp, who was my first editor on the very first book — he’s now running Simon and Schuster. But he was a young editor. I was a young reporter.
He’s the one that got me to write the first book on AOL because I brought him a different book about this family I had covered called the Haves. It was a retail family, because I had covered retail.
And he said, this is not good. I don’t like this. What are you doing now? And I started to explain AOL and the early internet to him. And he’s like, that’s the book. Can you write that book? And he bought the book, and I wrote that book. And he really did change the trajectory. And it was a really good calling card into Silicon Valley when I moved there in 1997.
And so whenever there was the Yahoo! thing with Marissa Mayer, or Twitter, or Google, or any of them, I would always get the first call. Would you like to write a Google book? I said, I’d rather poke my eyes out. I’ve already covered it. And I just didn’t want to write the longer news story of something with little tidbits of “Jack Dorsey called Elon Musk.”
And I like those. I think people should do them. But I have no fucking interest in it. And so I turned them down.
And he came back to me with a literal bag of money. It was a truck of money. I’ll be honest. There’s a lot of money. And it was a two-book deal.
- kevin roose
-
How much money?
- casey newton
-
[LAUGHS]
- kara swisher
-
$2 million.
- kevin roose
-
OK, good for you.
- kara swisher
-
OK, there you go. You didn’t expect me to say that, did you? Aha!
So it was for two books. One had to be a Silicon Valley book. The other I could do whatever I wanted. And so I like that. I thought that was cool. Then I could do whatever I want for the second. And so one of the things also that prompted me was Walt Mossberg had a memoir deal, a very pricey one also. And he didn’t do it. He decided — he was like, fuck this. I’m not doing it. And I thought someone should. That was definitely part of it, that Walt was not doing it.
- casey newton
-
Walt, your very good friend and business partner — you guys started All Things D together. The book is dedicated to him. And you said, I’m going to write the memoir that maybe Walt chose not to do.
- kara swisher
-
Yeah, a little bit. He would have done a different one because he was so close to Jobs, and he would have focused on that.
But when he didn’t do it, I thought, well, someone has to do it. And I think I probably had met most of them more than anybody else besides Walt. And so that was really it.
- casey newton
-
Let me ask you, though — one of the things that I admire most about you as an entrepreneur is that you are not nostalgic or sentimental. You don’t spend a lot of time looking back. You’ve always been hyper focused ever since I’ve known you on what is next. Was it uncomfortable to shift into this mode where you’re like, oh, God, I got to think about the last 20 years and all of this stuff?
- kara swisher
-
Well, the problem was I’d forgotten a lot. Now, as I’m going through this book tour, I’m like, oh, do you remember when Yahoo! did news, and they hired Banana Republic to be — that’s not in the book. And I’m like, oh, that would have been good to put in there.
A lot of memories are coming back. People come up to me, do you remember this? And I look at them. I’m like, I don’t even remember you, so no. But I did a lot through photos. I looked at a lot of photos like, oh, I remember that.
- kevin roose
-
The photos in the book are great.
- kara swisher
-
They’re great. I just got sent the — one of the chapters opens at Google with this White Russian — this ice sculpture lady with the White Kahlua coming out of the boobs. I think it was a baby shower. And Anne Wojcicki just sent me that photo.
She’s like, in case they question you about the Kahlua naked ice lady. I’m like, Thank you. Thank you. I was aware.
But I really dragged my feet here. I was two years late on this book. But actually, it’s well timed right now because in the interim, Elon went crazy and AGI, yay!
And so I was late. And Jon was like, Kara, you really need to write this. And I was like, whatever. You can’t get the money back. You’re not going to take it. That would be ugly.
And so then I did. I really got serious about it. And I hired Nell Scovell. I don’t know if you know her. She did the “Lean In” book with Sheryl.
And she knew the scene, and she was a book editor, a separate book editor. I hired her. And she really helped me shape it and remember things. And she was so knowledgeable about these times and was very funny. So she really helped me quite a bit.
- casey newton
-
The book really chronicles, I think, a story of disillusionment for you. You arrived in Silicon Valley, I think, very optimistic. You were very early to realize that the internet was going to be huge at a time —
- kara swisher
-
I loved it. I loved it.
- casey newton
-
Yeah! And even your editors were saying, Kara, is this going to be that big of a deal? And you said, yes.
When you sat down to write it, did you think, this is going to be the tale of how I became disenchanted. Or did that emerge as you were writing it?
- kara swisher
-
No, I was disenchanted, as you know, you know what I mean? And I think I helped you get disenchanted a little bit.
- casey newton
-
Oh, sure!
- kara swisher
-
Yeah. I think I had, over the course of time — and it was much earlier. Once I got to All Things D, you could see the sharpness coming in, because you couldn’t do that at “The Wall Street Journal” when you’re a beat reporter.
And so you could see it, whether it was about Google and trying to take over Yahoo! or Marissa Mayer at Yahoo! or all the CEOs of Yahoo! by the way. Travis Kalanick, we were much sharper.
And a lot of it — especially when those valuations went up in the late ‘90s — you’re like, this isn’t worth that. This is bullshit.
And one thing that I did go back and do, because I was wondering how skeptical I was: I went back and found my very earliest “Wall Street Journal” articles. I got to the “Journal” in ‘96 or ‘97 and moved to San Francisco.
One of my articles was “Here’s all their stupid titles, and it’s why it’s bullshit, essentially.” That was one.
- kevin roose
-
Job titles, you mean?
- kara swisher
-
Job titles. I wrote a whole story about their dumb job titles. And then I wrote a whole story about their dumb clothing choices. And then I wrote a whole story about their dumb food choices.
And then the last one I wrote, which I liked a lot, was all the sayings they had that were just performative bullshit. And they put them all in “The Wall Street Journal.” So I must have started to be annoyed early.
And the “Journal,” I got to say, let me do that. So I was covering the culture, too. That one about their sayings, like “We’re changing the world. It’s not about power.” I was like, here’s why that’s bullshit.
And then it started to get ugly, I think, around Beacon with Facebook and some of the privacy violations there that seemed malicious. It started to seem malicious.
- kevin roose
-
Right. Yeah, you have an unusual role in tech journalism these days, which is that you are a chronicler of tech, but you are also someone, as you write in the book, that people in the tech world will call for advice.
What should I do about this company? Should I buy this startup? Or should I fire this person?
- kara swisher
-
That only happened once.
- kevin roose
-
Should I make this strategy decision? So how do you balance that?
- kara swisher
-
It’s actually not quite like that. It’s not like — if I had done that, I would have done it for a living, right? It wasn’t quite like that. It’s a very typical thing.
The one you’re referencing is the Blue Mountain Arts. I had written a big piece on them, and I got to know them. And they were very —
- kevin roose
-
This was a company that made e-cards, if you remember this.
- kara swisher
-
E-cards, right? Remember they got huge. And so I wrote about that phenomenon in the “Journal.” And so at the time, Excite had merged with At Home in an unholy whatever the fuck that was.
And they were trying to buy it. A lot of people were trying to buy it. Amazon looked at it and everything else, because the traffic was enormous for this Blue Mountain Arts site. And they had these really silly, very saccharine cards that you sent. But it was big. The traffic was enormous, and everyone was buying traffic then.
And Excite At Home, it was George Bell — do you remember him? — who was going to pay for this. And the woman who started it with her husband called me, and she was very innocent. She wasn’t like most of the Silicon Valley people. They lived in Colorado. They were hippies. And she’s like, Kara, I’ve just been offered $600 million for this company. And I was like, what? This is a news story. Thank you for that. And she wasn’t off the record or anything else.
And she said, what should I do? And I was like, OK, this is going to be a news story now. I’m going to write it. Thank you. But let me tell you. And I did, right away. And I said, my only advice to you is get cash, because the jig is freaking up if they’re offering you $600 million.
Personally, I only did it for her because she was so unsophisticated in that regard. And I said, do not take their stock. Do not. Do not. Do not. And that was, I guess, my big — and I didn’t get a vig for it in any way, whatsoever.
And then another time when I was with Steve Jobs after Ping came out — do you remember Ping, their social network?
- kevin roose
-
This was Apple’s attempt to launch a social network.
- kara swisher
-
Yeah, it’s the only time they followed a trend, really. They’re not big followers of trends in a lot of ways. And so they were not a social networking company. But they did it, this Ping thing. And it was focused on music, I think, if I recall.
And Steve Jobs had introduced it, and he had Chris Martin sing from Coldplay. And he came out — and when he came out, he’d come out into the demo room, right? And he saw me, and Walt wasn’t there, so he had to talk to me, I guess. I was like his second choice or fifth, really.
And he comes over, and he goes, what did you think of Ping? And I said, oh, that sucks. It sucks. It just sucks. And he’s like, it does.
He knew it. He was mad at himself for agreeing, right? And I said, and I also hate Chris Martin.
So maybe that’s affecting me. I can’t stand Coldplay. They’re so whiny. And he’s like, he’s a very good friend of mine. I’m like, oh, sorry. Apologies. But he still sucks.
And so that was — was that advice? I didn’t think he’d close it down because I said it sucks. But he knew it already. I didn’t tell him anything he didn’t know. It was stuff like that.
- casey newton
-
So that brings up one of the most interesting dynamics in your career, to me, which is that so many of the indelible moments that you’ve created as a journalist have been live on stage with folks like Steve Jobs and Bill Gates and Elon Musk.
And there’s this real tension where you are really tough on them on stage, and also, you have to get them to show up. So what was your understanding over the years of why they showed up?
- kara swisher
-
Well, Marc Andreessen called it Stockholm Syndrome, but I don’t believe that.
I think we were — I think in the case of Jobs, he wanted that. He was tired. He didn’t like talking points. He really didn’t.
It’s that scene from “A Few Good Men.” He wanted to tell me he ordered the code red, you know what I mean? That kind of thing.
A lot of them are tired of it in a lot of ways. And they want to have a real discussion, and they want you to see them. Part of it’s probably seeing if they could best me or Walt in that case for those many years.
The other was it had a sense of event, right? Everybody was there, and so they had to be there. And to be there, they had to be on those chairs, right?
And one of the things we did, which I think was unusual — when we first did it — I’m not going to say “The New York Times” said that it was ethically compromised and then went right ahead and did it themselves.
But they did. They wrote a piece about it. And we were like, what’s the difference between doing an interview and putting it in the pages, and selling, advertising against it, and what we were doing, which was doing live journalism. That’s how we looked at it.
And one thing we did, which was very clear, including for Jobs, is we didn’t give them any questions in advance. A lot of those conferences had done that. We didn’t make any agreements.
We also got them to sign, in advance, the agreement to let us use the video and everything else. And the only person — at one point, Jobs was like, I’m not signing it, right before. He was the only one. And I think Walt said to him, OK, we’re just going to say that to people on stage — that you wouldn’t sign it. And then he signed it, right?
And so I don’t know. I just feel like they just wanted to mix it up. I think it was fun. It was also super fun, right? Like, whatever.
- kevin roose
-
I was really charmed by your book, which I read, because I know you, and it felt like peering directly into your brain. It has gotten some criticism.
- kara swisher
-
Oh, I know, from “The New York Times.” My wife gave me my sources. That was a nice piece.
- kevin roose
-
Right, this was one of the criticisms in “The Times” review. It’s that you’d been married to —
- kara swisher
-
But it’s not a criticism. It’s an inaccurate statement. I was a reporter seven years before I met her. Why would you put that in?
- kevin roose
-
We should just explain. Your ex-wife was an executive at Google for many years.
- kara swisher
-
Years later after I started.
- kevin roose
-
Yes. And this was a line in, I would say, an otherwise pretty evenhanded review. But it did call attention to the fact that you’d been married — that you’d been married to a Google executive.
I know, we know that this was not how you got your scoops, but this is a criticism that’s out there. But I think the criticism that I wanted to ask you about is —
- kara swisher
-
I’m going to just — I’m going to push back on that for you because, one, I was a tech reporter before I met her. Why would you put in a sentence like that? And secondly, she never leaked to me. No one called me to ask if she was a leaker to me.
So that was inaccurate, and it was also an insult to her. She was at Planet Out. That’s really going to give me a real leg up with the tech people.
The second part of it was they liked me because I was a tech entrepreneur like them. I was at “The Wall Street Journal” and “The Washington Post” for 10 years before that. So what happened? Did they go in a time machine and know I was going to be an entrepreneur? That was all, let me just say, inaccurate and should be corrected. But fine. Am I close to them? Do I do access journalism, right?
- kevin roose
-
Yeah, that’s the thing I want to ask you about because I think — you do write in the book about becoming, as you put it, too much a creature of Silicon Valley.
And this is also something that has been made of the book and of your career and the careers of other journalists who do the kind of journalism you do is that you’re too sympathetic. You’re too close to these people. You can’t see their flaws accurately, and you have blind spots. So what do you make of that?
- kara swisher
-
This is endless bullshit. I’m sorry. If you go back — I was literally looking at that review. I was like, oh, you started covering this in 2009. You didn’t read my stories about Google getting too monopolistic. You didn’t read our stories about Uber.
Until 2020, she didn’t realize it. I wrote 40 columns for “The New York Times,” the first of which is called “The Tech People, Digital Arms Dealers.” Oh, that’s real nice. I’m sorry. It’s not true.
You have to have a level when you’re a beat reporter. This is absolutely true. And you can’t do this at “The Wall Street Journal.” When I’m writing a news story, I can’t say “those assholes.” I can’t say that, right?
The minute I got to All Things D, that changed drastically. Peter Kafka strafed these people. All our reporters did incredibly tough stories. And at the same time — and I think we modeled it on Walt Mossberg — some things he liked, some things he didn’t like, right?
And so you can say that about political reporters, everyone else. Oh, access, well, look at the content, actually. I got Scott Thompson fired because of his resume thing. That was years before.
- kevin roose
-
Former CEO of Yahoo!
- kara swisher
-
Yeah, you can have the opinion about access journalism. I don’t think it holds water here. And there is an element of any beat where you have to relatively get along with them.
But if you make no promises to them — and if I like something, I like something. It does center around Elon. I think that’s where it centers, in that I liked him, and I thought he was, compared to all these other people who were doing — I’m making a joke this week. All these people came to you — and you know this, Kevin — and they had digital dry cleaning services. After 20 of those, you’re like, stop. Kill me now. Kill me fucking now.
And so I wasn’t interested in these people. Or else they found a company, they become venture capitalists, and then they bring you the dopiest, stupidest idea, which I ended up calling assisted-living-for-millennials companies, right?
And that was tiresome. And then when you met Elon, he was doing cars. He was doing rockets. He was doing really cool stuff. And I give it to him, slow clap for him on all those things.
And so I did like what he was doing. I did encourage that kind of entrepreneurship, right? I thought that was great.
And so I did get along with him. And I’m sorry he changed. And in the book, I say that. I said I misjudged — I didn’t misjudge him. He wasn’t like that. He changed. And then the minute he changed, I changed.
So I don’t know what to tell you. He wasn’t like that. You knew him back then. Casey, you knew him. Something —
- casey newton
-
Yes, he absolutely changed.
You’re getting at something else that really interests me, though, Kara, which is I think part of being a good tech journalist is not just delivering a moral judgment on every bad thing that happens in Silicon Valley.
It’s also being open to new ideas. It’s also believing that technology can improve people’s lives. And we’ve had conversations in the past where you have said to me that you think that is important, too — that sense of openness. How have you tried to balance those two things in your mind?
- kara swisher
-
Well, I think you’ve gotten more critical in a good way, right? And you’re enthusiastic, too, by the way. And so are you, Kevin. And it’s interesting. One of the things — on the last — let me finish that part. If you had to pick the person who was a slavish fanboy to tech people and an access journalist, I don’t know. I might look over the 43 covers of “Fortune” magazine over the many years where it was all up and to the right. And then, of course, they slapped them later.
So I wouldn’t be the one I would pick for access journalism, honestly. That’s the thing. But I just represent things to people, I guess. I must represent them.
- casey newton
-
Well, in other words — look, there is no doubt in my mind. You’ve written plenty of criticism. But also, you do have to be — I think most people don’t go into technology journalism if they don’t think that it has the possibility to do good things for people.
- kara swisher
-
Correct, which I say from the beginning of the book. And one of the things that it did replace — I think everyone was too, look at your beautiful big brain, Mr. Gates. When I got there, that was the way it was covered, right? And I think there were fanboys of the gadgets, gadget fanboys.
The second part that happened was, then — and I think we led the way at All Things D, for sure. It got too snarky, right? It was, everything sucked. And I’m like, everything doesn’t suck.
And the minute you say that, you’re their friend. I’m not their friend. I just think — I don’t know — some of it’s cool. Even crypto, I was like, this seems interesting. And so you have to be open.
- kevin roose
-
This gets to a criticism that I’m sure all three of us hear from people in the tech industry, which is that the media has become too critical of tech, that they can’t see the good, that they’re overcorrecting for maybe a decade of probably too positive coverage, blaming them for getting Donald Trump elected, or ruining democracy, or whatever, and that they are becoming the scapegoat for all of society’s problems. What do you make of that?
- kara swisher
-
I think, to an extent, that’s a little bit true. But it’s also true that they actually did do damage. Come on. Stop it. They’re not exact — they didn’t cause the riot at — not the riot. It’s not a riot. It was the insurrection on January 6.
But they were certainly handmaidens to sedition, weren’t they? Come on. Stop it. You can trace that so quickly.
The same thing is going on. They don’t want to take any responsibility. And now, as you know, there’s the victim mentality, the industrial grievance complex among these people.
When Marc Andreessen wrote that ridiculous techno-optimist manifesto — it’s you’re-for-us-or-against-us — I’m like, oh, my god. And when Elon goes on about the man, I’m like, you’re the man, you man, man. That’s the kind of stuff.
So no, I think, to an extent, yes, when it’s instantly — Mark Zuckerberg is villainous. I don’t consider him villainous. I don’t. I don’t. But is he responsible?
And the way you do that is, say, that interview I did with him about Holocaust deniers. That’s how you show it. I think he’s just ill equipped in that regard. I don’t think he sits in his house and pets a white cat and goes, what should I do to end humanity now?
And I do think there’s a little bit of that, especially among younger reporters, that they have to get people. I don’t think — and there’s people I like. I had a whole chapter. I think Mark Cuban’s journey has been really interesting.
But we all get that. We all get that because it’s our fault. As we have decreasing power, it’s all our fault. Really? Walt Mossberg used to be able to make and break companies. We cannot, none of us. Even collectively, if we put our little laser rays together, we couldn’t do it. We couldn’t do it.
- casey newton
-
All right, Kara, last question. We have to ask about this huge scandal that just broke today. Amazon has been flooded by copies of books that are pretending to be “Burn Book” but are not “Burn Book.” They’re using generative AI to create versions of your face, like wearing your signature aviators. What is your response?
- kara swisher
-
Did you see the femme one? Did you see the femme one?
- casey newton
-
Yeah, me, I prefer a more butch Kara. But all versions of Kara are beautiful.
- kara swisher
-
No, these versions are not. These are the versions my mother wants to happen, right? My mother’s like, this is great.
This is one title, “Tech’s Queen Bee With A Sting” by Barbara E Frey. And then there’s another one. They’re crazy. So this is not a new thing with me. They wrote it on “404,” I think.
So I was just with Savannah Guthrie, and she’s written this book about faith in God, right? It’s a bestseller. And they created workbooks that go with the book.
Savannah has nothing to do with these workbooks. And they’re doing it with me, so there’s all these Kara books. So I, of course, put them all together, and I sent Andy Jassy a note and said, what the fuck? You’re costing me money.
- kevin roose
-
The CEO of Amazon.
- kara swisher
-
Yes. So literally, I was like, what the fuck? Get these down. What are you doing? It’s as if I was the head of Gucci, and there’s all these knockoffs or whatever. It’s not dissimilar, but it’s AI generated, clearly.
- casey newton
-
And just to make a very Kara Swisher point: I think it’s been obvious that this was going to happen for a while. And the platforms have not taken enough steps to stop it, right?
- kara swisher
-
Nothing. Nothing.
- kevin roose
-
Do you have time for two more questions?
- kara swisher
-
Sure, go ahead, yeah.
- kevin roose
-
OK, number one. Very commonly, people who know that we’re friends ask me, is Kara Swisher really like that? Is she really like that?
When the cameras are off, when the mics are off, what is she really like? And I always tell them, there is no off switch on Kara. She is Kara wherever she is, in whatever context.
And I think that’s one thing that’s really consistent throughout your entire book. This is not an act. This is who you are, this tough persona, this very candid, very blunt person. And I just want to know, how did you get that way?
- kara swisher
-
I was that way from when I was a kid, maybe my dad dying. I don’t know. When I was born, I was called Tempesta, so I feel like it’s genetic in some fashion.
So I don’t know. I was one of these people and maybe because I was gay. And nobody liked gay people, and I didn’t understand that. I was like, I’m great. What are you talking about?
I think it was — I just was like this. There was this — when I was in school, when I walked out of the class, I was like, I read this. I’m not going to read — I’m not going to waste my fucking time here with you people. And I think I was four. I was like, I’ve already read it. Let’s move along.
And so I was always like that. And it’s my journey to becoming Larry David, right? And now I find myself saying lots of things out loud. I’m like, no, what are you doing? What’s going on here? What’s with that?
And so I say that a lot in a lot of things I do. I don’t know why I’m like that. Though, one of the things I think you must stress to people, I’m actually not mean. That’s a very sexist thing with people.
I think most people often go with two things. “I thought you were taller —” I’m very short — and “I thought you were mean, but you’re very nice.” And I can be very polite. And I’m straightforward is what I am.
- casey newton
-
The thing about you that people don’t see is that you are so loyal to all the people who work for you. You truly are — you take time to mentor. You identify people who you think could be doing better than they are, and you just proactively go out and help them. I have been a huge beneficiary of that. I truly can never thank you enough for that. But that is the one thing that doesn’t come across in the podcast and the persona, that behind the scenes, you are helping a lot of people.
- kara swisher
-
Thank you.
- casey newton
-
I’m sorry. I’m sorry if that hurts your rep a little bit, but I did want to say that.
- kara swisher
-
I will now demand an apology from both of you.
- casey newton
-
We didn’t have anything to do with the feed, for the record!
- kara swisher
-
I know you didn’t. But you know what? You could have stood up for it. You could have done, “I am Spartacus!”
- kevin roose
-
All right, last question, Kara.
- kara swisher
-
I am Spartacus! Say it, “I am Spartacus!” just once for your uber lords there at “The New York Times.”
Let me just say one more thing about that. One thing that does bother me, especially around women — and it’s a big issue in tech and everywhere else — is a lot of the questions, some of the questions I’m getting on the podcast — and it’s always from men, I’m sorry to tell you this — are “How are you so confident?” or the phrase “uncommon confidence.”
It’s ridiculous. The fact that women have to excuse themselves constantly is an exhausting thing for them and everybody else. And so one of the things I hate — that’s where I get really mad, and that makes me furious, and I pop off when that happens.
- casey newton
-
That makes sense.
- kevin roose
-
Last question. In your book, you write about what I would consider the last generation of great tech founders and entrepreneurs — the Steve Jobses, the Mark Zuckerbergs, the Bill Gateses — these people we’ve been living with now for decades, using the products and the services that they’ve built.
We’re now in this weird new era in Silicon Valley where a lot of those companies look aging and maybe past their prime. And you have now this big AI boom and a new crop of startups that has got everyone excited and terrified, that are raising huge gobs of money, and trying to transform entire industries.
Do you think today’s generation of young tech founders has learned the lessons from watching the previous one?
- kara swisher
-
They’ll probably disappoint me once again in this long relationship. But I do. I do think they’re more thoughtful. I find a lot of them much more thoughtful and very aware, just the way when you talk to young people about uses of social media.
I think the craziest people are 30 to 50, not the younger people. My sons are not like — they’re like, oh, that’s stupid, Mom. You know what I mean? Those are my older sons. My younger kids are only on — they just have “Frozen” on autoplay. That’s their whole experience with tech.
But I think they’re smarter than you think, right? And they’re aware of the dangers. I think they’re more concerned with bigger issues and more important issues.
There’s not the stupidity, right? There’s not the arrogance that you got. A little bit of the starch seems to be out of the system, I think. Maybe I’m wrong, but I do feel that some of their businesses make sense to me.
I’m like, OK, yeah, I got this, insurance, AI. They explain it to me, and I’m not like, oh, my god, I want to poke my eye out kind of thing. That’s one thing.
They will say, like a Sam Altman, who I’ve actually known since he was 19 — they will say there are dangers. They never did that. You know that, right?
Everything is up and to the right. It’s so great. We’re here to stay. I don’t get that. I couldn’t write that same “Wall Street Journal” article, which is “Stupid Things They Say.” “We’re going to change the world!” You’re not. And that’s why the very first line of the book is “So it was capitalism after all.” And I am a firm believer that it is, and they are aware of that.
So yeah, I have a little more hope, especially around climate change tech and some of this AI stuff. I’m not as scared of AI as everyone else is, although I’m a “Terminator” aficionado, so that’s kind of interesting.
But I think some of — I think I don’t like the techno optimists. I really don’t like them. But I really don’t like the ones that are like, it’s the end times, right?
During the OpenAI thing, someone close to the board, on the decelerationist side, literally called me and said, if we don’t act now, humanity is doomed. And I’m like, you’re just as bad as fucking Elon Musk, who said the same thing to me: if Tesla doesn’t survive, humanity is doomed.
You ridiculous fucking narcissists. Sorry. It’s going to be an asteroid, or the sun’s going to explode, but it’s not because of you. And so I do. I don’t know. Do you guys — do you feel —
- casey newton
I think you’ve hit on something important, which is that the new generation has wised up. They have taken the lesson of the past generation, and they’ve updated their language.
But at the same time, they are being quite grandiose, and they do talk in terms of existential risk. And so I feel like it always keeps us off balance, because we’re never sure exactly how seriously to take these people.
- kara swisher
I want to see new leaders. They don’t — I don’t think they like the Elon Musk thing. Let me end on this.
I just reread the obituary, the eulogy, excuse me, that Mona Simpson, Steve Jobs’s sister, wrote; he only met her in adulthood because he hadn’t known her.
You’ve got to go back and read it; it was really a remarkable thing. He is so different. I know he’s mean. Today, he looks like a really thoughtful, interesting person. He knew poetry. He knew differences. He understood risks. He didn’t shy away from that. Even though he did the reality distortion field, it was about the products. It wasn’t about the world.
Can you imagine Tim Cook going, this is what I think of Ukraine, everybody? He wouldn’t, because he’s not an asshole, those kinds of things. And so I really urge people to read that obituary, the eulogy that his sister Mona Simpson did. It’s in “The New York Times,” actually. It’s wonderful. It was a different time. And I’m hoping the young people do embrace the more thoughtful approach versus this ridiculous reductionist, us-or-them, hateful stuff. It’s hateful is what it is.
That’s not a vision of the future. It’s dystopian. It’s the guy in “Total Recall” who ran Mars. Fuck that guy, right? You know.
I have hopes. I still see it. I’m still in love. I’m still in love, not with you two. But yes.
- casey newton
She had to get in one last burn on her way out.
- kevin roose
Yeah, exactly. Kara, thank you for coming.
- kara swisher
Can I just say, you guys have done a nice job with my feed and growing it, and you’ve created a beautiful show. It’s a great show. I really like your show. And of course, any time you need help, boys —
- kevin roose
That means a lot. I was just noticing — we had Demis Hassabis on our podcast last week, and I noticed he hasn’t come on yours. So if you’d like any help booking guests, just let us know.
- kara swisher
Actually, Kevin, I wonder who broke that story when it was sold to Google.
- kevin roose
I’m just kidding. I’m just messing with you. Kara Swisher, the legend —
- kara swisher
Go look it up! Kara Swisher broke that story. So anyway.
- kevin roose
The book is called “Burn Book.” It’s available everywhere you get your books.
- kara swisher
I will be there after you. I was there before you. I am inevitable. There is no —
- casey newton
She’s the Thanos of journalism!
- kara swisher
Let me just say, I’m at “CNN” right now. Do you know I have a show now? I literally —
- casey newton
It’s about time you got a break.
- kara swisher
Yeah, I know, right? Exactly.
- kevin roose
[LAUGHS]: Kara Swisher, thanks so much for coming.
- casey newton
This was amazing. Thank you, Kara.
- kara swisher
Thank you, boys. I appreciate it.
[MUSIC PLAYING]
When we come back, the Supreme Court takes on content moderation.
[MUSIC PLAYING]
Casey, you and I have written a few times over the years about the issue of content moderation on social media.
Yeah, it’s one of the biggest issues it seems like anyone wants to talk about when it comes to the social networks.
And this week is a particularly big week in content moderation land because the Supreme Court of the United States heard arguments for two cases that are directly related to this issue of how social networks can and cannot moderate what’s on their services.
On Monday, Supreme Court justices heard close to four hours of oral arguments over the constitutionality of two state laws. One came out of Florida; the other, out of Texas.
Both of these laws restrict the ability of tech companies to make decisions about what content they allow and don’t allow on their platform. They were both passed after Donald Trump was banned from Facebook, Twitter, and YouTube following the January 6 riots at the Capitol.
Florida’s law limits the ability of platforms like Facebook to moderate content posted by journalistic enterprises and content, quote, “by or about political candidates.” It also requires that content moderation on social networks be carried out in a consistent manner.
Texas’s law has some similarities, but it prohibits internet platforms from moderating content based on viewpoint with a few exceptions.
Yeah, so this is a really big deal. Right now, platforms remove a bunch of content that is not illegal. You’re allowed to insult people, maybe even lightly harass them. You can say racist things; you can engage in other forms of hate speech. That is not against the law.
But platforms, ever since they were founded, have been removing this stuff because for the most part, people really don’t want to see it. Well, then along come Florida and Texas, and they say, we don’t like this, and we’re actually going to prevent you from doing it. So if these laws were to be upheld, Kevin, you and I would be living on a very different internet.
Yeah, so I think when it comes to content moderation and its legal challenges, this is the big one. This pair of lawsuits is what will determine whether and how platforms have to dramatically change the way they moderate content.
Yeah. But Kevin, we want to bring in some help to get through the legal issues here today.
Yes, so we’ve invited today an expert on these issues. This is Daphne Keller. Tell us about Daphne.
Daphne is the person that reporters call when anything involving internet regulation pops up. She is somebody who has spent decades on this issue. She’s currently the director of the Program on Platform Regulation at Stanford’s Cyber Policy Center.
She has done a lot of great writing on these cases, in particular, including a couple of incredibly helpful FAQ pages that have helped reporters like me try to make sense of all of the issues involved.
Daphne also formally submitted her own views to the Supreme Court in an amicus brief that she helped write and file on behalf of political scientist Francis Fukuyama.
Yeah, so Daphne is opposed to these laws, we should say. She believes that they are unconstitutional and that the Supreme Court should strike them down. But this is not a view she came to lightly or recently.
She’s been working in the field of tech and tech law for many years. We’ll link to her great FAQs in the show notes. But today, for a breakdown of these cases and how she thinks the Supreme Court is likely to rule, we wanted to bring her on. So let’s bring in Daphne Keller.
[MUSIC PLAYING]
Daphne Keller, welcome to the show.
Thank you. Good to be here.
So I want to just start — can you help us lay out the main arguments on either side of these cases? What are the central claims that Texas and Florida are using to justify the way they want to regulate social media companies?
So it’s not that far away from the basic political version of this fight. The rationale is these are liberal California companies, or they were liberal California companies, and they’re censoring conservative voices, and that needs to stop.
My understanding is that this is probably the only Supreme Court case in the history of the Supreme Court that had its origins in a “Star Trek” subreddit. Can you explain that whole thing?
So this isn’t literally from that case. So Texas and Florida passed their laws. The platforms ran as fast as they could to courts to get an injunction so the laws couldn’t be enforced.
But a couple of cases got filed in Texas. And the most interesting one — I thought there was just one. I think now there are two, actually. But the most interesting one is somebody who posted on the “Star Trek” subreddit, that Wesley Crusher is a soyboy. I had to look up what soyboy means. It’s junior cook or something.
People often call us soyboys.
That’s like a conservative slur meaning weakling, I think.
Yes.
Yeah.
Yeah, as I sit here drinking my green juice.
But at least it’s not soy milk.
That’s right, right.
So the moderator — it wasn’t even Reddit. The moderators of that subreddit took that down because of some rule that they have.
It’s deeply offensive to members of the “Star Trek” community.
And the soyboy community.
And the soyboy community, yeah.
And the person — I’m going to guess it’s a guy — sued, saying this violates the obligation in Texas law to be viewpoint neutral. And it’s a useful example because it’s such a totally real-world content moderation dispute about some dumb crap.
But the question of what does it mean to be viewpoint neutral on the question of whether “Star Trek” characters are soyboys helpfully illustrates how impossible it is to figure out what platforms are supposed to do under these laws.
Exactly. You take this very silly case, you extrapolate it across every platform on the internet, and you ask yourself, how are they supposed to act in every single case. And it just seems like we would be consumed with endless litigation.
So you just returned from Washington, where these cases were being argued in front of the Supreme Court. Sketch the scene for us because I’ve never been. What’s it like?
So you start out — if you’re me, you pay somebody to stand in line overnight for you. Because I’m old, I’m not going to do that shit.
But you really — someone had to stand in line overnight for this.
I had somebody there from 9:00 PM, and he was number 27 in line, and they often let in about 40 people.
How do you find these people to just stand in line?
Skiptheline.com.
Wow!
Great tip for listeners.
I learned something today.
Rick did a great job.
Shout out to Rick!
Anyhow, so you stand around in the cold for a long time, then they let you in in stages. The best part, definitely, is you stand on this resonant, beautiful marble staircase. And a member of the Supreme Court police force explains to you that if you engage in any kind of free speech activity, you will spend the night in jail.
Very firm and polite.
And it’s also interesting to hear that there is effectively content moderation on everyone who is in the room before they even enter. They say, hey, you open your mouth, and you’re out of here.
Yeah. So the people making these arguments represent NetChoice, which is a trade association for the tech companies. It’s their lobbying group. Who else is opposed to these laws?
So I should say that CCIA, which is a different tech trade association, is also a plaintiff, and they always get short shrift because they’re not the first named party. But a whole lot of individual platforms filed or free expression-oriented groups filed, lots of people weighing in who are interested in different facets of the issue.
I see. And for those of our listeners who may not be American or may not have much familiarity with how the Supreme Court works, my understanding is, in these oral arguments, the Justices rain questions down on the attorneys. They try to answer them as best they can.
Then they go away and deliberate and write their opinions. So we don’t actually know how they’re going to rule in this case. But did you hear anything during oral arguments that indicated to you which way this case might be headed?
So there’s a lot of tea leaf reading that goes on based on what happens in oral arguments. And usually, that’s the last set of clues you get until the opinion issues, which seems likely to be in June or something like that. In this case, there’s actually another case being argued in March that’s related and might give us some interesting clues.
But from this argument, it was pretty clear that a number of the Justices thought the platforms clearly have First Amendment-protected editorial rights. And it’s not like that’s the end of the question because sometimes the government can override that with a good enough reason.
But it seemed like there was, I think, a majority for that. But then they all got sidetracked on this question of whether they could even rule on that because the law has some other potential applications. They got into a lawyer-procedural-rules fight that could cause the outcome to be weird in some way.
So let me ask about that. To go back to our soyboy example: to me, if a private business wants to have a website, and they want to make a rule that says you can’t call anybody a soyboy around here, that does seem like the sort of thing that would be protected under the First Amendment. You write your policies under the First Amendment. Why is that not the end of the story here?
Well, so what Texas or Florida would say is that these laws only apply to the biggest platforms, and they’re so important that they’re basically infrastructure now. And you can’t be heard at all unless you’re being heard on YouTube or on X or on Facebook. And so that’s different.
Right, yeah.
So what is the argument from the states about why they should be allowed to impinge on this First Amendment right that these platforms say that they have to moderate content however they want to, their private businesses. What do the states say in response to that?
They say the platforms have no First Amendment rights in the first place. That that’s fake, that what the platforms are doing isn’t speech. It’s censorship. Or what the platforms are doing is conduct. Or mostly they just allow all of the posts to flow, so the fact that they take down some of them shouldn’t matter, a lot of arguments like that, none of which are super supported by the case law. But the court could change the case law.
I want to ask you about another conversation that came up during these oral arguments that you referenced earlier, which was which platforms do these laws apply to? There’s some confusion about this.
And it seemed like the Justices had questions about, OK, maybe if we want to set aside for a second the Facebooks and the Xs and the YouTubes, what about an Uber or a Gmail? Maybe there should be an equal right of access there.
So I look at that, and I say, well, that’s a good reason not to pass laws that affect every single platform the same way. But I’m curious how you heard that argument and maybe if you have any thought about how the Justices will make sense of which law applies to what and what might be constitutional and what might not be.
Yeah, so that part of the argument, I think, caught a lot of people, including me, off guard. We did not expect it to go in that direction. But I’m a little bit glad it did.
I think it was the Justices recognizing, we could make a misstep here and have these consequences that we haven’t even been thinking about. And so we need to look really carefully at what they might be.
And in the case of the Florida law, in particular, the definition of covered platforms is so broad. It explicitly includes web search (full disclosure: I’m a former legal lead for Google web search). And it seems like it includes infrastructure providers like Cloudflare.
So it’s really, really broad, who gets swept in. And I reluctantly must concede, I think the Justices were right to pause and worry about that.
Yeah, yeah.
For sure.
Yeah. A lot of the people I saw commenting on the oral arguments this week suggested that this was going to be a slam dunk for the tech companies, that they had done a good job of demonstrating that these laws in Texas and Florida were unconstitutional, and that it sounded after these arguments like the Justices were likely to side with the tech platforms. Is that your take, too?
I think there — I think enough of them — you need five. I think at least five of them are likely to side with the platforms, saying, yes, you have a speech right, and, yes, this law likely infringes it. But because of this whole back-and-forth they got into about the procedural aspect of how the challenge was brought, it could come out some weird ways.
For example, the court could reject the platforms’ challenge and uphold the laws but do so in an opinion that pretty clearly directs the lower courts to issue a more narrowly-tailored injunction that just makes the law not apply to speech platforms.
There are a lot of different ways they could do it, some of which would formally look like the states winning. Although, it wouldn’t, in substance, be the states winning against the platforms that we’re talking about most of the time, the Facebooks, the Instagrams, the TikToks.
Very interesting.
Yeah. So we’ve talked about these laws on the show before, and I think we can all agree that there are some serious issues with them. They could force platforms operating in these states to open the floodgates of harassment, and toxic speech, and all these kinds of things that we can all just agree are horrible.
But there is also an argument being made that ruling against these cases, striking these laws down, could actually do more damage. Zephyr Teachout, who’s a law professor at Fordham, recently wrote an article in “The Atlantic” about these social media laws called “Texas’s Social Media Law Is Dangerous. Striking It Down Could Be Worse.”
She’s basically making the case that if you strike down these laws, you basically give tech giants unprecedented and unrestrained power. What do you make of that argument?
So I read the brief that Zephyr filed along with Tim Wu and Larry Lessig, and it’s like they’re writing about a different law than the actual law that is in front of the court.
And I think their worry is important. If the court ruled on this in a way that precluded privacy laws and precluded consumer protection laws, that would be a problem.
But there are a million ways for the court to rule on this without stepping on the possibility of future better federal privacy laws, for example. It’s not some binary decision where the platforms winning is going to change the ground rules for all those other laws.
So you don’t worry that if this case comes out in the companies’ favor, they’re going to be massively empowered in ways they weren’t before?
Well, if the court wanted to do it that way, if there are five of them who wanted to do it that way, then it could come out that way. But I can’t imagine five of them wanting to empower platforms, in particular, that way, and I can’t imagine the liberal justices wanting to rule in a way that undermines the FTC from being able to do the regulation that it does.
A big topic that comes up in discussions of law and tech policy is Section 230. This is the part of the Communications Decency Act that basically gives broad legal immunity to platforms that host user-generated content.
This is something that conservative politicians and some liberal politicians want to repeal or amend to take that immunity away from the platforms. This is not a set of cases about Section 230, but I’m wondering if you see any ways in which how the Supreme Court rules on this could affect how Section 230 is applied or interpreted?
Well, you might think it’s not a case about 230 because they agreed to review a First Amendment question, full stop. But the states managed to make it more and more like a case about 230, and multiple justices had questions about it.
So it won’t be too surprising if we get a ruling that says something about 230. I really hope not because that wasn’t briefed. This wasn’t what the courts below ruled on. It hasn’t really been teed up for the court. It’s just they’re interested in it.
There are two ways that 230 runs into this. I think one will be too in the weeds for you. But the more interesting one is lots of the Justices have said things like, look, platforms, either this is your speech and your free expression when you decide what to leave up, or it’s not, and you’re immunized.
Pick one. How can it possibly be both? And the answer is no, it can definitely be both. That was the purpose of Section 230, that Congress wanted platforms to go out there and have editorial control and moderate content. Literally, the goal was to have both at once.
Also, if the platforms have First Amendment rights in the first place, it’s not like Congress can take that away by passing an immunity statute. That would be a really good one weird trick, and I’m glad they can’t do that.
So there are a lot of reasons that argument shouldn’t work, but it’s very appealing, I think, in particular, to people whose concept of media and information systems was shaped in about 1980.
If the rule is you have to be either totally passive, like a phone company, and transmit everything, or you have to be like “NBC Nightly News,” and there are just a couple of privileged speakers, and lawyers vet every single thing they say, then you’re going to get those two kinds of communication systems.
You’ll get phones, and you’ll get broadcast, but you will never get the internet and internet platforms and places where we can speak instantly to the whole world but also have a relatively civil forum because they’re doing some content moderation.
Right. It almost sounds like there’s a downside to having the median age of a Supreme Court justice being 72.
I don’t know what the real age is. I’m sure I’ll do a pickup about that later.
Now, Kevin, do you want to tell her who wrote the 230 question?
[LAUGHS]: What? You’re going to out me like this?
I’m going to out you.
So this was a great question that I, unfortunately, did not write, but the Perplexity search engine did, because I gave it the prompt, “Write 10 penetrating grad student-level questions for a law and policy expert about the NetChoice cases.”
In fairness, I did think it was a pretty good question.
It was a very good question. So yeah, wow, you’re really doing me dirty here. I was going to get away with that.
Look, we wrote the rest of the questions. We just wanted a little help to make sure we left no stone unturned.
Yeah, and it was a pretty smart question. Smarter than I would have come up with.
And let’s say, the answer is way better than the question.
Yes, that’s true.
A student of mine sent me a screenshot of something he got from ChatGPT. He’d asked for sources on some 230-related thing, and it cited an article that it pretended I had written, which did not exist, something about the Twitter files and Section 230, in a nonexistent journal called “The Columbia Journal of Law and Technology.” It looked very plausible.
I’m comfortable being cited in things I didn’t write as long as they were good and in prestigious journals. You know what I mean?
I loved your submission to “The New England Journal of Medicine.”
[LAUGHS]: Thank you so much!
It was really good.
It saved a lot of lives.
So Daphne, we’ve talked about how the Court will or may handle these cases. But I’m also curious how you think they should handle this. You and some other legal experts filed an amicus brief in this case, arguing for —
Actually, let’s settle this once and for all. Is it amicus or is it amicus, Daphne?
It’s both.
OK, great.
Wow!
And some people say the plural, amici.
Oh!
Ooh!
I ordered that in an Italian restaurant once.
I think I saw him DJ in Vegas.
All right.
Can you just articulate your position in that brief about how you think the court could and maybe should handle this?
Yeah, so this is not how the parties have framed it. This is some wonks coming in and saying, you framed it wrong. But I do actually think they framed it wrong.
So there’s a standard set of steps in answering a First Amendment question. You ask: did the state have a valid goal in enacting this law, does the law actually advance that goal, and does it do unnecessary damage to speech that could have been avoided through a more narrowly tailored law?
So in this case, the states say we had to pass this law because the platforms have so much centralized control over speech. Let’s assume that’s a good goal. We say that doesn’t mean the next step is the state takes over and takes that centralized control to impose the state’s rules for speech.
There are better next steps that would be more narrowly tailored, that would be a better means-ends fit and, in particular, steps that empower users to make their own choices using interoperability or so-called middleware tools for users to select from a competing environment of content moderation.
What would this look like? Would this be like a toggle on your Reddit app that would say, I want soyboy content, or I don’t want soyboy content?
So it could look like a lot of different things. But I know you guys have talked to Jay from Bluesky. It could look like what Bluesky is trying to do with having third parties able to come build their own ranking rules or their own speech-blocking rules, and then users can select which of those they want to turn on.
It could look like Mastodon, with different interoperating nodes where the administrator of any one node sets the rules. But if you’re a user there, you can still communicate with your friends on other nodes who have chosen other rules. It could look like Block Party, back when Block Party was working on Twitter. You download block lists that are —
This was an app that basically lets you block a bunch of people at once.
Yeah. So it could look like a lot of things, and all of them would be better than what Texas and Florida did.
Right.
I wonder if you can sort of steelman the argument on the other side of this case a little bit. I was going through this exercise myself because on one hand, I do think that these laws are a bad idea.
On the other hand, I think that the tech platforms have, in some cases, made their own bed here by being so opaque and unaccountable when it comes to how they make rules governing platforms and, frankly, spending a lot of time obfuscating about what their rules are, what their process is, doing these fake oversight boards that actually have no democratic accountability.
It’s a kangaroo court. Come on. And I think I’m somewhat sympathetic to the view that these platforms have too much power to decide what goes and what doesn’t go on their platforms.
But I don’t want it to be a binary choice between Mark Zuckerberg making all the rules for online speech along with Elon Musk and other platform leaders and Greg Abbott and Ron DeSantis doing it. So I like your idea of a middle path here.
Are there other middle paths that you see where we could make the process of governing social media content moderation more democratic without literally turning it over to politicians and state governments?
It’s actually really hard to use the law to arrive at any kind of middle path, other than this kind of competition-based approach we were talking about before.
The problem is what I call lawful but awful speech — a lot of people use that — which is this really broad category of speech that’s protected by the First Amendment, so the government can’t prohibit it, and they can’t tell platforms they have to prohibit it.
And that includes lots of pro-terrorist speech, lots of scary threats, lots of hate speech, lots of disinformation, lots of speech that really everybody across the political spectrum does not want to see and doesn’t want their kids to see when they go on the internet.
But the government can’t tell platforms they have to regulate that speech, the speech people morally disapprove of but that’s legal and First Amendment protected, so its hands are tied.
Then that’s how we wind up in this situation where, instead, we rely on private companies to make the rules, rules that there’s this great moral and social demand for from users and from advertisers. And it’s extremely hard to get away from, because of that delta between what the government can do and what private companies can do.
Well, some people have described our podcast as lawful but awful speech, so I hope that we will not end up targeted by these laws. Daphne Keller, thank you so much for joining us. It’s really a pleasure to have you.
Thanks for having me.
[MUSIC PLAYING]
“Hard Fork” is produced by Rachel Cohn and Davis Land. We’re edited by Jen Poyant. This episode was fact checked by Caitlin Love.
Today’s show was engineered by Chris Wood. Original music is by Diane Wong, Marion Lozano, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production is by Ryan Manning and Dylan Bergersen.
If you haven’t already, check us out on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com with all your sickest burns.
Please invite us to your Willy Wonka themed events, too.
[THEME MUSIC]