Mystery AI Hype Theater 3000
Episode 39: Newsrooms Pivot to Bullshit (feat. Sam Cole), August 5, 2024
The Washington Post is going all in on AI -- surely this won't be a repeat of any past, disastrous newsroom pivots! 404 Media journalist Samantha Cole joins to talk journalism, LLMs, and why synthetic text is the antithesis of good reporting.
References:
The Washington Post Tells Staff It’s Pivoting to AI: "AI everywhere in our newsroom."
Response: Defector Media Promotes Devin The Dugong To Chief AI Officer, Unveils First AI-Generated Blog
The Washington Post's First AI Strategy Editor Talks LLMs in the Newsroom
Also: New Washington Post CTO comes from Uber
The Washington Post debuts AI chatbot, will summarize climate articles.
Media companies are making a huge mistake with AI
When ChatGPT summarizes, it does nothing of the kind
404 Media: 404 Media Now Has a Full Text RSS Feed
404 Media: Websites are Blocking the Wrong AI Scrapers (Because AI Companies Keep Making New Ones)
Fresh AI Hell:
Google advertises Gemini for writing synthetic fan letters
Dutch Judge uses ChatGPT's answers to factual questions in ruling
Is GenAI coming to your home appliances?
"AI" generated images in medical science, again (now retracted)
You can check out future livestreams on Twitch.
Our book, 'The AI Con,' comes out in May! Pre-order your copy now.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Bluesky: emilymbender.bsky.social
- Mastodon: dair-community.social/@EmilyMBender
Alex
- Bluesky: alexhanna.bsky.social
- Mastodon: dair-community.social/@alex
- Twitter: @alexhanna
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
Alex Hanna: Welcome, everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it, and pop it with the sharpest needles we can find.
Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come.
I'm Emily M. Bender, Professor of Linguistics at the University of Washington.
Alex Hanna: And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. And this is episode 39, which we're recording on August 5th of 2024. As companies in different sectors size up the hype around generative AI and try to find places for ChatGPT in their metaphorical, or as we made fun of in the past, literal pistons, journalism remains no exception.
Perhaps too predictably, even the most venerable institution isn't safe from the temptation to try to use mathy maths to dig out of a budget hole, rather than, say it with me, investing in the skills and well-being of its workforce.
Emily M. Bender: We spoke with tech journalist Karen Hao mere months ago about the ways newsrooms are eyeing AI and how that will undermine the quality of journalism and reduce opportunities for highly skilled reporters.
And unfortunately, the Washington Post did not take her very wise words to heart. In May, the Washington Post's chief technology officer announced that AI will be integrated into every part of the newspaper's business. This summer, the paper unrolled AI summaries of its climate reporting. And all of this, we are here to say, is unlikely to benefit the alleged ideals of an institution that claims so loftily, "Democracy dies in darkness."
Alex Hanna: With us is newsroom skeptic-in-chief Samantha Cole, a journalist and co-founder of the tech outlet 404 Media. We bring her work to the stream nearly every week, so we thought it might just be great to, uh, invite her on. Thanks so much for joining us today, Sam.
Samantha Cole: Thank you so much for having me. I hope you're bringing my work on in, uh, good ways.
We mostly write about the world ending, so I don't know, it's, you know, usually kind of a bummer, probably.
Emily M. Bender: If we're making fun of something in your work, we're making fun of the people you're reporting on, not your work.
Samantha Cole: Okay, good. Okay, great. (laughter)
Alex Hanna: It is, it is, it is drawn on very approvingly, and so we're, uh, thank you for fighting the good fight against the kind of deluge of things that happens on a daily basis.
Emily M. Bender: Yeah.
Samantha Cole: Never a dull moment.
Emily M. Bender: We learn a lot from 404 Media and we're super excited to be following what you're doing and to get to talk to you today about what's going on over at the Washington Post. Which, by the way, the Washington Post has also had some really great reporting on these topics and then some not so great stuff and then they're doing this. So it was, it was sort of extra disappointing to me. So our first artifact is a pretty short article from Futurism, published May 22nd, 2024, by Noor Al-Sibai, and the sticker is, "Keep me posted," which is not a sticker I've seen before and--and were we right, Sam, that's called a sticker, that thing there?
Samantha Cole: Um, I don't know. I think every site has a different name for it. I think it's like a topic on some sites. I like sticker though.
Emily M. Bender: We picked up sticker somewhere. I picked it up from Alex. So the headline is, "The Washington Post tells staff it's pivoting to AI," and then the subhead is a quote, "'AI everywhere in our newsroom.'"
Um, and let's see, um, this is short enough. I think we're probably going to read the whole thing. And I'm really sorry for the ad. Um, folks on the show, I'll try to keep it out of the way. Ah.
Samantha Cole: Relevant to the conversation.
Emily M. Bender: Yes. They've got to make money somehow, right?
Okay, so: "Already facing scandal, the Washington Post's new ish CEO and publisher, Will Lewis, has announced that the newspaper will be pivoting to artificial intelligence to turn around its dismal financial situation. As Semafor media industry editor Max Tani tweeted, 'Lewis told Post Staffers today that the newspaper will be looking for ways to use AI in its reporting as it seeks to recoup some of the $77 million it lost last year.'"
What do we think? Are they going to succeed with that?
Samantha Cole: I mean, it's like, anytime you're throwing like these big tech companies at a money problem, it's never going to go well for journalism, in my opinion.
Um, you know, what the, what the CEO is kind of pointing to here is this much bigger problem. Like he says, he's like in a hole. And has been for some time, um, and it's repairable and doable if they somehow pull together and accept AI as their new overlords.
Which I've heard before personally in a newsroom, but it was Facebook instead of AI. So that didn't go well.
It was like, oh, everything needs to be video now because it plays well with the social media. Um, and obviously we watched you know, whole newsrooms pivot to new strategies to kind of appease whatever the new thing was, whether it was Facebook or TikTok or whatever it was. Um, yeah, I'm kind of worried that this is the same thing happening all over again.
Um.
Alex Hanna: Yeah.
Samantha Cole: I think it's pretty obvious that's the case.
Alex Hanna: I totally agree. I mean, the kind of pivot to video was a moment kind of in the mid 2000-teens where it was, you know, we need to go to video because Zuckerberg has made you know, Facebook and, you know, and the engineering team, be this video-first kind of thing and, or it was kind of moving to this, even kind of the short amount.
I mean, it was less about Twitter, although I imagine Twitter doesn't drive as much--I mean, it drives traffic, but Facebook is also just kind of like has these autoplay videos. And then that was, that was the kind of thing that you know, large companies. I remember, I feel like Tronc was like a big lever in the pivot to video.
I don't know why I'm associating--
Samantha Cole: Oh, yeah.
Alex Hanna: --it with Tronc. But it was, which is the, which I think is back to being called the Tribune. Um, and so, yeah, I can't follow, but it's, yeah, it's, it's--and it's kind of a wonder that it's--I mean, through what mechanism though? And I'm assuming that what they're trying to do in kind of reading a bunch of this stuff, like, I don't know what, what, um, what, I don't think Lewis had said anything in the tweet or from the tweet.
I'm assuming it's kind of replacing reporters in the newsroom. Um, and then now Emily is now Emily's clicking, clicking on the tweet.
Emily M. Bender: Trying to click on the tweet.
Alex Hanna: Yeah. Yeah. And he says--so journalist says, "'Lewis says that says that the says," I think that's a mistake.
Emily M. Bender: Yeah.
Alex Hanna: "--the three pillars of the new strategy are: great journalism, happy customers and making money." cand And the quote, "If we're doing things that don't meet all three, we should stop doing that." Incredible. Um, "He adds that the company will also be looking for ways to use AI in its journalism." So it's, it's not, it's just kind of a throwaway line there. And then yeah. And then, and then the chief technology officer said it's going to be AI is everywhere in the newsroom.
Emily M. Bender: So the, the CTO, Oh, sorry. No, now I need to back up. Come on. Um, Alex, speaking of the connection to pivot to video, we're going to talk in a moment about Phoebe Connelly, who is the, um, Washington Post's first-ever senior editor for AI strategy and innovation. And you, you've looked at her LinkedIn profile and discovered that she, um, oversaw the pivot to video there, too.
Alex Hanna: Yeah, it's actually, it's actually very missing in this profile. And this is a profile that we're looking at, um, at Nieman Lab, um, uh, which is a publication that covers kind of innovations in journalism. But yeah, within that, I mean, she talks about this lab, which is called like the Next Gen lab. And she was also the first, but if you look at her LinkedIn, it's, it's kind of all--I think she was a deputy director of like the video--so, so, so, uh, and it was at the height of the turn to video or the pivot to video. Anyways.
Emily M. Bender: So let's go back here. I want to say that the, I see the similarities to the pivot to video, like the, the newsroom sort of saying, where's the next shiny thing? Um, and the pivot to video clearly was disastrous for the finances of these newspapers and other journalistic outfits. I think there's a secondary tragedy here, which is the synthetic text going everywhere, right?
So not only are they not going to end up financially good, they're going to be putting all this garbage out into the information ecosystem. Um, so, oh no, we have someone, um, in the chat saying they canceled my, their WaPo subscription just now. "I've been meaning to do that for a long time already, but this is a good excuse as any."
Um, yeah. Uh, I have, I have to say, I have mixed feelings about that. Um, this is a bad move by the WaPo. Um, I also think that, um, there is some good journalism happening there and subscriptions are important. So, um, but do subscribe to 404 Media. Speaking of, you know, if you save some money now on your, uh, your, uh, news budget, you can, you can go, uh--
Samantha Cole: Yeah, go to 404 and subscribe there.
If you feel like subscribing to independent media, um, yeah, I mean, yeah. And. I mean, I agree. It's like, I wouldn't like start fleeing from outlets because they're pivoting to this AI thing. Like it does-- it sucks. And like, it's not good. Um, I don't like it, but I can guarantee you that the reporters and the journalists who are working at these places also really hate it.
Um, it's, it's the, the people who are like several levels above them making decisions that are totally detached from the actual journalism that they're doing. Um, that's that quote about like, happy, 'we just need to have happy customers and make money' is so funny. Cause it's like, obviously like, duh, like that's, isn't that like the, the endeavor of everything, um, if it wants to be successful is to make people happy and make money at the same time.
But like, I just, I don't think that like, the pivot to AI or like having AI and everything on your site is going to make anyone happy. Like, I really think it's going to do the opposite. It gets in the way, um, more than it does, you know, actually like fulfill any kind of missing need. You know, I'm never reading the newspaper and thinking, I wish I had an AI summary of this.
Emily M. Bender: So Irate Lump says in the chat, "Democracy dies in automatically generated text." (laughter) It's pretty good. And earlier we had Abstract Tesseract saying something about how a genAI "famously makes money." Which, you know it's not, right.
Samantha Cole: That's the irony of this. It's like, it's a bubble. Like it's about to burst.
Alex Hanna: Yeah.
Samantha Cole: Um, yeah.
Alex Hanna: Well, he, he, Abstract Tesseract says, "GenAI being famously effective at recouping financial losses," which I think is, is, is, is more pointed in terms of the kind of hole that the Washington Post is, is finding itself in. Now Sam, one question I have for you just in this is if you read AI boosters in journalism, they're kind of, like the argument that they're making is, well, AI is going to basically free up our reporters from like stuff they don't want to do so they can like do real like shoe-leather journalism.
And on the face of it as a non journalist, I'm like, that sounds ridiculous. It's going to do nothing of the sort, but I'd love to hear from you what like your read of that situation is.
Samantha Cole: Yeah, I mean, if anything, I think it muddies the waters and gets more in the way than it does actually help any kind of journalism or like improve the experience of your everyday journalist.
Um, you know, it's not like we're using ChatGPT to answer questions, because it's always wrong or it's wrong a lot. Um, it's like, we can't rely on that as like a source, obviously, or we wouldn't even to begin with. Um, I think the only, the only real use that I have seen in like my everyday, um, and this is just me personally and maybe other people have different opinions about it, but like the only thing that journalists typically really bemoan in that regard is transcribing.
I hate transcribing, um, and I think a lot of like the, the voice to text systems have gotten really good. Um, God knows what some of these are using our, our recordings to like turn into some new training model or system. Um, but the ones that are like secure are pretty good at, you know, what they're doing.
So, um, yeah, I know some, like, I, I'm not sure about iPhone, but I know my phone has like a, I have an Android phone and it has like a recorder on it. Um, and I'm pretty sure it's done on-device, where like, you know, it's transcribed--recording and transcribing and it's not like leaking that out anywhere. But, um, yeah, I, you know, that's the only thing that's like, that helps me during the day, that I can think of in terms of AI.
The rest is just kind of like bloat that I don't need to look at.
Emily M. Bender: The automatic transcription is not AI. It's an application of machine learning, right? It's using pattern recognition.
Samantha Cole: Yeah, that's true. So, yeah.
Emily M. Bender: Yeah. And it's also using language models. So the, the original purpose of the language model was not to extrude synthetic text, but to choose among different possible texts as what looks more like the language that the model is trained on.
And so you have in these automatic transcription systems, notionally, two parts, they're now merged together, but there was the part that went from the acoustic signal to a bunch of possibilities of what it might have been, and then the language model that reranked those and here's the most likely thing.
And then you get the hilarious artifacts, right, where if you've got someone's name, um, Christie, our producer is, also for the transcript, starts with automatic transcription and corrects them and, uh, what, Abstract Tesseract, who's a frequent commenter, so we mentioned their, their handle a lot in the text, came up as like "a Tesla artifact" or something? Christie will have to tell me what it was.
Alex Hanna: Yeah, sorry about that, Nick.
Emily M. Bender: "Outside Tesla rack" is what it was. So, so these are, you know, uh, amusing. And part of the reason that I really like automatic transcription as a use case for, um, statistical pattern matching is that it's a situation where it's saving you time and you are in a position to correct it, right?
So you've got the first pass thing that comes out of the automatic transcription and then you listen to it again and you correct 'outside Tesla rack' to Abstract Tesseract, um, who is saying in the chat, "lol, new alt just dropped."
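(For the curious: here's a minimal, hypothetical sketch of the two-part setup Emily describes above, a toy "acoustic model" n-best list plus a toy bigram language model that reranks it. The candidate strings, scores, weights, and training text are all invented for illustration; this is not any real ASR system's API.)

```python
# A toy sketch of n-best reranking: an "acoustic model" proposes candidate
# transcriptions with scores, and a bigram language model reranks them by how
# much each candidate resembles its training text. Everything here (scores,
# weights, training corpus) is made up for illustration.

from collections import Counter
from math import log


def train_bigram_lm(corpus: str):
    """Count word bigrams in a tiny corpus and return a log-probability scorer."""
    words = corpus.lower().split()
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    vocab_size = len(unigrams)

    def score(sentence: str) -> float:
        tokens = sentence.lower().split()
        total = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            # Add-one smoothing so unseen bigrams get a small nonzero probability.
            p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
            total += log(p)
        return total

    return score


# Hypothetical n-best list from an acoustic model: (candidate text, acoustic score).
candidates = [
    ("outside tesla rack", -4.1),
    ("abstract tesseract", -4.3),
]

# Train the toy LM on text that mentions the chat handle a lot, the way a real
# LM benefits from having seen the right names in its training data.
lm_score = train_bigram_lm(
    "abstract tesseract is a frequent commenter abstract tesseract said hello"
)

# Rerank by a weighted combination of acoustic and LM scores (the weight is arbitrary).
best = max(candidates, key=lambda c: c[1] + 2.0 * lm_score(c[0]))
print(best[0])  # prints "abstract tesseract": the LM pulls the familiar name up
```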
Alex Hanna: Yeah.
Emily M. Bender: So, yeah.
Samantha Cole: Okay, so I take it back. There is no use for AI in my daily day, in my daily workday.
Um, yeah. And, and actually now that, so now that you pointed that out, I'm thinking about in particular I use Otter, um, which I should get off of because I think it got, it got bought or something recently, but, um, I've liked it until now. But, um, it does the transcription, but that now it has like an AI overview.
Like everything does now, everything has this AI overview BS. So like there's a tab that it takes me to before it takes me to the transcript that's processing. It shows me topics that were discussed during the call. And I'm just like, get out of my face. Like, I don't want to see this. Usually it's not even right.
It's just like not necessary.
Emily M. Bender: And it's a good practice to like, not read it because you know, it's not right. But if you've read it, then that's going to impact how you're looking at the actual transcript.
Samantha Cole: Yeah.
Alex Hanna: Yeah.
Emily M. Bender: Yeah.
Samantha Cole: Exactly.
Alex Hanna: But so much is having this, I mean, last, on the last episode, we were talking about, um, this, uh, interview with the Zoom CEO that the editor in chief of, um, the Verge was doing.
And, and he could just go on and on about his like AI companion. And it wasn't really clear what this thing actually did. But I think from the article, it only seemed like it was doing like a summary of a meeting. But yeah, like how reliable is that summary at all? And you know, why are you going to rely on that?
And especially if you're a journalist, where getting things pretty precisely right is really critical. You know, you can't just take kind of a, uh, a quote unquote 'summarization' of this, where the summarization might just be things, like just leaving out really, really key words, you know.
And actually this is, this, this is actually great.
Wise Woman For Real put out in the, in the chat, this article I've been meaning to read, but a few friends have flagged it for me, which is this article that's, um, on this publication called R&A IT Strategy and Architecture, where someone wrote, "When ChatGPT summarizes, it actually does nothing of the kind."
Um, I haven't read this in full, but it's, I think, effectively, it just kind of looks like it's summarizing. Um, but it seems--
Emily M. Bender: Exactly, because it's a synthetic text extruding machine, right?
Alex Hanna: Yeah, because it's synthetic text, yeah. So it's not actually doing what is, what is, what is a summary of the text, of rewriting.
It's the sort of, it sounds like a summary. It's sort of shortens the text and leaves out really critical elements of it.
Emily M. Bender: Yeah, absolutely. And I can imagine in the context of journalism, even if everything that came out in that summary was actually something that was in the conversation, right. Which isn't guaranteed. Oftentimes, I would bet as a journalist, Sam, that you are interested in the sort of the side points, the things between the lines, where you want to dig further, that might not end up in a summary.
Samantha Cole: Yeah, exactly. It's like, I don't need the, the top line overviews. Which, maybe if you're like working in like the business world or like any other industry, um, you might like to have that kind of like top line, like let's sum it up, about a meeting or something, but like, as far as a transcript, I need to go through it and make sure, you know, every single, like you said, everything is correct. Like I'm getting all the quotes right. Um, yeah. And it's, it's just not very useful for me to see, um, vague, you know, overviews that might not even be correct, of a conversation that I had. It's like, I have a memory. I can remember what we talked about, but
I don't, you know, I don't need the, the robot to do it for me, I guess.
Emily M. Bender: Yeah. Um, so I'm going to wrap up this artifact and get to the other one. There's just a couple other things in here that have caught my eye. Um, one is, um, wait, did I just scroll down past it? Um, somewhere it was saying.
Alex Hanna: Yeah--
Emily M. Bender: Go ahead Alex.
Alex Hanna: Well, there's just a few things to point out. I mean, Will Lewis has mentioned, you know, this is a, this is a guy who used to be at, um, I think one of Rupert Murdoch's, um, companies. I mean, he was at News Corp, um, and he led a quote unquote 'witch hunt' to find people responsible for leaking, um, for like--involved in this hacking scandal that involved the royal family and several celebrities, and then, um, so he's already a pretty controversial figure, kind of really, um--there's been a few other hires at the Washington Post that have come directly from other, like, UK tabloidy publications.
Um, anyways, but then the last part of this might be the thing that you're looking for, Emily, about this landmark deal between News Corp and OpenAI.
Emily M. Bender: Yeah, so that's part of it. Actually, it's the last two paragraphs. So the second to last says, "Notably, the newspaper's announcement about its new revenue boosting efforts doesn't include any explicit mention of AI. Instead, it makes vague reference to 'experimentation with new offerings' over the next few months."
Um, and then last paragraph, so just like, this AI thing is sort of internal, and I'm glad that someone leaked it to Futurism. And, um, maybe they already know that they shouldn't be bragging about it, but we'll see.
And then, "Coincidentally, news of WaPo's AI pivot comes the same day as the announcement of a landmark deal between News Corp and OpenAI that will allow the AI firm to use content from the conglomerate's properties, which include the Wall Street Journal, the New York Post, and Times of London."
And this is so frustrating to me because, um, this, I think, fits what you were saying before about something about, 'we have to chase the revenue,' and so now the new thing is, well, licensing our content, as if ChatGPT isn't just making paper mache out of the, its input. Right, that you know, now it can make paper mache that seems to refer to events from more recently than 2021 because they have all of these news deals, but it's still not news. It's still not a summary of the news. It's still not anything accurate. It's still synthetic text.
Samantha Cole: Exactly.
Yeah. I mean, I would definitely--if people are interested in this, I would recommend reading, um, maybe you talked about this in our previous episode, but, um, so stop me if you did, but, um, Jessica Lessin wrote a really good article in, um, the Atlantic about media companies and how they're making a big mistake with AI.
And she gets to all these points as well. It's just like, we're just seeing like history repeat itself with, with, you know, chasing the next shiny thing. Um, and you know, like News Corp did the same thing with, uh, Apple and the iPad and they, all they really ended up doing was selling more iPads. (laughter) Um, so it's just constantly getting tricked by these big companies into, um, you know, helping them actually sell their technology and not getting much of anything in return.
Alex Hanna: Yeah. It really is. Yeah. I mean, it really is a short sighted type of maneuver. I mean, all these companies just dumping all these data and then, I mean, it's not going to help them in the long run, you know, they might cover, you know, a shortfall that year, but it's going to, you know, in the long run only empower these technology companies to do the next, you know, next thing that's going to undercut their entire business model.
Emily M. Bender: Yeah. And feed into the inevitability argument.
Alex Hanna: Yeah.
Emily M. Bender: Okay. So let's look at this, uh, interview in Nieman Lab, um, with the sticker, "the first ever," and uh, the interviewer is Andrew Deck and the interviewee is Phoebe Connelly. Headline, "The Washington Post's first AI strategy editor talks LLMs in the newsroom."
And then subhead, "Phoebe Connelly on prompt training, AI anxieties, and her first-of-its-kind role." And this is from March 28th of this year.
And so, uh, we, we've got her title, um, kind of skip down to where he's actually talking to her. Um, and just like how, so she starts by saying, in the third paragraph of her answer to the first question, "I think there is someone at almost every major organization who has been asked to add AI to their portfolio. I joked with Zach Seward, Editorial Director of AI at the New York Times, that we should start our own support group for those newly charged with figuring out AI. Please reach out, we're considering organizing a helpline."
Um, and like some sympathy here, because Phoebe here has been basically charged with dealing with the AI stuff because someone higher up got FOMO about it, basically.
And there's a lot of people in that position. And unfortunately, it seems like those positions are always possible to fill, like for everybody who would be like refusing, saying, no, this doesn't make sense, if the higher ups are saying we've got to have somebody, they can find someone.
Alex Hanna: There's um, the first line of the interview is also worth reading where she says, "I'm the first person with this title at the Washington Post. After me, the title is going to an LLM. Kidding, I hope."
And I'm like, ooh, (laughter) little cringe.
Emily M. Bender: Yeah, exactly. Resist. Like you don't have to capitulate to this.
Samantha Cole: Yeah. I mean, how does that, I'm curious how that makes you two feel when she's like, you know, now she's charged with figuring out AI.
It's like, there are people who have been doing this work for a long time. (laughter) Um, people who have figured out how this stuff works. Um, you know, but then, like you said, it's like, it's not really her fault that suddenly she's in charge of this, but--
Emily M. Bender: I really-- (crosstalk)
Samantha Cole: --frustrating.
Alex Hanna: Yeah, go ahead Emily.
Emily M. Bender: I really wish that the people put into that position, and some of them do. Sometimes I get contacted by people who are put in this position and they figure out to talk to me or someone like me to get a better sense of what's actually going on.
But too many folks end up orienting towards the companies that are selling this crap. And so they're not actually going to get a clear idea. One would hope that a journalist would know better than that?
Alex Hanna: I would also say that, I mean, it's, you know, they're, I mean, they're being asked to do, they're definitely being asked to do a lot with--well, let me reframe this. They're in a part of the organization that is already very tech solutionist, so it's not going to be necessarily like that they are orienting towards, I'm the head of, you know, the, you know, the, the, the DMV desk and in the DMV, like we have to cover XYZ and maybe there's like an input from some kind of a local news source that we can automate in some sense, which might be something good for like this data journalist, like a data journalist.
And she does mention that she, um, you know, met--is, works with the head of data journalism. It's more like, it seems like she is rolling up to the engineering team, or she's rolling up to the CTO or whoever. And so, you know, everything is going to look like, you know, everything's going to look like a nail that has like an AI hammer, right?
And it's, you know, these, that's kind of, you know, that's kind of how it goes.
I mean, you can learn a lot--this is me with the organizational sociologist hat on, but you can learn a lot about what this person's intended to do with like who their manager is and what part of the organization they live in. Right?
Um, you know, I was at Google and we rolled up to like research, and research was actually pretty freeing as to what you could do. But then there's also people like Responsible AI that are rolled up to the legal team. And we're like, well, we know what approach they're effectively going to take, right? I mean, they're going to take an approach that is going to be about guarding against like personal liability, or rather corporate liability, and, and, and such things.
And so, yeah, I mean, it seems to be like, you can really tell like what people are going to do or the orientation just from like where in the organization they live.
Emily M. Bender: I had not noticed this graphic before. It looks like it's actually pointing to a different article. Um, so related article about the New York Times, but above it, there's this thing that says, "AI journalism works when it's--" And then there's five bullet points where there's a struck out thing on the left hand side and then a regular text thing on the right hand side.
So, "Not unchecked, but vetted; not lazy, but rigorous; not selfish, but reader-first; not dishonest, but truthful; not opaque, but transparent." And my reading of that is, yes, these five things are probably valuable qualities in good journalism, period, right? And uh, you shouldn't be using synthetic text anywhere in journalism, right?
So that's like "vetted" would mean something different if you are, you know, maybe you're vetting sources. I don't know. You probably vet things, Sam, right?
Samantha Cole: Yeah. Of course. All the time. All day.
Emily M. Bender: Yeah. But not the output of ChatGPT. That would be a waste of time.
Samantha Cole: No, that's a waste of time. Yeah.
Emily M. Bender: Yeah.
Samantha Cole: Pointless. It's so detached from the, from the original source. It's like, you know, why would I start there?
Alex Hanna: Yeah.
Samantha Cole: Why, why would anyone? (laughter)
Alex Hanna: Right.
Emily M. Bender: Yeah.
Alex Hanna: Absolutely. Yeah. And there's, um, a comment in the chat where someone says, uh, Aishwarya_What says, "Who's the audience for articles like this?"
And, and Homsar315 points out that, "Nieman Lab is an industry publication that covers journalism in the context of the internet."
So I'd imagine, you know, the people are like probably other newsrooms, right? And other editors who are like also facing this kind of shortfall. Um, so in some ways when Phoebe is kind of putting out a call for help, it's sort of like probably those people, you know--are there other people, maybe at other large or kind of conglomerate publications, uh, or syndicated, uh, organizations like AP or AFP or something, who have a similar role?
So I'd imagine that's the audience. Um, so anyways.
Emily M. Bender: I should say, I turned this up by searching on Google, uh, 'Washington Post AI newsroom,' were the search terms that got me to this article, um, on like the first page of results for that.
Um, okay.
Samantha Cole: It's a good question because some of these questions are, I don't know.
It's like, it's, Nieman is like an, I don't want to rag on Nieman too much, or the Washington Post really. They're both, um, good organizations, but like the, the questions in this Q&A are so incredibly softball. And also the answers that she gives are so buzzword and then get no pushback. (laughter) It's like, okay, moving on.
It's like, a lawyer sounds like they wrote this answer, some of these answers, and it's like they're canned answers from like a PR professional and you're just letting her kind of say them and then moving on. It's really strange to me.
Alex Hanna: Well, they--
Samantha Cole: We'll get to some of those in a minute, but.
Alex Hanna: They did say that, that, so Andrew Deck writes a, this is just an email interview, like there's no pushback.
Samantha Cole: Right.
Alex Hanna: You know, so it was just like she responded to like a set of things and.
Samantha Cole: Yeah.
Alex Hanna: I mean, I wonder if he would push back, if you know, it was an interactive interview, but yeah, you're completely right. I mean, they're, I mean, they're completely softball. It's not like, you know--there's no questions about like, what do you do with AI's false information?
Samantha Cole: Yeah.
Alex Hanna: How are you going to deal with that, like when it makes shit up, are you going to do anything about that? You know, there could be. Yeah.
Emily M. Bender: Yeah. How are you going to be transparent to your readers about where stuff is coming from?
Alex Hanna: Yeah.
Emily M. Bender: Yeah. So. This question here that I'm looking at now was, so Deck says, "What did working with AI look like in the Washington Post newsroom before this role was created? How do you expect that to change?" And in the middle of her answer, she says, "Last year, the Next Generation team started some experiments that leveraged generative AI." Yikes. Uh, "We held a company-wide hackathon and our product manager, Tony Guzman, started prototyping news delivery surfaces that incorporated generative AI." Like, um, first of all, news delivery surfaces, maybe that's an industry term, but that strikes me as a, as a very strange and, I think, sort of focused on the monetization, maybe, way of thinking about it. Um, and like, I don't want generative AI anywhere near news that I'm reading. Like, no.
Samantha Cole: It's like a sales term.
Emily M. Bender: Yeah.
Alex Hanna: Well, the next part, well, the next part of that, I do want to read, because they said we had a, "We had a company-wide hackathon and our product manager, Tony Guzman, started prototyping." Yeah. So I mean, let's, I mean, the hackathon thing is like, you're already sort of doing this thing where your organization is moving and trying to, like, trying to reorganize such that, you know, we, you know, if we just have more innovation, then that's going to be, you know, the thing that solves everything.
And it's just, ugh.
Emily M. Bender: Yeah. All right. There was one or two super cringe things in here. Do you remember anything you wanted to come to? Um. I should, I should have had my notes. Oh, yeah. It was this one.
Samantha Cole: Oh, this question was, this is a crazy response.
Emily M. Bender: Okay. So Deck says, "There's a lot of anxiety in journalism at the moment around job displacement due to AI adoption. How do you plan on addressing anxieties in your own newsroom?" And Connelly says, "We all need to encourage experimentation with generative AI. Once it stops being an idea and becomes a tool, then we can move on to the fun part, which is figuring out uses that we can put it to. I'm not afraid of AI as a journalist. We are so good at leveraging new tools to report and deliver the news. Generative AI is just the latest. Journalists introduce new facts into the conversation, and we do this through multiple source, transparent reporting. This skill set and our core values are even more valuable in an AI mediated landscape."
(laughter)
Emily M. Bender: So Sam, how did you take that?
Samantha Cole: What does that mean? What does that mean?
(laughter)
Samantha Cole: That's just, that's like, it's total word salad. Yeah. It's like, it sounds like, it extremely sounds like an email that I would get from, um, someone in the C suite at a big media company, you know, it's like, that's, that's completely corporate, uh, calm the masses speak. No, you know, it's like, it's going to be fun.
It's going to be great. We're, you know, it's going to be amazing.
(laughter)
Alex Hanna: It's fun.
Samantha Cole: Yeah. And it's like, yeah, the next, the next email they get will be how they are offering buyouts or something like.
Alex Hanna: Yeah, exactly. 'I'm sorry to inform you.'
Samantha Cole: Yeah, it's just, it's just this, this awful cycle in journalism. The industry is so, it's in such trouble in general that like, it's so, I just, the reaction I have to that is like, Oh, it's, this is not going to be fun.
This is going to suck.
Emily M. Bender: And this part here just struck me as such doublespeak. "Journalists introduce new facts into the conversation. And we do this through multiple source, transparent reporting." Like. Like that sounds like a true thing to say about journalism, but how does generative AI fit into that at all?
Alex Hanna: Yeah.
Samantha Cole: I don't really understand journalists, 'journalists introduce new facts.' Like, I don't really understand that as a phrase, like, we're not, what's a new fact? Like, we, like, we just, we uncover new--
Emily M. Bender: Yeah, you're not you're not making up facts. I think--
Samantha Cole: Yeah, we're not like, ChatGPT introduces new, new facts.
Emily M. Bender: Yeah, they're not facts. Yeah. But the most charitable reading I can come up with here is like me as a reader, as a consumer of news, I learn about new things because a journalist went and did the reporting and uncovered it and then wrote it up in a way that I could learn about it. Um.
Samantha Cole: But she could have just, she could have just said that. Like, it's like, this is such a worked-over Q&A.
Yeah. I can tell that this was like vetted by a comms person.
I don't know. Maybe it wasn't, who knows, but yeah. That's how it reads.
Emily M. Bender: Yeah. So, so this is, so this is how the higher-ups at the Washington Post deal with generative AI. Um, but like, what, what's it like for you at 404? You know, as this stuff is, is it affecting your operations?
Is it, is it something that you have to deal with?
Samantha Cole: Yeah, for sure. I mean, we, a couple months into launching we--we, we write about generative AI all the time, obviously. So, um, it's kind of odd to be writing about the thing that's also like impacting your business. And it's odd to be the journalist and like, and also like the person in charge of making money. (laughter)
It's like we have to stay afloat somehow to keep doing the job that we like to do and that we love to do, but, um, that requires making business decisions. So we were looking at, you know, where our, our traffic was coming from and things like that. Um, and we realized that as we were doing this reporting other like AI generated sites were stealing our work, basically like ripping, like turning them and ripping them off.
Um, which is something that happens to like journalism all over the place. It's really rampant. So, um, it's not a unique problem, but because we're so small and we're new. It's like when you would go to Google and search for a story that we wrote, the first outlets you would get to were like the big ones, like Verge, Wired. They're all aggregating us.
But then Google wouldn't because Google is running on, you know, this algorithm that depends on how popular something is. It's like, they were floating to the top. And then it would be like these AI articles that were ripping us off. And you're like, okay, wait a minute. We need people to come to our website.
So yeah, it's, we've, we've experienced this kind of firsthand. It's like AI is actually destroying, um, our, our revenue, our business model in a way that we had to figure out how to get around. So we started doing email, um, walls. So like it's a soft, it's not a paywall, it's a subscriber wall. So you sign up with your email in that way. Um, you know, you get the, you get the story in your inbox and also it's harder for scrapers to get around.
And it's not impossible by any means. Um, but even now it's like, we have to figure out new ways to kind of get around like the, like these, we wrote about how scrapers are ignoring, um, like robots.txt in a lot of cases and how they're evolving to get around these usual kind of stopgaps that would keep them from scraping. Um, they're getting better and better at scraping. So people have to kind of keep, you know, outlets have to keep evolving faster than the AI. And then the AI learns how to get around it and the cycle goes until forever.
Emily M. Bender: Yeah. The AI hasn't learned anything.
The people who are designing those scrapers figure out how to get around it.
Samantha Cole: Yes. Exactly. Yeah.
Alex Hanna: And it's just, it's, it's such a wild thing. I mean, 'cause I mean, y'all, y'all have done such great reporting, writing on things like robots.txt, or the way that, you know, Anthropic, or even the big companies, like, have been just, either--or like, Perplexity--have just been ignoring robots.txt.
And, you know, part of it is sort of like, it's so interesting to see, because some people think that the organizations doing these are just like, are these like nefarious, you know, like actors who are, you know, like the, the, the, the kind of picture is, you know, the 13-year-old kid in Macedonia who was just like creating mass web scrapers and like just creating content just for like, you know, to drive clicks to this random site.
But it's no, these are like, these are like Silicon Valley based organizations are just like, yeah, we're going to do this safely. We're just going to steal all this stuff and, and subvert like this entire, you know, you know, just really ignore this and separate the business model of these organizations.
And I mean, yeah, I mean, I remember the discussion y'all had around the email wall. And how like a few people were up in arms and they were kind of like, well, I don't want to put my email in here, but you're like, well, we kind of have to, you know, this is, it's a pretty, it's a pretty low barrier just to read our content. And even that, you know, they're, you know, are going to subvert, they're going to probably just find, you know, log in with something and then just do mass scraping or something of that nature.
Emily M. Bender: Yeah, I think you could imagine them subscribing and then just like taking the emails, but that's probably a little bit too slow for their taste, like the, as they come.
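(Again for the curious: a minimal sketch of the robots.txt check that a well-behaved crawler is supposed to perform before scraping, using Python's standard urllib.robotparser. The robots.txt contents and the "BrandNewAIScraper" name are made-up examples, though GPTBot is OpenAI's published crawler user agent. As the conversation above notes, robots.txt is only a request, and the 404 Media reporting is about crawlers that ignore it or show up under names publishers haven't listed yet.)

```python
# A sketch of the robots.txt check a polite crawler performs before fetching a
# page, using Python's standard library. The robots.txt contents below are a
# made-up example (GPTBot is OpenAI's published crawler name; "BrandNewAIScraper"
# is hypothetical). Nothing enforces this check: a scraper can simply skip it.

from urllib import robotparser

EXAMPLE_ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
# In a real crawler you'd call parser.set_url(...) and parser.read(); here we
# parse a local example string so the snippet runs offline.
parser.parse(EXAMPLE_ROBOTS_TXT.splitlines())

for agent in ("GPTBot", "BrandNewAIScraper"):
    allowed = parser.can_fetch(agent, "https://example.com/some-article/")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")

# GPTBot is blocked by name, but a crawler with a brand-new user-agent string
# falls through to the wildcard rule and is allowed -- the cat-and-mouse
# problem described above, before you even get to crawlers that ignore the
# file entirely.
```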
All right. There's one more thing in this, uh, Connelly interview that I want to read because it was cringe. Um, so Deck asks, "How do you think about newsroom talent as it relates to AI? In your role, will you be training up journalists at the Washington Post on how to best use AI tools? Or also bringing prompt engineers and other AI specialists into the newsroom to use these tools."
It's like no, prompt engineers are not journalists, right?
Um, and Connelly says, excitedly, with an exclamation point, "We just completed our first round of prompt training in the newsroom! We were led by the excellent David Caswell. His piece for the Reuters Institute is great reading if you're trying to figure out where to start."
And it's like, okay, so at least she's not saying, yeah, we're going to bring AI specialists in to do the journalism with the AI tools, but like, no, prompt training doesn't, figuring out how to put things into the large language models so that what comes out looks more convincing to you as a journalist--still not journalism.
Alex Hanna: This is, this is incredible. I clicked, I clicked on this link, the David Caswell link, and it's just, oh, it's awful. So--
Emily M. Bender: This one here?
Alex Hanna: Yeah. So it says, "AI and journalism, what's next?" And the subhead is, "Expert David Caswell on why generative AI may transform the news ecosystem and how journalists and news companies should adapt." And then there's a Midjourney image of like a woman silhouetted against like some random windows, uh, flying. Um,
And then the first, uh, paragraph is bad: "Innovation in journalism is back. Following a peak in the mid-2010s, the idea of fundamentally renewed--reinventing how news might be produced and consumed had gradually become less fashionable, giving way to incrementalism, shallow rhetoric and, in some cases, even unapologetic 'innovation exhaustion.' No longer."
And I'm just like, ugh, this is, this is at a time when, you know, journalism is suffering through, you know, a revenue crisis when everyone is still reeling from, you know, the kind of ad spend going primarily to Google and Facebook, you know, it's just, it's just what world does this man live in?
You know, that it's 'innovation is back, baby.' It just is so divorced from really the facts of what journalism a--is and is becoming kind of in the current era. And, uh, I'm just, just makes my skin crawl a bit.
Emily M. Bender: This is, yeah--
Samantha Cole: So bad.
Emily M. Bender: Just horrific.
Um, and I'm, I--
Samantha Cole: It reminds me of like the metaverse. You guys remember when the metaverse was like the new thing in journalism?
Alex Hanna: Yeah.
Samantha Cole: Journalism is back with the metaverse and NFTs were a thing for a bit. Somehow that's--
Emily M. Bender: NFTs, how, how was that a thing in journalism? What were you supposed to be doing?
Samantha Cole: It was like, I mean, it was like everything was going to be on the blockchain operating in the metaverse with the NFTs. It was like, it was all like a whole ecosystem.
I don't know. It's, it was really bad. That's as you know, just horrible, horrible ideas. The things that are happening in these meetings, terrible.
Alex Hanna: That's really--were they, are they, were they trying to like mint like articles as like NFTs, and users, like readers, could like buy individual articles as a revenue?
I don't know.
Samantha Cole: God, don't give anybody any ideas. I don't remember exactly what it was.
Alex Hanna: Sometimes I say something on this podcast and people are like, Alex, don't speak this into existence.
Samantha Cole: No. Yeah, please.
Yeah. But, you know, 'innovation exhaustion.' I imagine it would be exhausting to be a journalist and being told now you have to use the metaverse. Now you have to use NFTs. Now you have to use the blockchain. Like, yeah, that's exhausting. And the innovation that I like seeing is things like 404 Media. It's like, okay, how do we come up with a model that will work that will support the kind of journalism we wanna do? Like--
Alex Hanna: Yeah.
Emily M. Bender: That's probably also exhausting to make it work, but.
Alex Hanna: Yeah, in, in a different way.
Yeah. Homsar315 in chat says, "Innovation exhaustion is real and not in the way this article thinks."
Samantha Cole: So true. Also great username.
Alex Hanna: Yeah.
Emily M. Bender: And we also have Privacy With Mitch who says "Minted articles would literally make me cry."
Alex Hanna: Yeah. If you're a C suite exec listening to the podcast and you do this, I'm just, I'm going to find a way to make it not happen.
I don't know how.
(laughter) Oh gosh.
Uh, all right.
Emily M. Bender: Are we ready to move to the Fresh AI Hell, or you got one more you want to do?
Alex Hanna: Well, one thing, let's just jump over to the, to the CTO of, um, the next one, uh, the--so this is when they hired the CTO of Washington, the Washington Post, Vineet, uh, Khosla, is that how you say his name?
Um, and he comes to the Post from Uber where he has been a senior engineering leader since 2018. And I just wanted to, I just wanted to say that because it's just incredible that, you know, you are having uh, big tech executives come in as, as, as, as tech officers and C suite officers in, uh, journalistic organizations.
Um, so it doesn't bode well.
Emily M. Bender: It does not bode well. And you know, yes, you need somebody who understands the technology, but there's gotta be journalists who have come up, you know, both as journalists and let me think about people like Meredith Broussard. She can't be the only one who has sort of, and you know, you all at, at 404 Media have a lot of tech chops at this point.
Like it, it seems like you could get somebody who's journalist-first into that role and it would be much more effective.
Alex Hanna: Yeah.
Samantha Cole: Yeah. It used to be the other way around. It's like people would leave journalism and go to Uber, but I guess now there's going both ways, which is terrifying.
Emily M. Bender: Yeah. SJayLett says, "Uber, which famously had stunning respect for journalism."
Alex Hanna: (laughter) And Abstract Tesseract saying, "Speaking of innovation, for, for me, uh, 404 Media's RSS support is one of the best recent media things for me." So I know y'all put a lot of work into your CMS and like making it, um, pretty, pretty great. So.
Samantha Cole: Yeah, yeah. I'm glad people like the, I'm so glad people enjoy the RSS.
That was one of the most demanded--people were knocking down the door trying to get uh RSS. So I'm glad we were able to make it happen. Um, and that people like it.
Emily M. Bender: Yeah.
Alex Hanna: That's awesome.
Emily M. Bender: All right, I'm going to take us out of this and Alex, musical or non musical improv prompt today?
Alex Hanna: I'm feeling musical. I'm recording this, by the way, not in my home office, but in like a little, um, meeting booth.
That's literally like four feet by, you know, four feet at the University of Toronto. Um, so yeah, I feel like I'm in a music booth right now. Just hit me with your best shot.
Emily M. Bender: Okay. Okay. So. Uh, you often say that I give you too many layers in the improv prompt. So this one's going to be shallow. You go where you want.
Um, you are at the, uh, Fresh AI Hell news desk, the, the demon news anchor reading the headlines.
Alex Hanna: Okay. So am I, am I the editor?
Emily M. Bender: Um, as you like.
Alex Hanna: Oh, okay, let's see. Uh, so imagine me in a newsie cap with like a cigarette, just chain-smoking, um, and I'm like, 'What you got for me today?' Someone walks in, a demon walks in, 'Boss, I got this, I got this thing right here. Uh, uh, this, this app called Friend where you wear a friend on your neck, uh, and, but if you drop it, it actually dies. What do you think?' 'Well, I, I mean, it sounds, it sounds great. I don't know. Ship it. All right. What else? What do you got for me, Larry? Or what'd you got for me, Beelzebub.'
'Well, you know, I got, listen, boss, I've been hitting these streets. I got, you know, I got 15, 20 people just trying to sell me on, uh, NFTs, uh, for, uh, for news articles. I think it's, I don't know. What do you think about it?' 'Ship it, ship it. What are you thinking? Come on. I need to make a buck here.'
That's what I got.
Emily M. Bender: All right, thank you. And that takes us into Fresh AI Hell starting with AI Alan Turing.
So this is reported in the BBC by Andy Trigg. Uh, headline is, "Alan Turing to quote, 'answer questions' in new AI display," from five days ago. So that would be July 31st, I think. And it reads, "Alan Turing, known as the father of artificial intelligence, AI, is to become a groundbreaking interactive display that will answer questions from museum goers. An AI life-sized version of Turing is being created for Bletchley Park near Milton Keynes, uh, the once secret home of Britain's World War II code breakers."
There's so much awful about this, but I want to start with Alan Turing is to become an interactive display. What does that even mean?
Alex Hanna: They've resurrected him. (laughter)
Emily M. Bender: Arcane Sciences, "Can we please stop trying to do necromancy with computers?"
Alex Hanna: Yeah, I think, I think, I mean, I've read a little Turing and we, you know, we talked about the Turing test on, uh, Our Opinions Are Correct with Charlie Jane Anders and Annalee Newitz and, you know, I feel like from what I know of Turing, I feel like he would have really, really hated this.
Um, and I just would love, you know, I would love for the ghost of Turing to like be, you know, to come back and, and, and destroy this thing somehow, just really get into the gears of it.
Emily M. Bender: And so, "Once complete, visitors will be able to ask in quotes, 'Turing' questions about his life and work, with the AI character tailoring its responses based on whether it is speaking to an individual, group or children. Bletchley Park says the technology involved is a world first."
It's like, no, we saw the, the, you know, large language model driven version of, um, how am I losing her name? The recently--
Alex Hanna: Oh, uh, Harriet, Harriet Tubman.
Emily M. Bender: There was that one too. Yes and then before that--
Alex Hanna: You're thinking of another hell.
Emily M. Bender: Yes. I'm, I'm thinking of, um, the Supreme Court justice who passed away a year or two--Ruth Bader Ginsburg.
I think that was AI21 Labs. Like this is not a new idea. It's not a world first and it continues to be a bad idea. Um, just, yeah, um, I actually got an email from somebody at my university asking me, I think it was for this, what question would I ask it? And I wrote back and I said, nothing because I am not interested in reading synthetic text.
Alex Hanna: Yeah, yeah, and a few people putting in the, in the chat, super, the super disrespectful, just given, you know, how poorly Turing was treated for being gay, for being, you know, chemically castrated. And then, and then, you know, taking his own life. So yeah, really, really fucked up, Bletchley Park.
Emily M. Bender: Yeah Oof. Okay. We got to keep moving because it's Fresh AI Hell.
Um.
Alex Hanna: Yeah.
So this is from the Washington Post. This is by satirist, uh, Alexandra Petri. "Opinion: I hate the Gemini 'Dear Sydney' ad more every passing moment. You're missing out. You're missing all of it." And so, uh, Alexandra Petri, the, uh, uh, great satirist, uh, has got this video. And if you haven't seen this video.
I actually haven't watched this video. Mostly due to this article where, um, where it's, where a little girl is writing a letter to, uh, write like to an Olympic runner and, and she, you know, and she asked her dad to help her and, and he's like, let's use Gemini, uh, and then it writes it.
And so, yeah, Petri writes, "Let me tell you about the Gemini 'Dear Sydney' ad. This is an ad for, the--this ad for Google's AI product is very bad. This ad makes me want to throw a sledgehammer into the television every time I see it. Given the choice between watching this ad and watching the ad about how I need to be giving money now to make certain that dogs do not perish in the snow, I would have to think long and hard. It's one of those ads that make you think perhaps evolution was a mistake and our ancestors should never have left the sea. This could be slight hyperbole, but only slight."
And it kind of goes on like this, this, this, um, this, uh, this, this nonsense about, and yeah, I haven't watched this ad. Uh, I'm taking her warning quite seriously, but Google has pulled this ad from the backlash. So.
Emily M. Bender: And some of the reporting on the backlash also mentioned that, um, this ad was designed by a Google internal team, as opposed to anybody who, uh, knows something about advertising or is connected to the real world.
Alex Hanna: Yeah. Hey, so, but let it be a lesson, ridicule as praxis works, folks.
Emily M. Bender: It works. All right, moving to Holland, uh, here's an article, um, I'm, I'm not going to be able to pronounce any of this, um, uh, an article in some Dutch publication, um, NU.NL, um, and I, I pushed this through a machine translation. So I have a sense of what it's about, also from the person who sent it to us.
Um, apparently a judge, uh, was dealing with a case about, uh, someone had put up solar panels and then their neighbor had built a structure that was blocking some of the sun and so the judge needed to decide, um, how much value was removed from the expected value of the solar panels because of that, uh, extra shade.
And, um, so the judge asked ChatGPT, you know, questions like, what's the expected lifetime, you know, value of a solar panel? And used that in making their decision. Like, don't do that ever. And what we, this is, you know, coming back to our point about the, um, the not having generative AI in newsrooms, right, what we need journalists to be doing, the new facts we need them to be bringing in, to quote Connelly, um, is like educating people like this judge on what this stuff actually does, rather than saying that they're playing with it too.
Yeah. Okay. Um, what's this one? Oh, yes. Internet of shit.
Alex Hanna: Yes. One of my favorite topics. So this is in Forbes, the title, "Generative AI is coming to your home appliances," by Bernard Marr. And this is posted March 29th. So let's see what this has. So it's got a fancy looking home, uh, from Adobe stock where it looks like it's got like a CIA command base and a fancy home and solar panels. Hey, at least your internet of shit will be carbon neutral.
Uh, "Across all industries, organizations are rapidly embracing generative AI. Among them, makers of home appliances, like fridges and ovens. Generative AI in your oven. Why not? Atter all--" I think that means after all, so they didn't even proofread this. "--AI has been creeping into our houses for years. Think smart light bulbs and Alexa. But thanks to generative AI, these interactions will become even more human and more personal. Imagine, for example, asking your washing machine whether it's safe to wash a beloved item of clothing on a certain setting, literally asking it aloud or via an app. Or you could say to your fridge, Hey, when am I going to run out of milk? And it telling you. Integrating generative AI into everyday products could lead to a new era of smart appliances that are not only more adaptive to our needs, but also more interactive and engaging. Let's explore what this looks like in practice."
I'd prefer not to, but holy shit.
Emily M. Bender: But why, so clothing comes with labels that have the washing instructions.
Alex Hanna: Yeah.
Emily M. Bender: Right?
Samantha Cole: No one's going to do, like, who's going to be talking out loud to their fridge in their house?
Alex Hanna: How lonely--
Samantha Cole: That's so embarrassing even if there's nobody there.
Alex Hanna: How lonely do you have to be?
Samantha Cole: Do you remember the Amazon, uh, the dash buttons?
Alex Hanna: Yeah.
Samantha Cole: If you run out of something you order, you push the button. Those are dead now. I had to Google this because I was thinking about it, but those are gone now. They're no more. People didn't use them. Who's going to use this?
Alex Hanna: I thought they were gone because, like, children would get ahold of them and then, like, press them like 30 times.
Samantha Cole: Oh my God. A kid is going to be like, 'order more chocolate cake' at the fridge.
Like, you know, all day long.
Alex Hanna: Oh, a hundred percent. A hundred percent.
Samantha Cole: So bad.
Emily M. Bender: There was some early, uh, TV news report, I think, about one of these Alexa things, and it was, like, asking Alexa to order something. And they said, 'Alexa, blah, blah, blah,' on the newscast, and then, like, all of these families had the packages shipped to them.
Alex Hanna: I do remember this. Incredible.
Emily M. Bender: Okay. This next thing, speaking of advertising, I got this link because it was sent to me at my academic address, something called Tlooto, T L O O T O. And it was also advertised as something like Academic GPT. And it says, "Hello, how can I help you?" And then there's these four, uh, little cards here, "research topics, contents of table, literature review, and methodology."
I'm going to read contents of table. It says, "I'm a doctoral student planning to conduct research on quote, 'the status of ESG management and implementation strategies in small and medium sized enterprises, SMEs,' end quote. Please provide a detailed outline for my PhD thesis." So this is basically a synthetic text generating machine trained on academic text to supposedly help with research, which, like, no.
Um, but also they couldn't even be bothered. Oh, maybe this was supposed to be table of contents. I was like, what does contents of table--
Samantha Cole: Oh my God.
Alex Hanna: Oh, I was also, I was also confused by that too. Oh my gosh.
Emily M. Bender: I thought that this was going to be a prompt for asking it to generate a table. But no, this is, so this is coming from people who, um, yeah, don't really speak English.
So there were a couple of hilarious things here at the bottom. It says, "Even as the most powerful Academic GPT, Tlooto is not Infallible your vigilance is crucial."
Samantha Cole: Random capitalization, like infallible. That's helpful. Your vigilance is crucial.
Alex Hanna: This is really incredible. I mean, even the thing that you read, Emily, is, you know, the person who wrote this is like, surely this person is like in an MBA program or an AI program.
Uh, you know, we're going to, you know, do this outline of the thesis and, uh, it's just, I'm just, yeah. It's, it's bad, bad stuff.
Emily M. Bender: Up at the top, they've got two versions of their thing, Tlooto 1.0 and Tlooto 2.0, and then above Tlooto 2.0, it says, "five times smarter." (laughter)
Like, yeah.
Okay.
Samantha Cole: Smarter than what? Five times smarter than I don't know what?
Alex Hanna: Yeah.
Samantha Cole: We need a reference point.
Alex Hanna: Yeah, exactly. Um, this one, this is the last one we have. This is, uh, a, um, what are they called, a skeet on Bluesky, by Brynne Robinson, PhD. Uh, and it says, "Another day, another AI generated Cronenberg nightmare published in, uh, then retracted from a scientific journal," uh, kind of a test tube emoji and then a pufferfish emoji. "Alternative title, Gout Gives You Flippers? Don't Tell J.D. This."
I don't know who J.D. is. Um, Justin Davenport. There, uh, I'm just making that up. "There are artists amongst us who know phalanges from flippers. Call them." And it's got, like, um, a bizarre AI generated image with, it looks like, kind of a skeletal, um, muscular image of, like, knees that are bent, plus an arm, and there's just, like, crystals in all of them.
And, um, like a piece of tissue that looks like an orange. I don't know how to describe this. And of course, there are words that are wholly made up, including, I'm trying to get close enough to read this. So sorry if the sound gets a--yeah, can you read these, Emily?
Emily M. Bender: I don't know if I can, I can zoom it a little bit.
Can I zoom it more? There we go. Okay. Um, uh, alo goclut offsif (gibberish)
more-- like it's, it's, it's garbage.
Alex Hanna: Yeah.
Emily M. Bender: (gibberish) and they're not even really letters, like they're, I'm, I'm, I'm sort of, I'm helping too much there. And then the, the, uh, bones below the knees look like toes, maybe?
Alex Hanna: Well, yeah, and there's no bones in the fingers. And I think that's where the, the flip, the flippers come from.
And also I'm told by Homsar here that the J.D. is J.D. Vance, with the, uh, you know, the, um, the meme of the, uh, the dolphin.
Emily M. Bender: Oh, the dolphin.
Alex Hanna: The dolphin fetishization. Yeah.
Emily M. Bender: Yeah. Yeah. And this, so this thing you called an orange, I think people have been calling it a grapefruit too. And it's, uh, I don't know what it's supposed to be, and it's just so sparkly, like it looks like a cross between sort of, um, science fiction for kids and then, like, princess stuff for kids, like sparkly, blingy.
It's yeah. (laughter)
Alex Hanna: It's giving real, like, Dianetics slash anime, something of that nature.
Emily M. Bender: Yeah, this is, oh, SJayLett in the chat says, "Are they tasting shots of blood in the, um, on the right hand side here? These look like little tasting glasses."
Alex Hanna: It looks mixed. It does look mixed with the, uh, grapefruit though.
Emily M. Bender: Yeah. Oh, and Abstract Tesseract: "I can't begin to render this, but we have in your... musical note emojis surrounding the keyboard smash connected to the keyboard smash."
Alex Hanna: Yeah. Incredible. Uh, well, that'll be, hey, that will be on the Rat Ballz EP. Um, you know, it's the bone song for nonsense words.
Emily M. Bender: We'll have to figure out how to sing it more than keyboard smash.
Yeah. All right. This was fun. Um, that is all for this week. Sam Cole is a journalist and co founder of 404 Media. Thank you so much, Sam.
Samantha Cole: Thank you so much for having me. This was super fun.
Alex Hanna: It was such a pleasure. Our theme song was by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Christie Taylor.
And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify and by donating to DAIR at DAIR-Institute.org. That's D A I R hyphen institute dot O R G.
Emily M. Bender: Find us and all our past episodes on PeerTube and wherever you get your podcasts.
You can watch and comment on the show when it's happening live on our Twitch stream. That's twitch.tv/DAIR_institute. Again that's D A I R underscore institute. I'm Emily M. Bender.
Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.