Mystery AI Hype Theater 3000
Episode 38: Deflating Zoom's 'Digital Twin,' July 29, 2024
Could this meeting have been an e-mail that you didn't even have to read? Emily and Alex are tearing into the lofty ambitions of Zoom CEO Eric Yuan, who claims the future is an LLM-powered 'digital twin' that can attend meetings in your stead, make decisions for you, and even be tuned to different parameters with just the click of a button.
References:
The CEO of Zoom wants AI clones in meetings
All-knowing machines are a fantasy
A reminder of some things chatbots are not good for
Medical science shouldn't platform automating end-of-life care
The grimy residue of the AI bubble
On the phenomenon of bullshit jobs: a work rant
Fresh AI Hell:
LA schools' ed tech chatbot misusing student data
AI "teaching assistants" at Morehouse
"Diet-monitoring AI tracks your each and every spoonful"
A teacher's perspective on dealing with students who "asked ChatGPT"
Using a chatbot to negotiate lower prices
You can check out future livestreams at https://twitch.tv/DAIR_Institute.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Twitter: https://twitter.com/EmilyMBender
- Mastodon: https://dair-community.social/@EmilyMBender
- Bluesky: https://bsky.app/profile/emilymbender.bsky.social
Alex
- Twitter: https://twitter.com/alexhanna
- Mastodon: https://dair-community.social/@alex
- Bluesky: https://bsky.app/profile/alexhanna.bsky.social
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.
Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come.
I'm Emily M. Bender, Professor of Linguistics at the University of Washington.
Alex Hanna: And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is Episode 38, which we're recording on July 29th of 2024. How would you like to yeet all of your Zoom meetings into the sun? While use of the online video conferencing platform grew widely at the height of COVID's stay-at-home era, the fatigue of staring at online meetings all day is real. But Zoom's CEO has a solution for us.
What if an AI agent could do all your Zoom meetings for you?
Emily M. Bender: In an interview with The Verge in June, Zoom CEO and founder Eric Yuan revealed that this is his exact vision, or at least it is now that tech companies are all scrambling to find some cool new use of generative AI that might actually drive profits. On top of being your digital avatar in meetings, he says a so-called 'digital twin' might also read and reply to your emails. As in, this meeting could have been an email that I didn't even have to read. All in the name of freeing you up to do more creative work, spend time with your family, and yes, maybe get us to a four day work week.
Alex Hanna: But we regret to inform you, as you might have suspected, that these lofty dreams are, in fact, teetering high atop Bullshit Mountain.
So, let's get into it.
Emily M. Bender: So before I share this lovely artifact, I want to explain that strange word that was in our livestream promos. Um, that's a word that I learned when I was spending some time in Denmark, um, in 2014. And it's pronounced, I should have picked the Norwegian version of it because Danish is like impossible, but it's pronounced something like agurketid, and it means cucumber time.
And the idea here is that this time of year, uh, in the Nordic countries, the--sorry, that's the cat scratching on things. Maybe we'll see him. This time of year in the Nordic countries, um, the, everyone's off work, including like the policymakers. And so nothing's happening. And so the newsrooms are like scrambling for something to cover.
And so they end up covering really silly stories. And so what we have here is a really silly story. Like this is, you know, there's real issues in here, but this is sort of lightweight and fun, I think for us. And it's not that there aren't terrible things going on in the world, right? Um, but, uh, there is, I think, some value in having some fun with the silly kind of hype.
So that's what we're doing here. You want to start us off, Alex?
Alex Hanna: Yeah. And I want to, I want to try to make a really strenuous or excuse me, not strenuous, but tenuous connection between "cucumber time," and the kind of color of the article, which is this, as you said, uh, off stream, Emily, uh kind of a chartreuse color.
I said it was brat adjacent. Um, but it maybe is kind of a rotted moldy spots on the cucumber. Anyways, so this is a podcast that is, uh, Nilay Patel, uh, who's editor-in-chief of The Verge, um, host of the Decoder podcast and a co-host of the VergeCast, and this is a transcript of a podcast he did with, um, the CEO, Eric Yuan of Zoom.
The title being, "The CEO of Zoom wants AI clones in meetings." So you could imagine--and the subhead says, "Zoom founder Eric Yuan has big ambitions in enterprise software, including letting your AI-powered digital twins attend meetings for you." So you know just from the title alone and the subhead, it's going to be a, it's going to be a knee slapper for sure.
Emily M. Bender: We're going to have some fun. So I think we can skip over the little intro that Nilay Patel wrote and just get into the part where it's the transcript. So the transcript has been lightly edited for length and clarity. Um, and just to our listeners, we're going to be reading bits of this transcript, but we might screw things up a bit.
So if you want the actual text, go to the source and you could listen to it as audio, which Alex, I understand you did today.
Alex Hanna: Yeah, I listened to, I read half of it and then I listened to half of it. I have to say, I appreciate the reading it because when I was listening to it, I was a little more rage prone, yelling in my car.
Yeah. So let's get into it. So--
Emily M. Bender: Let's get into it. Yeah.
Alex Hanna: Yeah. You want to go Emily?
Emily M. Bender: Um, so I'm just trying to figure out where is a good place to start. This is like you, you're the CEO of Zoom blah, blah, blah. But I want to get down to where they, um, start talking about what's happening now. Um, and--
Alex Hanna: yeah, I, well, there's a few things he said, so actually a little bit here.
Yeah. He says, so Nilay says, "When you, when you think about the elevator pitch for Zoom, you, you had the founder and you had to go raise money once upon a time. In the beginning, it was very simple, right? Video conferencing is very hard. It requires some dedicated hardware and expensive connections. And Zoom is going to be a, uh, be as simple to use as a consumer, uh, consumer app. It's video conferencing, but simple. What's the elevator pitch now?"
And then, he's saying, you know, we have Zoom but now we're expanding. Um, and they kind of want to move into enterprise business suite software. And scroll down a little bit. And then they say, "Today, for this session, I don't need to, I don't need to join. I can send a digital version of myself to join. So I can go to the beach. Or I don't, or I do not need to check my emails. The digital version of myself can read most of the emails. Maybe one or two emails will tell me, 'Eric, it's hard for the digital version to reply. Can you do that?' Again, today we spend a lot of time either making phone calls, joining meetings, sending emails, deleting some spam emails, and replying to some text messages. Still very busy. How do we leverage AI? How do we leverage Zoom Workplace to fully automate that kind of work? That's something that is very important for us." (laughter)
And I, yeah, I mean, like, and I just from the jump, you're just like, okay. A digital version of yourself, like, and it's just, to me, I mean, I want to get, I'm going to get into this a little more, but the kind of, you know, like people tend not to like meetings, they tend not to like email, there's still ways in which email and meetings are the best way to facilitate a lot of communication.
And I think especially like there is a dogma in Silicon Valley that emails and meetings are just anathema to the real work, the real work I need to be doing. And I'm surprised that, uh, you know, um Eric Yuan isn't saying this here, like, 'I can get into real work. I can, I can code. I can like dig into the code.' He's like, no, I can go to the beach. So, you know, this, he's really into this kind of fully automated luxury communism, I guess.
And so, but he's like, but the thing is, like, meetings are not just like that. They're just, you know, they're--part of having meetings is about, I mean, if you're doing meetings right, I mean, there's some mentorship happening. There's some accountability being held for the decision makers in a way that the, the meetings that do that, I find in Silicon Valley are very, a very feminized type of labor. Um, and you know, he doesn't mention this, but it's kind of an undercurrent. He mentions his assistant below, we'll read that, who's like an admin, and you know, it's like, my admin, she just does that, you know. And I'm just like, hmm, okay, I already see this kind of like current of like feminization and like the, the accountability work, which is just seen as like not real work to like these people.
That's, that's what's going to get automated.
Emily M. Bender: Yeah, yeah. And the, the idea that the, in meetings, like you say, there's mentoring, there's decision making, there's accountability, and there's also like a chance to sort of feel people out and understand how, how they react to ideas. So that if you're trying to do some decision making, there's things that people might not be comfortable putting in an email, or there's the hesitations in how they're speaking that would allow you to be like, oh, hold on, I need to talk to this person a little bit more because they're not actually on board.
I need to understand what their hesitation is. And there's another thing we'll get to down below where they talk about the current AI features in Zoom is this thing where we'll automatically summarize meetings and I'm like, it's not catching any of that.
Alex Hanna: Yeah. Yeah.
Emily M. Bender: Absolutely not.
Alex Hanna: Absolutely. Yeah. So it's just, it's just really.
Yeah. So, I mean, if you could, I want to like, yeah, so like scrolling down the next thing he says, "We have a big audience of product managers, engineers, and designers." And, "I think what you're saying is they're going to send AI avatars to their standups every morning." And he responds, and that's, that's, that's Nilay, that's the interviewer.
"More than that. It's not only for meetings, even for my emails. I truly hate reading email every morning. And ideally it, my AI version for myself, reads most of my emails. We are not there yet." Um, and then, and then he continues, the interviewer says, "There's a, this is a huge vision that a lot of what you do in the workplace is busy work or status check ins or non decision oriented conversations, and you can automate that uh in some way or send a version of yourself to communicate, quickly communicate, whatever you need to communicate. You can go through your email in an automated way. This is a big dream. I can't assume a lot. I want to talk about how to get there and what you're building in Zoom, blah, blah, blah. Um, but for the average person is AI doing all that stuff?"
Emily M. Bender: So could I read the response here?
Alex Hanna: Yeah. Yeah.
Emily M. Bender: So the CEO says, "I think for now, the number one thing is AI is not there yet, and that still will take some time. Let's assume, fast forward five or six years, that AI is ready. AI can probably help maybe 90 percent of the work, but in terms of real time interaction today, you and I are talking online, so I can send my digital version. You can send your digital version. Again, not like an in person meeting. If I stop by your office, let's say I give you a hug. You shake my hand, right? I think AI cannot replace that." Okay, good. There's limits. We still need to have in person interaction so I can put my hands on you. Sorry. I'm editorializing as I go.
Alex Hanna: That's a fine editorialization.
Emily M. Bender: "That is very important. Say you and I are sitting together in a local Starbucks." Product placement. "And we are having a very intimate conversation. AI cannot do that either." So he's drawing this contrast between the meetings that are pointless that he wants to get out of, and some interactions that he feels are important, and they involve touching and Starbucks and intimacy.
Alex Hanna: Yeah, there's, they've got something like that. There's, they've got some kind of familiarization and he does talk kind of below about like the need for some kind of in person things. And I'm like, yeah sure. But it's, it's really not clear what the kind of thing that he's really trying to automate is, and I mean, and I guess it's, it's really, you know, the kind of idea of kind of the meeting Valley the valley as being very, I don't want to even say transactional, but very misguided, you know, it's like, okay, it's sort of like, I am informed.
You know, there's this framework that, um, that they use and in some, uh, some project management was, which is like the RACI framework. So the responsible, the accountable, the consulted and informed. And it seems like it's just like you have 200 people on a call and you know, you're informed, okay, you don't really need to be there, maybe. But I mean, there's a reason you're probably still there. Um, and so, you know, it's just, I think there's a kind of a real disconnect about what the kind of things that happens in meetings.
There's a great comment in the chat by SJayLett who says, um, "I can believe that the CEO of Zoom has had very few, if any productive meetings, possibly in years." And then another comment from Abstract Tesseract, who says, "It's like he read David Graeber's 'Bullshit Jobs' and fundamentally missed the point." And so, uh, for those, so for those who haven't read, you know, David Graeber's Bullshit Jobs, you know, there's a longer book on it.
Um, but, uh, there, there's an essay and we can post it in show notes, but the kind of idea is like, you know, a lot of the work is sort of fundamentally busy work, uh, you know, and we probably don't necessarily need those jobs, but it sounds like what he kind of wants there to be is those jobs, but just attended by AI avatars, which seems to yeah incredibly miss the point.
Emily M. Bender: Yeah. And there's way more and we'll get there, but I have to also lift up this rage inducing thing that Ragin' Reptar reports in the chat. They say, "My CEO said he wanted to end remote work because he wanted to hug employees."
Alex Hanna: Oyoyoy. Oof. Yeah. Great. Maybe shouldn't say that. I don't know.
Emily M. Bender: Ugh. Okay. So, um, we are now getting into, so this, this, this journalist is not asking all the great questions, but is on it for a couple of things.
And, and, and he's clearly like, what the hell with this, um, AI avatar thing. So he says, "The heart of Zoom is still the video conferencing product. That's how I think of it. And that's how most people think of it. Are you saying that Zoom, the video conferencing product will have AI avatars in it mostly, and it'll push us into more in person meetings?"
And, uh, Eric Yuan says, "I think two things. First of all, we are way more than just a video conferencing business. We have a lot of other new capabilities and essentially that's the entire Workplace platform." Workplace is capitalized. "It's a collaboration platform. That's one thing. But if you look at just video conferencing itself, I think we can leverage AI more and more. You don't need to spend so much time in meetings. You do not have to have five or six Zoom calls every day. You can leverage the AI to do that. You and I can have more time to have more in person interactions, but maybe not for work. Maybe for something else. Why do we need to work five days a week? Down the road, four days or three days? Why not spend more time with your family? Why not focus on some more creative things? Giving back your time, giving back to the community and society to help others, right? Today, the reason why we cannot do this is because every day is busy, five days a week. It's boring."
Alex Hanna: This is, this is really, This is really great.
This is, this is really good stuff. This is the real, this is the real like, okay, yeah. Sure. I'm up for a four day work week, a three day work week. Like are we, you know, and this is where I, um, you know, if maybe the wind blew in the right way, you know, you know, Eric Yuan would have been really on board with some, you know, French labor movement, uh, some French labor unions.
But now he's really kind of saying, okay, but you, we're still going to have that kind of busy work, but some way, some magic of the LLM is going to handle that. (laughter)
Emily M. Bender: Yeah. Okay. So. Uh, interviewer again, "So just to be clear, this is a great conversation already. You're saying Zoom is going to get us closer to the four day work week because AI will be taking Zoom calls?"
Eric Yuan says, "Absolutely. Not only Zoom calls, but also all other work we're doing every day. Chat and messaging, phone calls, emails, whiteboard, coding, creative tasks, manager tasks, project management, all of those things together with AI help and new applications, that's the direction. That's part of our Workplace platform. It's our 2.0 journey." (laughter)
It's just like all in.
Alex Hanna: It's all in. Yeah. And then Nilay, who, you know, like misses the mark at some points, the journalist and, but is actually quite funny at other points. So he says, "I have a million questions. Let me ask this one first. Just thinking about your own calendar, you're the CEO. You have a lot to do. I'm sure you're the busiest." And this is my editorializing, I'm sure he's not the busiest. Um, "What would you hand off to AI today, if the, if the AI was capable?"
And he says, "I think many things, first of all, I can tell you, I hate my calendar. Every morning when I look at my calendar, oh my God, there's so many things. Even before I start, I know today I have maybe nine or 10 meetings. In between, I need to check emails, read messages and make phone calls. I'm not happy when I look at that. Second is, how did I get here? Because most of the time my admin--" And here's the kind of femininity, the feminine work of sort of prioritizing coming in. "--She had scheduled some meetings. I occasionally also schedule something in my calendar as well. Again, every time either myself or my admin, when you schedule a meeting, it's not just 30 seconds of work. You need to coordinate to do so many things. That's the second thing. How do we get here? The third thing is you look at all these meetings and decide which meeting you have to join and which meeting is optional. You also do not--"
And then the journalist interrupts, "And I'm asking you, which meetings do you look at and think you would hand off?" And he responds, "I started with the problem first. And last but not least, after the meeting's over, let's say I'm very busy and missed the meeting. I really don't understand what happened. That's one thing. Another thing for a very important meeting I missed, given I'm the CEO, they're probably going to postpone a meeting. The reason why is I probably need to make a decision. Given that I'm not there, they can't move forward. So they have to reschedule. You look at all these problems." I'm getting really dramatic in reading this. I'm very sorry.
"Let's assume AI is there. AI can understand my entire calendar, understand the context. Say you and I have a meeting, just one click, and within five seconds, AI has already scheduled a meeting." And it's just really like, again, what the fuck do you think happens in meetings? Like, is it just that you have to say yes or no to a thing?
And like that, like--
Emily M. Bender: And here he's talking about scheduling the meeting as opposed to attending the meeting.
Alex Hanna: I, yeah.
Emily M. Bender: He's going back and forth, right?
Alex Hanna: I, he is. And he also says something like they have to postpone a meeting, and the reason why is 'I probably need to make a decision.' And it's sort of like, okay, are decision making things the only point in which, you know, one needs to sort of attend meetings or is it, or is it sort of like, it's not in the ideation. It's not the kind of like mentorship. It's not the sort of career guid--I like, there's just so many things that like, it's so much about sort of like an action item at the end of a meeting rather than the content of the meeting itself, you know?
Yeah.
Emily M. Bender: So Abstract Tesseract says, "It's called Zoom because that's the sound the CEO's monologue makes when he wooshes past the question."
Alex Hanna: Mm, mmhmm. That's a good one, that's a good one.
Emily M. Bender: All right, so I'll continue the dramatic reading here. "At the same time, every morning I wake up and AI will tell me, 'Eric, you have five meetings scheduled today. You do not need to join four of the five. You only need to join one. You can send a digital version of yourself.' For the one meeting I join, after the meeting is over, I can get all the summary and send it to the people who couldn't make it. I can make a better decision. Again, I can leverage the AI as my assistant and give me all kinds of input, just more than myself. That's the vision."
So, I want to skip down a little bit after this, but there's a, there's an entertaining sort of mix of things here. So he's talking about the work involved in scheduling meetings, which is what he imagines his assistant does all day long.
There is the decision making happening at meetings, and then there's the information conveyed in meetings.
And one piece of this that's puzzling to me is, well, let's say you have gotten out of a meeting by sending your digital avatar, but presumably you also need the information from that meeting if you are going to in person, like at your actual self, attend some meeting in the future, so you are constantly then reading summaries of meetings that you didn't attend?
Alex Hanna: Yeah, or like, what if you are not asking the necessary questions or probing sufficiently about like what's happening in the meeting, right? You're not actually going to get the information you need, even if it was just a transcript of the meeting. Um, you know, that's not even considering what the summation product does.
Um, and it's just, and it's so, and I do, before we skip down, I do, there's two things I want to point out. So first off, this AI tool has to have some kind of metric to talk about what he needs to join and what he doesn't. And so it says, 'Eric, you have five meetings scheduled today. You do not need to join four of the five. You only need to join one.' And I'm like, why does he need to not join? Like, what's the metric? Is it just that the, I mean, there's kind of--
Emily M. Bender: The AI decided.
Alex Hanna: The AI decided on what grounds?
Emily M. Bender: Well it's an AI. It knows.
Alex Hanna: Yeah, it just knows. And it just, I'm like, okay, great. Um, and I do want to, I do want to just read this next sentence, uh, these two sentences because I just think they're very funny.
So the interviewer asks, "I'm assuming when you looked at your calendar today and saw the Decoder session, you were going to come to that on your own. What would you have sent an AI avatar to instead?" And he says, "I think an AI avatar is essentially just an AI version of myself, right?" And then Nilay doesn't really investigate it.
He just says, "Sure."
And I'm just like, oh, so you could have sent the AI avatar to this interview, I guess, if it's supposed to be just, just an avatar, right?
Emily M. Bender: Yeah. Ugh. Okay. Um, let's, let's see. Okay. So I do want to get into this part. Um, so. Uh, the interviewer says, "How far away from that future do you think we are?" And Eric Yuan says, "I think in a few years we'll get there, but we're just at the beginning." And, you know, cue the drinking game that looks for all the, like, 'AI is in its infancy,' 'this is early days yet' tropes, right?
Uh, "The reason why is because of two problems. The first problem is today, look at the large language model itself. It just started. A lot of potential opportunities, but it's not there yet. Another thing we have to make sure is that you have a customized version. Essentially for every human being, you have to have your own version of LLM based on all the data, based on all the context around you. So you have your, your LLM. I have my LLM. I might have multiple versions of LLM. Sometimes I know I'm not good at negotiations. Sometimes I don't join a sales call with customers. I know my weakness before sending a digital version of myself. I know that weakness. I can modify the parameter a little bit."
Interviewer: "You think you would have a dial be like, be a better salesperson?" Yuan: "Exactly. For that meeting, I say, 'Hey, tune that parameter to have a better negotiation skills, send that version and join.'" (laughter)
Alex Hanna: This is just absolutely wild. And it's just, yeah, go ahead.
Emily M. Bender: So they, they later on talk about like having multiple different versions and like, are there limits on how many you can send out?
And it's like, okay, so this is both an AI version of you and it's different. And this idea that like, okay you're better at sales but I'm still going to trust all of the decisions it's making?
Alex Hanna: Yeah. Right.
Emily M. Bender: Just. (laughter)
Alex Hanna: Yeah. And it's just, and there's something in the chat. So Homsar315 first, first time chatter on the channel, hey! It says, "I wonder how human behavior would change in meetings with the presence of AI avatars." Yeah. Yeah. Like, no kidding. Like if I'm a customer or like, you know, another, uh, enterprise customer and you're sending your AI avi to sell me on your thing. Yeah, that's I'm, I'm going to go no, I want to, I, I, I want Eric Yuan--
Emily M. Bender: Representative.
Alex Hanna: I want, you know--
Emily M. Bender: I want to talk to a real representative.
Alex Hanna: It's the same, yeah, exactly. It's the same reason people hate talking to chatbots, or they hate talking to automated systems. Like, no one enjoys that. Like, what, what kind of universe do you have to live in to think that that's kind of an acceptable replacement?
And I also want to raise this.
Emily M. Bender: He just wants to go to the beach. He's, that's his universe.
Alex Hanna: I'm at the beach, yeah. Wise Woman For Real says a hilarious thing in the chat. Um, where, um, she says, "The AI really wanted to talk to a special AI and has thus suggested that Eric not attend that meeting. Heart." And if anyone wants to write fanfic where they talk about, um, AI suggesting not going to a meeting because there's going to be another AI avi there, I would love to see that.
Post that to Archive of Our Own. Uh, I would, I would read it.
Emily M. Bender: Yeah, that, that, that's A R C H I V E, not the other archive. (laughter)
Alex Hanna: Yeah, yes, right. Although. A R X I V of Our Own would be hilarious.
Emily M. Bender: Right, we were talking about that a couple episodes back.
Alex Hanna: I think we were, yeah. Eventually we just make all the same jokes.
Emily M. Bender: Uh, callbacks, right? Let's get some continuity to the podcast.
Alex Hanna: Totally.
Emily M. Bender: So, there's this stuff about like, the tech underlying the 3D avatars and stuff like that, that I'm sort of less interested in.
Alex Hanna: Yeah.
Emily M. Bender: This part. This part is rich. This, this is the part where I, when I was reading this the first time a few weeks ago, I'm like, nope, we can't just Fresh AI Hell this, this, this is a rich text.
It needs to be a whole episode.
So the interviewer says, "This is a lot of stacked up technology problems to solve, right? There's a realistic 3D avatar. There's an LLM that you might be able to tune with different parameters that you can trust. I think a lot of people don't trust LLMs today. They hallucinate a lot." 'They' being the LLMs there. "There's everybody in the world being culturally okay with talking to a digital avatar. That's a lot of problems. How is Zoom organized to solve those problems and get to this vision today?"
And Eric Yuan says, "Even a few years ago, we talked about the vision at Zoomtopia--" Which oh my God, no, that's their--
Alex Hanna: That's their user con-- oh, I thought that was a fake thing.
Emily M. Bender: Yeah. And also the further on down, he refers to the employees as 'Zoomies.'
Alex Hanna: Oh, I saw that. I actually think that's kind of cute. But also very cringe and-- (laughter)
Emily M. Bender: Yeah, I felt bad for anyone who had that word applied to them, reading it. Okay. So at the user conference, "imagine a world where you and I live in Silicon Valley." Is that not the actual world? But okay. "I live in San Jose. You are in San Francisco. We may not be in the same place. Whenever you and I have a call down the road, it'll feel like you and I are sitting together. I shake your hand and you feel my hand. I give you a hug and you feel my intimacy as well."
Again. Ugh. Ugh. I'll take Zoom calls with this guy. Thank you very much.
Uh, "Plus, even two people who speak a different language, the real time translation will also work extremely well." Right? Cause we're doing all of this like citations to the future, perfect technology. "And if you and I don't want to meet, I sent the digital version for myself and you'll have exactly the same conversation. I think that's the vision we painted a few years ago. But how to get there? I think two things. First of all, luckily, I think we've already started. Look at the industry. I think there are two technologies that are going to help us start that. One is AI. Another is AR. Vision Pro, the Meta Quest 3. It's just starting. Look at today and all the generative AI products. It's just started. I do not think those technologies are ready yet, but they will help us get there."
Alex Hanna: Yeah, and it's just, I mean, yeah, we've already seen how well all the AR and like the metaverse stuff is going and I'm just like, yeah.
Emily M. Bender: So the journalist asked about like, okay, you've laid off 2 percent of the workforce. Like, how are you restructuring to be able to do this?
And, um, what I want to get to is the part where they talk about where in the stack this stuff is being fixed. Oh, but before that, just have to be angry. There's a bit where Eric Yuan says, "However, given that ChatGPT was born last year."
Alex Hanna: Yeah. (laughter)
Emily M. Bender: First of all, no, ChatGPT was released in 2022. Last year was 2023, but also it was not born.
Alex Hanna: Mm-hmm. Yeah.
Emily M. Bender: Okay. Dedicated AI teams, but wait where's this stuff about the stack.
Um.
Alex Hanna: I think it's down when it's when the interviewer is talking about Moore's law and basically, you know, he provides this--so it's a little more down.
Emily M. Bender: Yeah. Okay. Yeah.
Alex Hanna: Keep on going. Yeah, keep on going.
Yeah. Here. So "a Moore's law-type approach." And then he says, and he's talking about solving the problem of hallucinations. So scroll a little bit down.
And he says, um, so he says, "When you talk, say--" So this is Patel. "When you say the hallucination problem will be solved. I'm thinking about this literally in terms of Moore's law. Because it's the closest parallel that I can think about how, uh, CEOs talk to me about AI." Which, first off, bizarre. I don't know why CEOs are talking about this stuff in terms of Moore's law. Seems weird. Um, "I don't think you spend a lot of time thinking about transistor density on chips. I'm thinking you don't just assume that Intel or NVIDIA or TSMC and all the rest will figure out how to increase transistor density on chips and Moore's law will come along," and et cetera.
And then he's like, you know, it's right. It's a stack.
And so then Patel goes, "So is this model hallucination problem down the stack? Are you investing in making sure that the rate of hallucinations goes down?"
And he's like, uh, it's just going to be fixed. And--
Emily M. Bender: It's someone down the stack.
Alex Hanna: And it's just like, Oh, okay.
And it's just so interesting with the notion, like even the kind of, um, I don't know what to call it. Like the epistemology of what a hallucination is, like how we know what a hallucination is. It just is fundamentally sort of a technical problem for Yuan, and surely Yuan is not the only one, but like the idea that, yes, it's just gonna get solved. I mean, it's the same thing Mustafa Suleyman has also said, like, yeah, we're gonna solve hallucinations in like two, two years.
As if it is not, like, fundamental, you know. And again, we don't call them hallucinations, we call them just bullshit. Like, bullshit is not going to escape the architecture of the transformer model, because that's what it does.
Emily M. Bender: Yeah, I want to, there's a part where he says, um, I don't see it right now, but, but Yuan says that, uh, you might need a new architecture to fix hallucinations. So to just back up a little bit, the interviewer says, "But I guess my question is, by who? Is it by you, or is it somewhere down the stack?" This is fixing hallucinations.
And, uh, Yuan says, "It's someone down the stack." And the interviewer says, "Okay." And then, (laughter) um, the, uh, Yuan says, "I think either from the chip level or from the LLM itself." Like, no, a different chip architecture is not going to change the fact that a language model is fundamentally a synthetic text extruding machine.
Like, this is not a hardware level issue. Um, so. Yuan goes on. "However, we are more at the application level. How to make sure to leverage the AI to improve the application experience, create some innovative feature set, and also at the same time, how to make sure to support customized, personalized LLMs as well. That is our work."
Alex Hanna: Yeah.
Emily M. Bender: It's like, you know. (laughter)
Alex Hanna: Yeah.
Emily M. Bender: So we're just going to make up technology that doesn't exist and then talk about how we're going to specialize that technology. Like it is so bananas.
Alex Hanna: Yeah. It's just really, the kind of indexing on a feature that's never going to come is, I mean, it's part and parcel of the AI industry, but you kind of see it, uh, especially in the enterprise space. I mean, you see it from Google, you see it from Microsoft, Amazon. And Zoom is, you know, they want in on that pie. Especially since they don't have the, um, whatever, the valuation and the capital to train their own models.
Emily M. Bender: It's, haaaa. (sighs)
Alex Hanna: Yeah, it's bad stuff on--
(crosstalk)
Emily M. Bender: Beyond that, that he's imagining--oh yeah. And Homesar315 says, "Bitcoin fixes this. We'll get there."
Alex Hanna: Oh yeah. We'll be there.
Emily M. Bender: We'll get there.
Alex Hanna: We'll be there.
Emily M. Bender: We have to be sure to get there. Um, but I wanna do a little bit on this part first. So, uh, Yuan is saying, "The future is really about personalized LLMs. I will have multiple variants of my LLMs. Every enterprise will also have their own LLM as well. That's the future. I do not think all of us will share the exact same LLM. That doesn't make sense. We can, but it doesn't make sense, because the reason why my LLM truly understands me is I believe that LLM can represent me anytime." So that makes no sense.
Alex Hanna: It's really--
Emily M. Bender: If you flip, if you flip the because it still doesn't make sense, but it fits with a trope. Right. So I think what he meant is, uh, the reason that I believe my LLM can represent me at any time is because my LLM truly understands me. It doesn't, and he shouldn't, um, but all right, but so, "But your LLM and my LLM will be very different in the future."
So you pointed out, Alex, that only a few organizations have the compute power to actually train these things.
Alex Hanna: Yeah.
Emily M. Bender: Right. Um, and we talked a lot about the environmental impacts of all that. So now we're going to have multiple ones of these for every person on the planet. Is that the idea?
Alex Hanna: Yeah, it's not clear.
And I mean, this connects to something he says a little down the way when he talks about data privacy. Um, so I don't, there's some stuff on anti, like antitrust, which we could try to, I think that's right here. We can sort of like skip, skip it, but there's something--
Emily M. Bender: Yeah. Should we get to the privacy part?
Alex Hanna: Yeah, there's, there's some important stuff on antitrust, mostly cause I think like if the shoe was on the other foot, he would not be you know, talking about antitrust. Anyways, but he's talking about data. Um, yeah.
So here it says, um, so, uh, Patel says, "When you think about growing that business, adding all these AI features into it, it feels like there's just a lot of reticence, particularly on privacy and security. We're going to put a lot of our data in these products somewhere. There'll be some training, who knows on what. Who knows if it's appropriate. And then there'll be an AI assistant. There have already been some scandals, I would say, and controversy around Zoom AI features, where the data is coming from. What's your approach today?"
What does that link to, "the scandals, I would say"? Is that some Verge reporting? I don't know what that actually goes to.
Emily M. Bender: Um, let's see. Yes.
Alex Hanna: Oh yeah. Yeah. Oh, this is, yeah, this is around the Zoom scandal where, you know, someone had sort of read the terms of service and said, hey, they're going to go ahead and use your data for training and whatever. And so, you know, in this case, he is shoring up some of that, um, some of that controversy. So, you know--
Emily M. Bender: I think we talked about this when it was happening. So--
Alex Hanna: We did. Yeah, we did.
Emily M. Bender: August of last year.
Alex Hanna: Yeah. And so, yeah, Yuan goes, you know, "When it comes to AI, you have to be responsible and accountable," which is just the, you know, the very common, uh, boilerplate refrain from every executive.
You talk about responsibility, responsible AI, whatever, um, that means nothing at this point. Uh, it didn't mean anything when it came out. Um.
Emily M. Bender: Yeah.
Alex Hanna: Because let me tell you, being at Google during that time, it really meant nothing. And so he says, "Last year we made a commitment. Uh, we do not use any of our customers' data to train our own large language model, as well as third party large language models. We already made that commitment. However, AI is a new area. You also need to educate the consumer on what that means in case consumers do not understand. They thought, oh, we might be using the data. For some consumers, they want to opt in and help us tune the data. But no, that's not the case. Even to the consumers that want to opt in, we say, no, we do not need that."
So this is just very ridiculous. I mean, there's two things here. Originally, I wanted to point out the kind of idea of, okay, if you have this idea of personalized LLMs, you need to use user data to some degree. Like, you can't say you don't do any training, even if it's fine-tuning existing models. Where do you think this quote unquote "avatar" that's supposed to be a digital mirror for oneself would come from? Um, even if that was at all possible.
And the second thing that I just find to be terrible is 'you need to educate the consumer on what that means.' Consumers, you're stupid. You don't actually know. We're not going to use any of your data. Okay, but you say you're going to have a digital copy of myself. How would you not use the data?
Emily M. Bender: Yeah, that's--and I don't think he knows. Like, I don't think he understands what he's talking about. But we gotta get to the Bitcoin thing. And then we gotta get to the infuriating stuff at the end. Yeah. So, um, let's see, uh.
Alex Hanna: Oh, the Bitcoin thing is here.
And it's, uh, he's talking about the deepfake stuff and then, uh, yeah, yeah.
Emily M. Bender: So let's, all right. So, so the journalist says, "Do you imagine this is all happening in Zoom's data center? So I log into my Zoom account, I've got my engineer digital twin and my designer digital twin and whoever else and I'm saying, 'All right, go off, go do stuff,' in the Zoom interface. Or do I own these and I'm connecting to Zoom?"
And Yuan replies, "I think the interface is Zoom's interface. However, to manage that is very different. That's the reason why I like crypto technology. It's more like fully distributed. I do not think you can store the digital twin of yourself to our server. You will store somewhere you feel very safe, like maybe on the edge, on your phone, desktop, or maybe somewhere you trust more like where you store your Bitcoin." (laughter)
Alex Hanna: Yeah, we had a little laugh about this.
Emily M. Bender: Where I store my what now?
Alex Hanna: You know, it's, yeah, as if the common person is holding Bitcoin. I mean, and there is something later in the article, which we'll get to, where Patel's like, 'Don't you think Silicon Valley is kind of out of touch with all--'
Yeah. Yeah, where you store your Bitcoin? Who's keeping Bitcoin? So, I mean, it's just, you know, this kind of idea, I mean, you're almost hitting the, um, you know, the trifecta of bullshit here between crypto and AI. We just need some NFTs here somewhere.
Emily M. Bender: But we've got, we've got the metaverse stuff going on with the VR part, right?
So actually all three are in here.
Alex Hanna: Oh, that's true.
Emily M. Bender: Okay. Um, so he's, let's see, um, is this the Zoom fatigue thing? Um, is that where we're ready to talk about the, um, no, okay. So they're talking about how important it is to be in person. They talk about dogfooding their own product. What do you know about the history of that phrase dogfooding their own product?
Alex Hanna: I was wondering it, because it's a term that's used throughout the industry. It's a term that was used when I was at Google. Um, And I had never come across it. I'm really curious if anyone has a link to a social history of the term dogfooding, get in the comments, stealing the Brennan Lee Mulligan phrase, and let us know.
Emily M. Bender: So the, the only thing that I know about is that it, it comes from eat your own dog food, right? And then that turns into dogfooding and eat your own dog food is supposed to refer to, um, whatever products you're putting out into the world are also being used internally in the company so that you're not like telling your customers to use something that you wouldn't use yourself. But why dog food?
Alex Hanna: Right. That's the question. I mean, is it because dog food is like gross or you're, you are dogs or why--
Emily M. Bender: Customers are dogs and we shouldn't feed them something we wouldn't eat, or I don't know. Like.
Alex Hanna: Yeah, Abstract Tesseract says, "The proof is in the dog food." Yeah. Like why is it not, why is it not pudding or pizza or, you know, sushi or something a little more appeti--is it? Yeah. So I'm, if anyone knows the social history of that term, uh, I'd love to know it. Um, it's gotta be out there someplace. Someone's holding.
Emily M. Bender: Yeah. Yeah. All right. Okay. So, um, here's the thing you were talking about before. Yeah. So you want to read or do you want me to read?
Alex Hanna: Sure, sure. So, so Patel says, "One of the things I've been thinking about as we cover AI here at The Verge is that there's a gap in enthusiasm for AI between the companies, which are all investing in it like crazy and building it and rolling out the features, and people. Our audience is very skeptical and in some cases very angry about AI." And this is me editorializing, I find this to be very funny because The Verge actually has a very technical audience.
Uh, "I don't know that the Valley has perceived that gap. I'll give you an example. I think the Scarlett Johansson story with OpenAI is a clear example of that gap. The Apple ad where they crushed everything in the iPad, which had nothing to do with AI. People were mad at it because they're just mad at AI, right?" Okay. And first off--
Emily M. Bender: We're going to stop and talk about that, yeah.
Alex Hanna: I want to stop there. This is where I did yell in the car, you know, cause I was like, okay. No, people were mad about that cause you're smushing, you're smashing physical media and flattening it into a digital interface. It's not just, I mean, there's an AI component of that, but like, yeah, you're decimating arts for a monoculture.
Emily M. Bender: Exactly. And it was such an on-the-nose depiction of what's going on with the, like, so-called art generating systems that of course people were mad. And they were mad, yes, about seeing those, you know, like an entire piano and a guitar and all these things being crushed, but also mad because of what's happening with this large-scale theft of art in order to create this, like, just really, you know, monoculture, as you say, awful stuff. Agh, so yes, people were mad at it because it so clearly depicted what we're mad about, or one of the things we're mad about, about AI.
Alex Hanna: Yeah. And this is just infuriating. So then he continues, "That's a clear example of that gap. How do you think about that? Because it feels like we're at a point now where it's really time to wrestle with how regular people feel about the technology and how the industry feels about the technology."
And, you know, like, you know, all things with the iPad example, you know, well withheld, you know, I'm glad he said that.
Um, Yuan responds, "That's a great question. The way I look at it is this, is that for any new technology, in particular for revolutionary technology, it's human nature. You have to focus on those early adopters, improve the technology to be better and better every day until it becomes a mainstream service. On day one, if you're a new technology, you cannot assume every ordinary user or families or companies are going to embrace that. If that's your assumption, I think that's not right. You have to focus on early adopters."
And I'm just like, first off, is that true? Second, like, maybe for some things, but at the same time, I cannot see this notion of a breakthrough moment for AI tools. Like, there is this vision, and I've seen that, um, I think it's the, like, hockey stick adoption kind of thing. And, you know, there's some writing now about the AI bubble. I wrote about this in a newsletter, um, in a post called 'The grimy residue of the AI bubble.' It was sort of like, okay, we're getting over the hype cycle and then there's going to be kind of regular use cases. But it just seems like those regular use cases, it's just that that's not even the thing anyone wants.
Like, I don't see that reflected in this. Like, there's something uniquely kind of awful about this that induces, I think, a very rightfully knee-jerk reaction in people.
Emily M. Bender: Yeah. And I think that the, the idea of the early adopters being your sort of first customers and the ones you design for in this context means you're designing for the people who want to do all the awful things with this.
So you're designing for you know, the doctors who published that terrible article in JAMA about using AI to speak, to give voice to the voiceless by recording their voice in all of their interactions with their doctor, and like using AI to tell the doctor what kind of end of life decisions this person would have made so that no actual person in the room has to be accountable anymore, right?
So those are the kinds of use cases that are going to be the early adopters. And it helps them, and this is what was going on in, again, see the newsletter I posted about this this morning. Um, in the past, for those listening to this as a podcast. When we tell people, 'Oh, this is going to become something that everybody uses, we just haven't hit the other end of that hockey stick,' um, then it basically gives cover to the people who are saying this is inevitable. It's coming. We all just have to live with it.
Alex Hanna: Yeah.
Emily M. Bender: And no, like. We could refuse.
Alex Hanna: We can totally, we can totally refuse. Yeah.
Emily M. Bender: So Scantron in the chat has Wikipedia information about dogfooding.
Um, so "In 2006, Warren Harrison, the editor-in-chief of IEEE Software, recounted that in 1970s television advertisements for Alpo dog food, spokesperson and actor Lorne Greene pointed out that he fed Alpo to his own dogs. Another possible origin, he remembered, was that the president of Kal Kan Pet Food was said to eat a can of his dog food at annual shareholders' meetings."
Alex Hanna: I see. Well, I guess, I mean, that's interesting. And there may be different social histories to this, like, you know, there's other histories to, um, the Lena article, or the Lena image. So there might be some alternative histories, but why was it dog food?
Why was it, why wasn't it the hair club for men? Remember, do we recall the hair club for men where it was, hey, I'm not just the president. I'm a member. Why don't we hair club for men our own product. I don't know.
Emily M. Bender: Exactly. And the first one of these is Lorne Greene pointed out that he fed Alpo to his own dogs. So basically the employees and the customers are both dogs is what's going on here.
Alex Hanna: It's Lor--wait, is Lorne Greene a dog? Have I been watching Bonanza wrong all these years? That is the, that is the Bonanza guy, right?
Emily M. Bender: I have no idea.
Alex Hanna: Yes, it is. It is. It is Pa. I know that. My dad, my dad loved Bonanza. That's the only reason I know this.
Emily M. Bender: Uh, okay. Um, so is there anything else? Oh, I wanted to get to the bit about the internet being born, because that was just kind of hilarious. Yeah. Um, and I'm finding it by searching. So, um, this is the part where the journalist asks, uh, "How are you rolling that out? Are you saying to everyone, 'Look, we've got to do this'? Are you saying, 'Hey, the market is demanding this'? Where is that pressure coming from to embrace AI through the product?"
And Yuan says, "I think two things. Internally, we have to closely monitor technology. When we played around with ChatGPT early last year, I said, 'wow, that's huge,' like in 1995, when the internet was born and everyone realized, wow, there's huge potential. You have to embrace that."
And so. Alex, when was the internet born?
Alex Hanna: The internet was born way before 1995. I mean, if you're talking about ARPANET and you're talking about all that, yeah, the internet is much older. I mean, 1995 doesn't even mark the first commercial internet. I'm pretty sure CompuServe and Prodigy were around before 1995.
Emily M. Bender: Right, and it also doesn't mark the start of the web, which I think is what he's aiming for.
Alex Hanna: Yeah.
Emily M. Bender: But that's also a couple years earlier than that.
Alex Hanna: Right, yeah. So if you're talking about that, I mean, the thing that's interesting about this comparison, I think where this fails, is that in some ways, the internet, you know, ARPANET has these kind of early military slash university roots that then sort of get, not subverted, but used by hacker types.
Um, and then used for some kind of interesting counterculture purposes. But then, like, there's not really a way for that with AI. AI is sort of, like, only born centralized. I don't know. I'm not willing to, like, bet the farm on this analogy. But it's very, uh, it's very tenuous to say the least.
Emily M. Bender: Yeah, exactly. And it sounds very much like the CEO saying, looking back into the past, I can see a moment where if you were an early actor, you made a lot of money. So therefore I am going to assume that I have to be an early actor in this moment too.
Alex Hanna: Yeah.
Emily M. Bender: Yeah.
Alex Hanna: All right. Well, I'm ready for, uh, I'm ready to close.
Emily M. Bender: Ready to improv?
Alex Hanna: I'm ready to close out of the Zoom meeting and give you a firm handshake by--I'm sorry. This is the worst transition. Yeah. Let's go into improv.
Emily M. Bender: Okay. So are we doing musical or non musical today?
Alex Hanna: I think I can music today. I think I can. Yeah.
Emily M. Bender: Um, and what genre do you want?
Alex Hanna: Hey, you know, I, I mentioned 'Brat' at the top of the program. So I think we have to go with like techno pop, electronic dance music.
Emily M. Bender: Okay. So, uh, Zoom has deployed its AI avatar nonsense to its main customer, which is the IT folks in AI Hell. And you are, as usual, a demon in AI Hell who shows up to a Zoom meeting and works out that every single other one of the things is an avatar.
And so, because you're just talking to avatars, you start singing to them in the style of techno or techno pop.
Alex Hanna: Oh gosh. There's so many layers to this.
Emily M. Bender: As always.
Alex Hanna: It's like, is anyone else there? Is anyone else there? (beatboxing) Is anyone there? Is anyone there? Is anyone there? There, there, there, there.
And, you know, it just repeats ad nauseum. Anyways, uh, you have my explicit consent to remix that and put that in your next club track. (laughter)
Emily M. Bender: I love it. I also look forward to the day when we have the supercut of all of the improv segues. But the reason we were doing that was because--oh hold on, while you were singing I should have been fixing this. I want to make sure I got them in the right order and I don't display the wrong thing first, um, so let me just get to the right window, um, and yes, take us to the first one.
Okay, now I can share. Is anyone there? Is anyone there?
Alex Hanna: Is anyone there, there, there.
Emily M. Bender: Abstract Tesseract says, "This is a bop, to be honest."
Alex Hanna: Yeah.
Emily M. Bender: And Bunny Hero is giving us a little techno dancing cat, which is wonderful.
Alex Hanna: Yeah.
Emily M. Bender: Okay, so, uh, in a publication called The 74, we have a piece--uh, I don't see the journalist's name. Uh, Mark Keierleber? July 1st, 2024. Uh, and the headline is, "Whistleblower: LA schools' chatbot misused student data as tech co crumbled." Subtitle: "AllHere, ed tech startup hired to build LAUSD's lauded chatbot 'Ed,' played fast and loose with sensitive records, ex-software engineer alleges." Who could have guessed?
Alex Hanna: Yeah, just nightmare scenarios. I mean, it's the kind of--I was talking last week with Adrienne Williams, who is a fellow at DAIR and, you know, she used to be a charter school teacher. And, uh, we've been talking a lot about student data and the kind of, different things that happen with student data, but there's just so little, you know, regulation. I mean, you kind of have FERPA, that only gets you so far, especially if, you know, these districts are setting contracts with third parties and basically no procurement guidance.
So education is just this thing where so much student data is being thrown around with really little oversight.
Emily M. Bender: Yeah. And it's just like, you know that's going to happen, right? The data as, you know, a sort of toxic pile is too tempting. And, yeah. So, and of course, the other background to the story is that that startup imploded too.
So, LA school district put six million dollars into creating this, uh, thing called Ed. Um, and not only are they getting nothing for it, but they're also getting violation of student privacy.
Alex Hanna: Hmm.
Emily M. Bender: Um, all right. Um.
Alex Hanna: I see the comments about the, about the, uh, the jitteriness of the stream. Sorry about that. Yeah.
Our producer's looking into that. Hopefully it'll clear up soon.
Emily M. Bender: Yeah. So still on the education beat. This is from July 4th, 2024, by Wilborn P. Nobles III in Axios Atlanta. Headline, "Morehouse to use AI teaching assistants this fall." Uh, and this is pretty short, right? Yeah. So, uh, "Why it matters: Morehouse professor, uh, Muhsinah Morris, says every professor will have an AI assistant in three to five years. Technology has taken off in the last 24 months faster than it has in the last 24 years." Number go up. "Meanwhile, baby boomers are leaving the workforce amid national teacher shortages and burnout." Uh, hey, guess what? There's a couple other younger generations who are looking for jobs.
Alex Hanna: Yeah. Absolutely.
Emily M. Bender: Um, all right. So, "How it works: Morehouse professors will collaborate with technology partner Victory XR to create virtual 3D spatial avatars. The avatars use OpenAI--" of course, "--to have two way oral conversations with the students." Are the students allowed to opt out of this?
Alex Hanna: And you gotta describe, there's, there's kind of an example on one of these avatars above.
Um, yeah, so it's got this kind of janky looking avatar, um.
Emily M. Bender: Looks like a metaverse person they sort of stuck some pants on.
Alex Hanna: I gotta say, I gotta say that, like, if you're gonna put legs on a metaverse, those legs look like super jacked. So, like, I hope if I'm in the metaverse, I just have quads of steel like this.
(laughter)
Emily M. Bender: Yeah, alright, so this is just such a bad idea, right? Like, it's gonna give incorrect information, it's gonna be massive privacy violations, it's gonna displace the, you know, students who are hired to be TAs, like, it's just bad on every front. Onto the next bad idea. You want to do the reading of this one?
Alex Hanna: Sure. So this is from New Scientist. Um, and the title is, "Diet Monitoring AI Tracks Your Each and Every Spoonful. An AI that watches you while you eat can estimate how much you're consuming and could help people track their calorie intake." The journalist is Matthew Sparkes. The date, June 4th, 2024. Just really disordered eating in one article.
Just absolutely terrible. Um, yeah. I don't think we have to read it. It looks like it's paywalled. But like, yeah.
Emily M. Bender: But this part's relevant. So, "Using AI to measure these facets of a meal isn't a new idea, with previous models able to take an image of food on a plate and provide an estimate." So what's different here is that now it's watching you eat.
It's like counting stuff as it goes into your mouth. Like, why? Like, disordered eating and surveillance and just no.
Alex Hanna: Yeah. There's no need for those. Yes. Uh, the next one is a tweet, um, so this is a thread from a teacher: "AI in the classroom is beyond frustrating. I also find it discouraging and more than a little terrifying."
CC, uh, Justine Bateman, who is a, um, I think a writer, a Hollywood writer. So this is from, I don't know, a platform, I'm assuming Twitter. Or, actually, it might be Mastodon. It looks like Mastodon. Um, so the handle is "Still Orange Crushed." So, this person says, "Weird interaction with a student this week. They kept coming up with weird, quote, 'facts,' in parenthesis, 'Greek is actually a combination of four other languages,' uh, end quote, that left me baffled. I said, let's look this stuff up together, and they said, okay. They open up a search bar and they opened dot, dot, dot, Ch*tGPT." And it's got like a star to be censored.
Um, and then the rest of the thread, uh, they say, "I was like, this isn't a search bar. And they were like, yes, it is. You can search for anything in here. The thing that made me feel crazy is, like, every kid that's using this as a browser is getting new bespoke false facts. This isn't a widespread misconception about X that stems from how it's taught in schools. It's just each individual kid is now hooked into a nonsense machine. With a widespread misconception about X, you can start at a baseline. Like, okay, in 10th grade, we can all talk about X, a thing from history, and it leaves us with some misguided conceptions about X. But we can correct that as soon as we get broader understandings of the world. But with this, each child is getting unique wrong facts they are sure are correct, because they did what we told them to do. They, quote, looked it up. They got it from somewhere. It's not a kid, uh, making a belief on hearsay and assumption. It's something that they learned."
And I think there's one more thing on this, but we don't have to read it. It's just really, you know, like, yeah, it does point to that thing you just re-upped, Emily, about how chatbots aren't a replacement for search. Who would have thought? Yeah.
Emily M. Bender: And so, yeah, just to give you a break here and summarize these last few posts, uh, Still Orange Crushed talks about the kid being combative, having decided the teacher was simply a teacher who'd made a mistake.
And the final post is, "It was so fucking rough. I did my best, but I'm one person trying to work against a campaign of misinformation so fast, so vast that it fucking terrifies me. This kid is being set up for a life lived entirely inside the hall of mirrors." Yeah. And like, really feel for the kid and for the teacher in this case.
And I appreciate them taking the time to, you know, write this story out. But yes, uh, so I have re-upped on Twitter and LinkedIn a selection of links to things that I've written, uh, largely with Chirag Shah, on why search engines should not be replaced by chatbots. Like, there's problems with search engines, but the synthetic text extruding machines don't solve them.
Um, so you can check that out and we'll put it in the show notes too.
Alex Hanna: Yeah.
Emily M. Bender: Um, okay. This one's a bit weird. This is a PDF, um, hosted at middleeast.EPFL.CH, so that's Switzerland. And, uh, "EPFL and Israel report on the state of collaboration, prepared by the Global Ethics and Partnership Committee," on June 25th of 2024.
And they're basically looking at an overview of collaborations with Israeli organizations, um, and it says, "The aim of this document is to know in detail with whom EPFL collaborates in Israel. We look at institutions with whom we have structured relations in research, education, innovation, and focus in particular on the compliance with the current policies of the Swiss government. And we analyze possible direct contributions of EPFL researchers to Israel's military operations in the occupied Palestinian territories. Each identified collaboration is liable to receive a green, orange or red flag with regards to the potential proximity to Israel's military actions." So, so far can kind of understand why they're doing this, right?
Alex Hanna: Yeah. And I want to say that it looks like this is a reaction. Um, EPFL is, pardon my French, the École Polytechnique Fédérale de Lausanne, in Lausanne, in Switzerland. And there was, you know, an occupation, and, you know, one of the demands, I'm assuming, was to, you know, accord with the boycott on Israeli universities.
So contextualizing that.
Emily M. Bender: So, all right, so far, so good. But then one of their methodologies uh, involved asking ChatGPT, like. So, "For the current additional evaluation, the 22 research groups have additionally been analyzed based on 1) the project abstract, 2) internet search and website of the PI of the partner institution from Israel. And for those cases where we considered that others might look for dual use potential, 3) with ChatGPT."
Alex Hanna: Yeah. So just absurd. Also want to, I think it was someone on LinkedIn flagged this for us. So I forget who, but, um, if it was you, thank you.
Emily M. Bender: Uh, thank you, listener.
Alex Hanna: That was, uh, thanks for flagging. It was really bizarre.
Emily M. Bender: Yeah, exactly. And, and yet again, ChatGPT is not a source of information about anything other than the distribution of word forms in text, and not even usefully about that, because we don't know what text.
All right. On a slightly lighter note, um, we were entertained by this headline in 404 Media by Emmanuel Maiberg.
Um, "AI hell is--" AI hell, as in Fresh AI Hell. "--is begging a chatbot for a $5 discount on a light fixture." Um, so this is, uh, an article where some company called Nibble is basically deploying chatbots that allow users to try to negotiate for a better price. As an alternative to digging up the discount code that you saw maybe your friend had. Like--
Alex Hanna: Yeah. Just really, yeah, just absolutely just terrible for everyone involved, I think. Um, and so, you know, there's a Twitter user quoted here, George McGowan, who said, "This is absolute madness. 80 pounds off for talking to a fucking AI." And other posters on Twitter said the idea of haggling with a chatbot for a discount was quote "dystopian," quote "gamification of mattress purchases," and quote "the dumbest timeline."
Emily M. Bender: Indeed the dumbest timeline.
And there's just, you know, the, one of the things in here, and I'm not logged in, so I can't see it. But, um, from what I read before, the founder of this company was talking about how much fun they had, um, negotiating at some market when they were somewhere, um, away from their home country. And I think, I think the person's in the UK, and in general, people in the UK don't feel comfortable doing that.
So he wanted to provide that experience for lots of other people. And it's like, how about designing for anybody other than the tech bros?
Alex Hanna: Yeah. And I mean like, as someone who is, you know, a second generation immigrant who comes from a family of hagglers, there's something very culturally enriching that comes from haggling, you know. I feel like you understand a little bit more about like a particular kind of culture. Like, and certain cultures have like certain kinds of norms of haggling. It is kind of like, it is kind of like, not to defend the consumer relation, but it is like at least a way to get to know someone in an interesting way.
This is just--
Emily M. Bender: And if you do that, you're having an authentic experience with a person.
Alex Hanna: Yeah. And this is certainly not it.
Emily M. Bender: No, absolutely not. That's all I have for Fresh AI Hell this week.
Alex Hanna: Oof. Alright, we got through it. Uh, we got through it. We're in the club. We're having a hot AI Hell brat summer. (laughter)
That's it for this week.
Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Christie Taylor. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify. And by donating to DAIR at dair-institute.org. That's D A I R hyphen institute dot O R G.
Emily M. Bender: Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's D A I R underscore institute.
I'm Emily M. Bender.
Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.
Is anybody there there there there.