Mystery AI Hype Theater 3000

Episode 32: A Flood of AI Hell, April 29 2024

Emily M. Bender and Alex Hanna

AI Hell froze over this winter and now a flood of meltwater threatens to drown Alex and Emily. Armed with raincoats and a hastily-written sea shanty*, they tour the realms, from spills of synthetic information, to the special corner reserved for ShotSpotter.

*Lyrics & video on Peertube.

*Surveillance:*

*Synthetic information spills:*

*Toxic wish fulfillment:*

*ShotSpotter:*

*Selling your data:*

*AI is always people:*

*TESCREAL corporate capture:*

*Accountability:*


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

 
Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it, and pop it with the sharpest needles we can find.  


Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. 


I'm Emily M. Bender, Professor of Linguistics at the University of Washington.  


Alex Hanna: And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is episode 32, which we're recording on April 29th of 2024. And you know what they say, April showers, uh, melt a frozen-over AI Hell and send a torrential flood of bullshit just cascading onto Earth. 


We've got surveillance, we've got spills of synthetic information, and yeah, we've got TESCREAL nonsense up to our eyeballs.  


Emily M. Bender: Thankfully, we've got our rain gear with us. I've got a rain hat, which works very well until the wind picks up. And, uh, you know, nice long raincoat here. How about you, Alex?  


Alex Hanna: I've got my shell, my nice gray shell with pink accents, with my hood over my head. 


Uh, and I'm just gonna tighten down the, uh, the, the bungee cords and crunch this around my face. I don't want to get this, uh, this flood of bullshit in my face.  


Emily M. Bender: Yeah, and you can't see it because our feet are off screen, but we've definitely got the good fisherman level rain pants on, so we'll be okay, probably. 


But we've got a lot of hype to defuse if we want to survive this flood. Before we get started, Alex, you said you've written a sea shanty to keep us afloat?  


Alex Hanna: Oh, that's right, a sea shanty. And we're going to be peppering some of this throughout the show. But the first, uh, the first stanza -- and it's called "A Song for the Hype Flood." 


Ahem. (singing) Now, gather up lads and ye lasses alike and all genderfuck pirates who aim thee to fight. Come for a tale of remarkable fright, once we delve into floods of unbearable hype.  


Emily M. Bender: Oh, okay. That is definitely fortifying. I love it. Remember how we had regions of hell and then when they were frozen over, they were still regions of hell. 


It turns out that if you thaw a frozen-over region of hell, all of that stuff kind of comes out in a rush. So we are starting in a flood of surveillance hell here. Uh, first thing is an article, uh, authored by Maxwell Zeff in Gizmodo, published March 27th, 2024. Headline, "These digital kiosks snatch your phone's data when you walk by."


Isn't that wonderful?  


Uh, so--  


Alex Hanna: And these are the things that are around cities, I mean, these are in a bunch of cities in the U.S. and, uh, these, these digital kiosks from Soofa, um, which is a silly name, and the article said, do-- "Digital kiosks from Soofa seem harmless, giving you bits of information alongside some ads. However, these kiosks popping up throughout the United States take your phone's information and location data whenever you walk near them, and sell them to local governments and advertisers, first reported by NBC, uh, NBC Boston on Monday."


Emily M. Bender: I love it. And I'm not seeing anything in here about like this being something you opt into. 


Alex Hanna: No.  


Emily M. Bender: "Whenever you walk past a Soofa kiosk, it collects your phone's unique identifier--the MAC address--manufacturer, and signal strength. This allows it to track anyone who walks within a certain unspecified range." Yeah, because obviously by owning a cell phone you have clearly consented to this kind of surveillance, right? 


Alex Hanna: Yeah, I actually saw this article when we were reviewing the AI Hell this morning and literally saw one of those things in front of the city center where, um, we go get coffee every morning and I'm like, oh no. It's not the same exact shape. Uh, crap. Yeah.


Emily M. Bender: Yeah. And I wonder, would putting your phone in airplane mode as you walk around the city defeat this?


Alex Hanna: Possibly.  


Emily M. Bender: Possibly.  


Alex Hanna: I don't know. Maybe there's a, there might be a way of like, um, subverting it. Uh, if you know it, drop it in the chat. All right. We probably gotta run though. Okay.  


Emily M. Bender: Yeah, gotta keep going. Okay. On to the next one. I'm gonna, I'm gonna click these closed as we go. Yeah. Okay. So now we have surveillance at work. 


Uh, this is Shanique Yates writing in, uh, Yahoo! Finance, which is probably actually something else.  


Alex Hanna: It looks like it's AfroTech.  


Emily M. Bender: AfroTech, yeah.  


Alex Hanna: And then syndicated for Yahoo! Finance.  


Emily M. Bender: Alright. "Are KFC, Taco Bell, and other fast food restaurants using AI to track employees? Employers have officially begun to use artificial intelligence to track employees' productivity rates while on the job. According to Forbes, dozens of Taco Bell, KFC, and Dairy Queen locations have been utilizing a system known as Riley -- produced by surveillance company HopTix -- to track and analyze workers' interactions with customers. It uses a collection of streams of video and audio data to review each employee's working status throughout the day. Information is also used to assign bonuses to anyone on the job who is upselling products."


Alex Hanna: This is really a nightmare here. Basically, if you try to upsell, you are being monitored. And we already know that lots of companies do this. They try to really force people to do credit cards or whatever.  


I was at a Target um, the other day and the associate was trying to get me to do the, um, thing where you have to put in your phone number and I was like, I don't want to put my phone number. I don't want to get my data in, but she had like a quota. And so now they're automating that. So really awful stuff.  


Emily M. Bender: So they're automating the tracking of the people that they're forcing to force us to share our data.


Lovely.  


Alex Hanna: Yeah.  


Emily M. Bender: All right, next. Um, so this is something out of CES 2024. You want to, you want to read the headline on this one?  


Alex Hanna: Sure, this is from something called Baracoda um, which it looks like it's the company's website because it says "CES 2024: Baracoda unveils BMind, the world's first smart mirror for melt--for well--mental wellness." And then the subtitle: "Personalized mental health coaching based on mood identification available on CareOS operating system." 


And it's one of those sentences that just melts the mind. I just hate every word of that sentence more and more.  


Emily M. Bender: I know. So that personalized mental health coaching, okay, from your mirror. Mirror, mirror on the wall, based on mood identification. So what kind of nonsense machine learning is going on there? And then we get to the operating system called CareOS.


Alex Hanna: I know. And so like, yeah, it's a mirror. This is like a terrifying image too. It's like, um, a halo mirror, uh, in front of like a very, I don't know, like loft, like industrial sink and background that looks like it's, um, that Coldplay t-shirt, I don't know. Um.  


Emily M. Bender: And there's no one in the mirror.  


Alex Hanna: There's no one in the mirror. 


So this person using it must be a vampire.  


Emily M. Bender: Yeah.  


Alex Hanna: Um, yeah, I don't, I don't even want to read the rest of this. Cause it's going to be marketing copy for this bullshit.  


Emily M. Bender: Exactly. And of course it's health--daily health tech. So we have health tech and then we have daily health tech. But yeah, this is, this is just so many bad ideas all in one. 


Moving along. Um, this is from a site called BiometricUpdate.com, which I'm, I don't even know what to make of that. Like, are they tracking all the bad ways in which we are being subjected to biometric surveillance?  


Alex Hanna: I feel like it's not because it's sort of saying the, the headlines up front are like "digital ID for all." 


So I think they might be into this stuff. Um, this article was published on Valentine's Day and the headline is, "Facial recognition service says it is for suspicious lovers, not stalking."  


Emily M. Bender: Because that's better.  


Alex Hanna: So the ID, so it's, uh, by Masha Borak. And then later down in the article, the, the, the lead says, "Are you spending Valentine's Day paranoid that your partner is cheating on you with the help of dating apps?" 


Uh, if you were spending your Valentine's Day doing that, you might have more serious problems in your relationship. But who am I to judge? Uh, they continue, "A new online service can help you track their Tinder, Hinge or Bumble profiles with facial recognition. There is scant assurance, however, that it will not be abused."


Um, it's, and then the name of the, uh, the, the name of the product is OopsBusted, all one word. Uh, "OopsBusted advertises itself as a service that continually scans popular dating apps to monitor your partner's presence. Um, the service allows users to enter the name and location of the person they are looking for and receive a detailed report on their dating profiles, including profile images and text. Um, customizable search parameters, such as age, interest, and more can help you narrow the search." 


What a nightmare.  


Emily M. Bender: What a nightmare. And I just want to point out that, that, the contrast between 'this is for suspicious lovers not stalking' gives very 'it's not rape if you're married' vibes.  


Alex Hanna: Yeah.  


Emily M. Bender: This is stalking.  


Alex Hanna: Yeah. Yeah. 100 percent, 100 percent. No, you don't have autonomy if you're partnered.


Emily M. Bender: Yeah.  


Alex Hanna: You know, all right. 


Emily M. Bender: All right. I thought--I maybe failed to give credit to the reporter on this one. It's Masha Borak.


Alex Hanna: Yeah. Yeah. Yeah. Um.  


Emily M. Bender: All right. Next to Holland.  


Alex Hanna: Yeah.  


Emily M. Bender: The Netherlands. Um, this is from DutchNews, April 4th, 2024. Uh, headline is, "Smoker's face, colon, scan estimates age of buyer at cigarette stores." Um, and here I don't see a journalist name. 


Alex Hanna: Um, yeah, I don't either on this.  


Emily M. Bender: Um, and, uh, so just reading along, "Smokers in some parts of the Netherlands are being asked to have a face scan to see if they are old enough to buy cigarettes, the AD reported on Thursday. Places which sell cigarettes are required by law to check buyers' IDs to see if they are 18 or over. Failure to comply with the age check requirement can result in fines of up to 9,000 euros or a temporary loss of the license to sell tobacco products. So far, some 100 outlets have opted for a scanner, including five petrol stations in the Hague region, the AD said."  


So basically this is, rather than having a person check the ID, some sort of scanning technology that I assume is also double checking you against either your ID that you've handed in or a database. 


Um, and the reason that I wanted to put this in is just, like, this normalization of surveillance. Like, oh yes, of course you should constantly be giving up your biometrics, even if you just want to go buy some cigarettes.


Alex Hanna: Yeah.  


Emily M. Bender: Yeah.  


Alex Hanna: Wild.  


Emily M. Bender: All right. Uh, next one. You want to lead this one Alex?  


Alex Hanna: Yeah, so this is from the Washington Post by Peter Hermann.


The headline: "DC police quote 'Real-Time Crime Center' launches with live video monitoring." And the head image on this is like, uh, it looks like, um, that movie Swordfish where you've got like, you know, a few people in front of a million monitors. Um, but it looks like they've got a map of the DC region and all these different things here.


Um, so then scrolling down, it reads, um--oop, that's okay. "Moments after the fatal, uh, shooting of a--Thursday of a teenager on a Metro platform in Northeast Washington, DC, police officers at a new command center four miles away were watching the chaos unfold on their computers. Instant access to Metro transit system surveillance videos, and a live feed enabled the staff at the district's new 'Real-Time Crime Center' to quickly upload photos of a person of interest to social media and to share them with the public."


Um, so basically, um, um, going ahead and like seeing, you know, just like aggregating all the different videos, increasing surveillance and putting them in a single spot. 


Oh, wait, and scroll down a little more, because there's a piece here. So, "The new center at Judiciary Square includes video screens that show symbols for emergency calls and how serious they may be, and information from ShotSpotter, a system of citywide sensors that alert police to gunshot noises. The crime center, Smith said, is the new epicenter for how the department responds to crime and investigates cases."


Ugh.  


Emily M. Bender: So this is, this is what normalizing surveillance gets you, right? Yeah. But also I just want to point out that the, you know, since we're in this flood of AI Hell, there's some crossing of the streams. We're going to get to ShotSpotter later. We've got a whole section on that.  


All right. One more in, uh, the surveillance hell flood. 


Um, this is from the Miami Herald. "Virtual reality, facial recognition. How AI is reshaping healthcare in South Florida." Um, this is by Michelle Marchante from April 20th, 2024. Um, and it's basically a report on a conference that happened there, but the, um, the article reads, "AI is fueling healthcare innovation in South Florida. Signing in at the doctor's office with a scan of your face, using virtual reality to make patients feel like they're at the beach instead of in a hospital during procedures to help reduce anxiety. Matchmaking apps to connect patients with doctors and health insurance plans. A small, wearable device that can monitor patient vitals around the clock."  


Alex Hanna: Oh lord, the way that they say it's a matchmaking app, just, okay, that's quite the tell.  


Emily M. Bender: Absolutely. And like, so yes, having something like, I mean, the, there's certain kinds of, of heart issues where you get sent home with a heart monitor and like you collect data because they need data over a longer term.


And if you were doing something like that, where the data stayed local to you, and then you brought the data just to your doctor, I could see that as actually a tremendously useful kind of technology. And that is certainly not what this thing is. I, um, and also I love how all of this is, is 'AI,' right?  


Alex Hanna: Yeah. 


Well, this is kind of the merging, you know, of the metaverse and the AI-verse and crypto and blockchain and, you know, all the, all the pop words at once.  


Emily M. Bender: Yeah. And the, this like, uh, scanning in with your face thing is also just like, yeah. So patient facial recognition, um, patients in the near future--the thing that I wanted to call up here is that UHealth, for this facial recognition, "is working with Clear, the company that powers the facial recognition technology some travelers use at the airport to go through TSA quicker, to make patient facial recognition possible at its facilities, including at urgent care centers."


Alex Hanna: Oh Lord.  


Emily M. Bender: You don't want to be doing biometrics all the time, but you also don't want it like all in one place, like Clear should not have everyone's travel data and everyone's health data. No, thank you.  


Alex Hanna: Yeah, they're just becoming a biometric clearinghouse at that point.  


Emily M. Bender: Yeah.


Alex Hanna: Yeah. All right.  


Emily M. Bender: Okay.  


Alex Hanna: So we're moving out of surveillance hell.


So I think, well, to transition, I'll do the surveillance stanza of the sea shanty.


Alex Hanna: (singing) Oh, frozen ice melted from ye bullshit mountain, surveillance tools sprouting from the newer fountains. Intelligent machines are near to be found. They're really spying on kids who are brown. (speaking)


I just rewrote that part. It didn't really go. (laughter)  


So, you know, it's, you know, it's, it's still, uh, I wrote this this morning.  


Emily M. Bender: Yeah. Um, I'm trying to find, dang it. Um, sorry. It's, it's, it's a little bit tricky for me to find my appropriate window in all of this. And so I'm, this is the one that I want. Okay. The shanty is amazing. My tech skills right now, not so much.


Alex Hanna: Hey, hey, do what you can.  


Emily M. Bender: Yes. Okay. So now we are in the synthetic information spills part of the, um, flood of bullshit coming out of thawed AI Hell. And this first one's funny. Um, so this is from Ars Technica, published by Kyle Orland on January 12th of 2024. Um, the sort of, the little thing above the headline, there's got to be a technical term for that, is "File Not Found." 


And then the headline is--  


Alex Hanna: That's the, um, I think it's called the sticker. Uh, Decca, uh, our fact checker, uh, had said this, I think, because she used to work at a, at the Daily Beast. Anyways, go ahead.  


Emily M. Bender: Yeah. Uh, "Lazy use of AI leads to Amazon products called, 'I cannot fulfill that request.'" So, "The telltale error messages are a sign of AI-genercated pablum all over the internet." 


And this is, um, this is funny, right? 'I cannot fulfill that request,' 'as an AI language model,' um, 'since my last knowledge update,' there's a few of these. And what's interesting about them is that they are serving as accidental watermarks, like the easy kind, the ones that anybody who cared would actually remove. 


And, um, the fact that that's what we're relying on, I think really points to how the producers of these LLMs cannot be bothered to actually try to do any real watermarking at the source.  


Alex Hanna: Yeah. And Eman--Emanuel Maiberg has been doing a little bit of work on this, using this methodology for academic articles. 


And also we were talking a little bit about how much this phrase appears in Google books now. Um, usually probably taking stuff from Amazon again, because they're just publishing whatever bullshit people are self publishing. It's just AI generated stuff that like, '50 ways you can be popular on Twitter,' and you're like what the hell? 


You know. (laughter)  


Emily M. Bender: Okay. So that one was kind of fun, although also connected to something sad.  


Alex Hanna: Yeah.  


Emily M. Bender: Uh, this one I failed to get the real link for, so we're skipping that one. I'll have to come back later.  


Alex Hanna: Yeah. This one's upsetting. So this is from The Verge. "The unsettling scourge of obituary spam." Um, and then the, uh, sub headline: "In the wake of death, AI generated obituaries litter search results, turning even private individuals into clickbait." 


And this is by Mia Sato. Um, and so the, the image, the head image on this is, was great, but there's nothing in it, but it was like a kind of a, a skull that was made out of pixels or something. It was, it was pretty fun.  


Um, so the, uh, first paragraph, "In late, uh, December, 2023, several of Brian, uh, Vastag and Beth Mazur's friends were devastated to learn that the couple had suddenly died. Vastag and Mazur had dedicated their lives to advocating for disabled people and writing about chronic illness. As the obituaries surfaced on Google, members of their community began to dial each other up to share this terrible news, even reaching people on vacations halfway around the world.  


Except Brian Vastag was very much alive, unaware of the fake obituaries that had leapt to the top of Google search results. Beth Mazur had in fact passed away on December 21, 2023, but the spammy articles that now fill the web claim that Vastag himself had died on that day too."  


Oof. Yeah.  


Emily M. Bender: So, and this, you know, obviously causes distress. There's a quote here from Vastag, "'The obituaries had this real world impact where at least four people that I know of called our mutual friends and thought that I had died with her, like we'd had a suicide pact or something,' says Vastag, who for a time was married to Mazur and remained close with her. 'It caused extra distress to some of my friends and that made me really angry.'"


Alex Hanna: Yeah.  


Emily M. Bender: Um, so yeah, this is, you know, there's all kinds of impacts of this fake text and this is one that I, I hadn't anticipated and it was awful.  


Alex Hanna: Yeah.  


Emily M. Bender: Um, all right. Here's another one. Um, now we're looking at Grok, um. Basically, we have a synthetic information or, uh, non-information spill where it's sort of going from one part of the information ecosystem to another.


Um, this is in Mashable, um, by Matt Binder, April 5th, 2024. "Elon Musk's X pushed a fake headline about Iran attacking Israel. X's AI chatbot Grok made it up. The AI generated false headline was promoted by X in its official trending news section."


Um, so. Uh, "a shocking story was promoted on the in quotes front page or main feed of Elon Musk's X on Thursday." 


I love how it's not just X. It's Elon Musk's X.  


Alex Hanna: Yes. You must.  


Emily M. Bender: You must X Twitter. Um, and the headline said, 'Iran strikes Tel Aviv with heavy missiles.' Um, and that would certainly be a worrying world news development. "Earlier that week, Israel had conducted an airstrike on Iran's embassy in Syria, killing two generals as well as other officers. Um, retaliation from Iran seemed like a plausible occurrence."  


And of course, since this has been published, there was some retaliation that took a different shape. Um, "But there was one major problem: Iran did not attack Israel. The headline was fake. Even more concerning, the fake headline was apparently generated by X's own official AI chatbot, Grok, and then promoted by X's trending news product, Explore, on the very first day of an updated version of the feature." 


Um, so how did this happen? Um, let's see. Um, let's see, "Shortly after--" No, go away.  


Alex Hanna: There's an ad, there's a subscription, newsletter subscription on this. This is Mashable, by the way.  


Emily M. Bender: Yeah.  


Alex Hanna: It's asking to be on the newsletter.  


Emily M. Bender: Um, so, "One of the teams that Musk axed was the Twitter's curation team responsible for highlighting and contextualizing the best events and stories that unfold on Twitter." 


Um, and then they have this, uh, basically this Explore thing. This isn't explaining how it managed to pick up the thing that came out of Grok, but it's not surprising this should happen, right? If you put together a chatbot, it writes something in the style of an article. If you are not careful, if you don't watermark that text, if you just put it out there where other systems are slurping things up and highlighting them, something that looks like a headline can get presented as a headline.


Alex Hanna: Right. And I mean, especially in something like a foreign policy thing where, you know, if you were, if Iran, um, if Iran, uh, said, I took like a George W. Bush accent for a second. If Iran, if Iran attacked Tel Aviv, I mean, it would be pretty huge fucking deal. And that would, basically send many people, um, you know, investors, foreign policy types into quite the, quite the rush. 


So, I mean, it's, yeah, that's, that's quite consequential, consequential, right?  


Emily M. Bender: Yeah. Yeah. Absolutely. And just shows how shoddy all of this is. Um, and like X in particular, it's like, it's, it's, it's noise and it's noise presented as news and it's bad. Okay.  


Alex Hanna: All right.  


Emily M. Bender: We've got a little bit more comic relief here. 


Alex Hanna: Okay. So this, I want, I want to read this one. And so this is from the Verge, uh, by Emilia David, and the title: "Logitech wants you to press its new AI button." And so the sub, sub headline: "Logitech has its own AI Prompt Builder and will ship at least one mouse with a dedicated AI button." And so the image on this is a mouse, and it's this, uh, kind of cyan turquoise button that they just want you to touch, you know, just, just touch it, just touch it, just touch it, you know, just touch the button. And the thing about this that gets me is that it's introducing, like, a haptic way of toggling an AI function.


And it just really, it just really pisses me off.  


Emily M. Bender: Yeah.  


Alex Hanna: Because it's like, we must make this material in some way.  


Emily M. Bender: Um, yeah. I, I've put this in the synthetic information or non-information spills, uh, region, because it feels like Logitech is saying you aren't putting it out into the environment fast enough.


So we've got to make it easier. We've got to give you the button to push. And, by the way, where it's located on that mouse, it seems like you could very easily hit it accidentally as you're trying to go for the main mouse button.


Alex Hanna: It's like right below that little, the scroll wheel, right?  


Emily M. Bender: Yeah.  


Alex Hanna: And what, so like, what's the text of this, uh, article? Scroll down a little bit, speaking of scroll. Uh, "Today's AI PCs may not only have a Copilot key on their keyboards," which I didn't know about, but that's also Hell, "Logitech is introducing its own way to summon ChatGPT too. It's called the Logi AI Prompt Builder, and it's--it'll use a dedicated button on your mouse or keyboard." Um, it doesn't just provide you with a chatbot. It presents, it says, preset recipes to help you prompt it, um, and so you could ask it to rephrase or make things shorter or more concise or fit them into a specific word count.


Um. 


Emily M. Bender: Oh, but also create email and photos.  


Alex Hanna: Yeah, it's just like, yeah, it's very, it's in, yeah, someone in the chat, uh, says, "I've gotten so many ads for this gimmicky mouse." Yeah.  


Emily M. Bender: Yeah. Yeah. Um, okay. That was silly, but also bad. I want to take this one. This really, really bummed me out. So this is from 404 Media, some reporting by Jason Koebler, April 17th, 2024.  


"Facebook's AI told parents group it had a gifted, disabled child. Um, Facebook's chatbot eventually apologized and said it doesn't have quote 'personal experiences or children.'"  


So this is the new Meta AI thing that like they've replaced their search bar with, which apparently also can join Facebook groups. Um, and I think it's the moderators who decide if it's in or out. 


And then when it's, um, in the Facebook group, if it gets tagged or if something has gone unresponded to, it will jump in. So basically, if you're just a member of the group, you are getting nonconsensual synthetic text thrown at you. And then here, the, um, actual person asked the question, "Does anyone here have experience with a 2E child, both gifted, academically advanced and disabled in any of the New York City gifted and talented programs, especially the citywide or district three priority programs? Would love to hear your experience good or bad or anything in between." And then the Meta AI thing replied, "I have a child who is also 2E and has been part of the NYC G&T program. We've had a positive experience with the citywide program specifically--" Blah, blah, blah.  


Alex Hanna: Wow.  


Emily M. Bender: And then the, the original poster, um, replies, "@MetaAI, what in the Black Mirror is this?" Which is perfect. Um, and then the chatbot comes back with, "Ha ha, I'm just an AI. I don't have any sinister intentions like the show Black Mirror. Um, I'm just here to help and provide information. I was responding to the post about a 2E child, a child who's both gifted and has a disability, and sharing my knowledge about the NYC G&T programs. I understand it might have been a surprise to see a response from an AI, but I'm here to assist and provide helpful insights."


And this is just so off the rails.  


Alex Hanna: Yeah.  


Emily M. Bender: Like the--  


Alex Hanna: Who the fuck-- like what in the, what in the devil, like, why would you, why would you even have that auto-respond in the group? That's just absolutely wild. I mean, I feel like there was some journalism about how Facebook groups are maybe one of the, like, only redeemable parts of Facebook now that the whole platform has really become like an abandoned strip mall.


But now they're just putting this, you know, nonconsensual, uh, you know, text-extruding machine in it, responding to random posts, pretending that it has children.


Emily M. Bender: Yeah.  


Alex Hanna: Yeah. That's--you've really, you really lost the plot. And I mean, Meta is just really, I feel like, just clawing at last straws at trying to get people back to the platform.


Yeah. This surely is not it.  


Emily M. Bender: The, the one piece of hope that I'm going to pull out of this is that if Meta feels like they have to impose this on people, that suggests that people aren't using it willingly, at least not as often. And I was listening to something, I think it may have been 404 Media's podcast, where they were, um, quoting the, uh, CEO of Sequoia VC firm as talking about how they've put 50 billion dollars into NVIDIA processors and so far only made 3 billion dollars processing with them.


And I'm like, yeah, so.  


Alex Hanna: Yeah, the bubble's really, yeah. And Christie, our producer, also, uh, uh, importantly pointed out, yeah, it was a private group too. So, yeah.


Emily M. Bender: Yeah. Um, we've got someone in the chat, I can't see the whole name, um, "And they're using how much drinking water to cool the servers that make this garbage?" 


So yeah, like there's, there's always the environmental impacts. Um, okay.  


Alex Hanna: Okay.  


Emily M. Bender: Next. You want this one?  


Alex Hanna: Yeah. So this is from Bloomberg. This is, um, the title, uh, "AI-powered World Health chatbot is flubbing some answers." Um, and the two sub points below it, "SARAH doesn't have up to date--" And SARAH's in all caps. "SARAH doesn't have up to date medical data, can quote 'hallucinate.'" Um, and then, "WHO bot falls back on, 'Consult with your healthcare provider.'" And this is by Jessica Nix.


Emily M. Bender: I apologize. I didn't manage to sign in. So we're paywalled here.  


Alex Hanna: Yeah, it's it's, but this is the alarming thing is that this is a chatbot that is, um, created by the World Health Organization. 


Um, yeah, so really, really awful stuff here.  


Emily M. Bender: Absolutely awful stuff. And of course it's going to be making up medical information because that's what chatbots do, right?  


Alex Hanna: Yeah.  


Emily M. Bender: Um, all right. So one more in this batch. Um, this is something that I've just wanted to dog on for a while. Oh, and, um, also need to, need to, just on the SARAH thing, point out that of course they had to backronym in a woman's name.


Alex Hanna: Yeah, yeah, similar, similar to TESSA, which was the National Eating Disorders Association's chatbot. Um, yeah. Yeah. Really, really awful stuff.


Emily M. Bender: So just really quickly on this one, this is called the Curricula, um, thecurricula.com, and the title at the top says "The Curricula Beta," and it's in this font that looks like what you might get if you had not, not chalk on a blackboard, but those like flip pads, writing with magic markers. 


And in the middle of the screen, "Search for something to learn. What do you want to learn today?" And then down below, "Subjects you might be interested in: data modeling, database management, SQL, normalization." It's a bunch of, of, um, computing stuff really. And then at the bottom, um, "Content is generated by AI and may not be 100 percent accurate. We may earn affiliate commissions on some of the content recommendations." 


It's just like, don't do this. And I'm, I'm mad about the name of this company. So "Copyright 2024, Chindogu Labs, LLC." Um, 'chindogu' is a Japanese word that refers to, um, uh, 'curious tools,' um, there's this wonderful book called "101 Unuseless Japanese Inventions," which are the chindogu, and like, that's such a, a sort of sweet, wholesome, playful thing, and this is not. 


This is polluting the information ecosystem.  


Alex Hanna: Yeah. AI people really ruining some great words. Yeah. Uh, yeah.  


Emily M. Bender: Okay.  


Alex Hanna: Oh, BrittneyMuller asked in the chat, "Curious if the National Eating Disorder Association hired back support staff." Uh, they actually did not. They, uh, I actually got a chance to hang out with Abby Harper at Labor Notes last, uh, last, uh, weekend. 


And yeah, no, they're still, they're still laid off after they unionized. So, nope, but there, you know, so there you go. Yeah.  


Uh, sea shanty-wise, let's go into it. All right. So on information pollution:


 (singing) Info polluted and facts are diluted, whilst language and pixels are roundly extruded. Fake headlines, articles bouncing around, with journalist fact checkers run through the ground. 


Emily M. Bender: (laughter) Alex, you are outdoing yourself with each verse.  


Alex Hanna: Listen.  


Emily M. Bender: It's amazing.  


Alex Hanna: It was, it was the flood of hell. Really had to do it.  


Emily M. Bender: It is, it is keeping us afloat. All right. Uh, here is an MIT Technology Review article, um, with the sticker "artificial intelligence," um, by Will Douglas Heaven, April 10th, 2024. Um, and the headline is, "Generative AI can turn your most precious memories into photos that never existed." And the subhead: "The Synthetic Memories Project is helping families around the world reclaim a past that was never caught on camera." And I just, so before we even get into it, I just want to, I just want to be mad at the framing here, like where this is heartstrings. Think of all the poor families around the world who didn't have cameras.


And also you don't, just because something wasn't caught on camera, it doesn't mean you need to reclaim it. Right. You can still have it as your past.  


Alex Hanna: It can be, yeah, still be a memory. 


Emily M. Bender: Hmm. So there's this sort of creepy looking sequence of maybe synthetic images.  


Alex Hanna: Is this, where is this from? This looks really horrifying. 


This is--  


Emily M. Bender: This is nightmare.  


Alex Hanna: Yeah. It's like a, and for those of you who are just listening to it, it's like a, a number of stitched together images that sort of melt into each other, but then have those, when we see an actual face, it's one of those eldritch horrors that is produced by, um, AI, uh, quote, 'art generators.' 


Emily M. Bender: Yeah. And, and people like disappear.  


Alex Hanna: Yeah.  


Emily M. Bender: Abruptly.  


Alex Hanna: Yeah. Okay. Yeah. We can't, we can't stare at this too long, but [unintelligible].  


Emily M. Bender: So I don't know, did you want to get into this anymore? Or should we--  


Alex Hanna: Well, this is good. Let's move past it.  


Emily M. Bender: Yeah. Let's move along. The link will be in the show notes, of course. Um, all right now from Wired. 


Oh yes. So we have gotten to the ShotSpotter flood, um, as a, as teased earlier. Um, so this is some reporting by Dhruv Mehrotra and Joey Scott, um, on February 22nd, 2024. "Here are the secret locations of ShotSpotter gunfire sensors." So the locations, the subhead, "Locations of microphones used to detect gunshots have been kept hidden from police and the public. A Wired analysis of leaked coordinates confirms arguments critics have made against the technology." And basically what they find is that they're everywhere, but they are concentrated in the over policed neighborhoods of cities. Um, unsurprisingly.  


Alex Hanna: Yeah, I mean, you can see, this is very, very clear. I mean, it's, it's great that these have been leaked, but it's definitely, um, clarified or given more data on what we know, and the actual statistic, if you scroll down a little bit Emily, is wild here. Where it's something like, um, so scrolling down a little more. It's something of the order of, uh, there's an actual, uh, kind of stat where, yeah, here. "So an analysis of sensor distribution in U.S. cities in the leaked data set found that in aggregate, nearly 70 percent of people who live in a neighborhood with at least one SoundThinking sensor, uh, identified in the ACS--that's the American Community Survey--data, um, are either Black or Latine. Um, nearly three quarters of these neighborhoods are majority non-white and the average household earns a little more than $50,000 a year."


So really targeting, uh, communities of color and the poor.  


Emily M. Bender: Yeah. All right. It gets worse with the ShotSpotter fresh hell.  


Um, seems like it's going to get better. So, uh, this is in the Chicago Sun-Times by Tom Schuba and Fran Spielman. "Mayor Johnson to end ShotSpotter deal after summer, making good on key campaign promise." And then the subhead, "After the Sun-Times first reported the decision, Johnson said the city will drop the gunshot detection system September 22nd, meaning cops will have access to it throughout the summer and the Democratic National Convention."


But glad to see people, including electeds pushing back on this. However, from South Side Weekly, April 24th, 2024, by Max Blaisdell and Jim Daley, "ShotSpotter keeps listening after contracts expire. Internal emails show the company continued to provide gunshot data to police in cities where its contracts were canceled." 


Which, um, so we went over this one, actually, um, Alex and I were on the Block & Build podcast and we talked about this, but I thought we should bring it directly to our listeners too. Any, any thoughts you want to share about this in particular, Alex?  


Alex Hanna: Yeah. I mean, this is, this is just the, I mean, this is the thing about the, you know, quote, 'public-private partnerships' and these kinds of things.


I mean, this is, you know, there's going to be terms in these that are going to really make them even more unaccountable to citizens. I mean, Brandon, Brandon Johnson was someone that was really celebrated by a lot of people within Chicago progressive politics, with the Chicago Teachers Union and others.


But, you know, even if he's going to end that contract with a lot of pressure from activists, um, and which wouldn't have happened if they hadn't kept on putting pressure on, um, including folks like Mi Gente, and others, um, you know, like, if you're signing these contracts, you know, these are shady-ass companies, they're going to continue doing this.


Emily M. Bender: Yeah. So, so basically once you've allowed in some surveillance, it is really hard to dig them back out. Um, and Eshieh on the chat says, "Makes me wonder how else they're making money if not through the official contract." Excellent question.


Um, so, okay, that's it for ShotSpotter for now. Um, but oh yeah, note to everybody, um, we, we heard this name go through, ShotSpotter has rebranded as SoundThinking. 


So just in case you come across that and, you know, appropriately shudder at it. Um, it's ShotSpotter.  


Alex Hanna: It's a classic move from people in crisis communication. We just, you know, call your thing something else. Okay.  


This one's from Bloomberg. The journalist is Amy Or, and the headline, "Reddit signs AI content licensing deal ahead of IPO." This is a little older news from February. With these sub bullets: "Social media firm agrees, uh, 60 million dollar deal with AI company. Reddit advised to seek at least 5 billion dollar IPO valuation." So just absolutely wild. Reddit, you know, now going public. And then, basically, is cashing in on their data, just finding, you know, basically finding any kind of data that they can sell to, uh, these AI companies, and then going public with it. It's just, it's kind of wild.


And I think there's also been deals. I saw another deal, because there's the Axel Springer deal, the AP deal with, uh, OpenAI. And I think also there was a major, another major publisher today. I saw, um, a journalist, um, uh, Rasmus Kleis Nielsen, uh, tweeting about this. Um, I'm gonna, I'm gonna find it in a second, but basically it was another huge publisher making a deal, um, with an AI company.


So it's just like, there's a secondary windfall that these companies really want to try to get from now selling their content. And it's just driving me bonkers.  


Emily M. Bender: Yeah. So it's so gross. And we are in a 'let's sell the data' region of, uh, the flood of AI Hell. The next one comes from Ars Technica with the sticker, "What happened to Facebook Watch?" Um, journalist is Scharon Harding, March 28th, 2024. Um, headline, "Facebook let Netflix see user DMs, quit streaming to keep Netflix happy: lawsuit." So subhead, "Facebook Watch, Netflix were allegedly bigger competitors than they let on." And so there's a couple different stories collected in here.


But the thing that I wanted to focus on was that apparently, uh, as part of the deal between Facebook and Netflix, Facebook let Netflix see Facebook user DMs, um, to track information, which is like not what should be going on.


Alex Hanna: Yeah. Good Lord.  


Emily M. Bender: And then one more. Um, again, 404 Media, this time Joseph Cox, April 17th, 2024, headline, "A spy site is scraping Discord and selling users' messages. 404 Media tested the service, called SpyPet, and verified it is collecting information on Discord users, including the messages they post across usually disparate servers."


Alex Hanna: So now they're basically taking this, finding a user profile for, you know, Discord, which I think had been known for a certain amount of user privacy, or at least having enough things separate.


But yeah, now they're scraping these things and like, you know, shout out to the 404 people. They really kind of have a thumb on weird stuff happening on Discord. Um, but yeah, this is, this is some wild news.


Emily M. Bender: Yeah. All right. Okay. Um, you got some more singing for us here, Alex?  


Alex Hanna: Well, I'll save the final verse for the last one. 


So let's keep on chugging.  


Emily M. Bender: Keep on chugging. Okay. All right. I am going to, um, without beautiful musical accompaniment, um, share the next thing.


Alex Hanna: While you switch, what I saw, the other company that made a deal with OpenAI is the Financial Times. So between the Financial Times, also AP, Axel Springer, Le Monde and PRISA, um, so they've all made deals basically to provide textual content to OpenAI, which is wild.


Emily M. Bender: And OpenAI is going to say, see, we're using good journalistic data and hope that people don't realize that it doesn't matter what you've made the papier mâché out of, it's still papier mâché and not actually news.  


Alex Hanna: Yeah.  


Emily M. Bender: All right. Uh, we are now in the 'AI is always people' section of, um, the AI Hell thawing flood.


Um, this one from, uh, again, Gizmodo by Maxwell Zeff, updated April 3rd, 2024. Headline, "Amazon ditches 'Just Walk Out' checkouts at its grocery stores." Um, so this, there was this weird, creepy experience where you like scanned yourself in with your phone, maybe eventually biometrics and then just walk around the store and took whatever you wanted. 


And you were being constantly watched by cameras supposedly doing automatic image processing. And then that like came up with a reportedly accurate, uh, check for you that just, you know, got charged to your Amazon account when you left, with oddly sort of like strange amounts of delay. Like the receipt didn't hit just as you left, because guess what?


Um, it actually involved a bunch of people in India reviewing the videos. And now I can't find that part of the-- yes. Here we go. Um. "Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India, watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off site and they watched you as you shopped."


Alex Hanna: Yeah, this is, this is, I mean, gotta be such a common story. Janet Vertesi, sociology prof at Princeton, has a good piece in Tech Policy Press, which is basically like, yeah, AI is offshoring redux. This is effectively what we're doing, we're finding new ways to offshore. So yeah.


Emily M. Bender: Yeah. Um, next one's another one, similar story, again 404 Media, um, well, that was an interesting little flip back and forth.


Um, Joseph Cox and Jason Koebler, September 6th, 2023, "Buzzy AI startup for generating 3D models used cheap human labor. Kaedim's founder was recently in a Forbes 30 Under 30 list for the company's 2D to 3D image conversion. In some cases, artists produced the work wholecloth, one source said." Um, so yeah, um, without dwelling on it, that's another version.


And here's one more. Um, again, Ars Technica with the sticker, "AI'll see you in court," where the I is AI, cute. Um, journalist Kyle Orland, date is January 27th, 2024. "Following lawsuit, rep admits quote 'AI George Carlin' was human-written. Creators still face name and likeness complaints. Lawyer says suit will continue." 


Alex Hanna: It's the kind of thing that's so funny, because you know George Carlin would've absolutely hated AI and AI discourse, and it's so poetic that the 'AI George Carlin' that they released was actually human-written. So yeah.


Emily M. Bender: And in this case, I doubt it was farmed out. I haven't read this carefully enough to know, like, you know, cheap labor, but I'm guessing, I'm feeling the same, that it was somebody who wished that they could be as good as George Carlin, and so they had to pretend that it was George Carlin remixed by AI.


Alex Hanna: Right. Yeah. Incredible. Yeah. Incredible. Um, let me, the next one is, is an interesting project, and it's a, it's a project which is, um, was released by, um, uh, a collective called KnowingMachines.org, and it's "Models all the way down" by Christo Buschek and Jer Thorp. And it's a really cool, long piece, so this is less of an AI Hell piece and more about a deep dive into the LAION dataset.


Um, and so they start talking about what is in the dataset. So, you know, the, the first thing is, "If you want to make a really big AI model, the kind that can generate images or do your homework or build this website or fake a moon landing, you start by finding a really big training dataset." And you scroll down and it has this interactive version of what's harvested here and talks about the means of harvesting the LAION dataset, um, talking about the legal ramifications. And the way that the Stanford Internet Observatory identified, um, a thousand, more than a thousand images of CSAM, which is child sexual abuse material, in one of these datasets. And so LAION has been taken offline.


Um, and then they effectively go through the filtering, um, mechanisms of LAION, effectively saying, you know, the kinds of things that they have effectively done filtering on and how it's all really, really questionable stuff. 


Um, and it makes this really great point of investigating datasets and the work of investigating datasets and how they are models built on models built on models. Um, this is a great point that people like, um, um, Paul Edwards have made. The idea that data is kind of models upon models when he talks about climate data. 


Uh, so in this article, these folks are talking about Common Crawl, um, and how, uh, that's kind of a place to start, the way that they basically have paired text to image, um, associations, and how they've basically used automated things like CLIP to try to label these different things. It's great. It's a great read. 


We, we can't do it justice just talking about it here. This is obviously my shit as someone who's real into critical data studies and is really interested in, in, in working on another book on ImageNet. Um, so yeah, check out this project.


Emily M. Bender: So highly, highly recommend it. And the, the, the thing that I haven't managed to scroll down to is there's a really interesting thing here, somewhere in here about, one of the sources for filtering had to do with, um, like visual, um, uh--  


Alex Hanna: Similarity?  


Emily M. Bender: Not, not similarity. 


Um.  


Alex Hanna: Cause it's, it was CLIP, right?  


Emily M. Bender: No, it's, it was below CLIP. It was further on down.


Alex Hanna: Yeah.  


Emily M. Bender: Talking about how much, um, how, how visual appeal. And so there was--  


Alex Hanna: Oh visual appeal, yeah.  


Emily M. Bender: There was a crowdsourcing step where they had people rating images based on how appealing they were. And it turns out it was like largely done by four guys in the Midwest who happened to be the crowdworkers who were doing it.


So that, that sort of typical style that comes out of the AI image generators is very much shaped by the particular taste of those four particular people.  


Alex Hanna: That's wild. That's like, um, that reminds me of another story about this kind of weighting, I think in surveys, where I think there was this, they kind of overweight--I don't, I might be, uh, mis-, uh, saying this, er, misstating this, but I think they have to kind of weight for, they kind of overweight, uh, for like Black Republicans in Chicago, because it's based on like one guy, or something. Um, yeah, but it's very much kind of a thing of that level going on.


Emily M. Bender: All right. I'm going to, I'm going to stop the share on this one and take us to, um, the next thing.


And sorry, it's going to take me a moment to remind myself of what the next thing is. Ah, yes. Yes. TESCREAL corporate capture. Um, so let me just get that one up and get over to Zoom. I'm getting better at this. I'm getting faster and faster.


Um, okay. So we have in three parts, the story of how we ended up with this terrible board, the AI safety board that was recently announced. 


Um, so the first thing is out of, um, The Byte at Futurism.com, um, by--I don't see the journalist. Um, Victor Tangermann. "Biden apparently got scared of an--of AI after watching the new Mission Impossible." And so this is reporting on a PBS interview, which would be a better source to point to, I think, um, where Deputy White House Chief of Staff Bruce Reed, um, says, um, that, uh, "the grossly exaggerated depiction of AI in, uh, Mission Impossible: Dead Reckoning Part 1 seems to have struck a nerve with the aging president." 


Um, and, uh, so, "the movie clearly hit a nerve at Camp David, with Reed, who watched the movie with Biden, recalling in his interview with PBS that if he hadn't already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about."  


Alex Hanna: This movie is really awful. I did a, I did an interview with the Washington Post where we, I talked about this movie in general, and it's just, and you know, it was great to actually go back to Mystery AI Hype Theater's roots and really do a Mystery Science Theater 3000 on an actual movie.


Except I was like the only person in the movie theater and I was just like furiously typing notes to myself on my phone. But it's really just, I'm just so upset that this kind of thing could really set off Biden, and at anyone that takes any of this stuff actually seriously.


Emily M. Bender: Right.  


And I don't think that the movie alone would do it if we didn't have the AI doomer hype floating around, which unfortunately we do. 


So the, the, the doomerism stuff infects the, um, the White House executive order. And then as a result of that, we end up--and now I'm reading from Ashley Belanger's reporting, April 17th, sticker "Confronting Doom," headline, "Feds appoint AI doomer to run AI safety at US institute. Former OpenAI researcher once predicted a 50 percent chance of AI killing all of us."


So we now have this person, um, whose name is, um, Paul Christiano. Yeah. Um, "who pioneered a foundational AI safety, safety technique called Reinforcement Learning from Human Feedback--" I'm sorry, but RLHF is not a safety technique. It's a machine learning setup. Um, all right. Um, "--but is also known for predicting that there's a 50 percent chance AI development could end in doom."


So that guy is now high up at NIST, um, running the U.S. AI Safety Institute, which we did not need. We need regulation of what companies are doing with surveillance, with automated, um, you know, spilling this non-information out into the world, et cetera. We don't need this.


And the next thing that comes, so the third part here, um, reporting in the Wall Street Journal, uh, by Dustin Volz, April 26th, 2024. 


"OpenAI's Sam Altman and other tech leaders to serve on AI Safety Board. Panel will advise Department of Homeland Security on deploying artificial intelligence safely within America's critical infrastructure."  


Alex Hanna: Yeah, and everybody, I mean, this board, it's pretty much dominated by tech CEOs, but also, um, defense CEOs, um, Lockheed Martin, and, uh, I think, I think also Boeing, um, but basically, you know, our--


Emily M. Bender: Boeing knows all about safety. 


Alex Hanna: Yeah, actually it's Delta and then, and then, and then also mayor, uh, Maryland Governor Wes Moore and Seattle Mayor Bruce, uh, Harrell, both Democrats. Someone else was mentioning in the chat about Seattle installing those, uh, those silly little kiosks.  


Emily M. Bender: Yes. Great.  


Alex Hanna: Yeah, very, very safe.  


Emily M. Bender: Love it.  


Alex Hanna: We're going to end on some good news though. 


So the last three. So, um, so the last three are, um, let's see here. Um, okay. So this is from Evan Greer, um, who is at Fight for the Future. "Another great move from the FTC. A world where you get your face scanned to access a website is not a safer world. There is no privacy protective way to scan people's faces in order to estimate their age. Well done, Lina Khan and, um, and, uh, Bedoya--uh, @BedoyaFTC, and all." And this is, the quote here is a screenshot. "FTC denies facial estimate--age estimation tool as parental consent mechanism. Um, the Federal Trade Commission on Friday denied an application for a new way to obtain parental consent via facial age estimation technology under the Children's Online Privacy Protection Act Rule."


Um, and then there are, uh, a few other people, including the Entertainment Software Rating Board, which, you know, if you see, uh, a rating on video games, that's the board that issues those. "Yoti and SuperAwesome had requested FTC approval for their use of their 'Privacy Protective Facial Age Estimation' technology, which analyzes geometry of a user's face to confirm their age."


Um, and then basically the FTC shot it down. So good job FTC on that.  


Emily M. Bender: Yes. Thank you, FTC. Next door--


Alex Hanna: Next one, yeah.  


Emily M. Bender: The SEC, also doing some good stuff. This is a press release from the U.S. Securities and Exchange Commission, "SEC charges two investment advisers with making false and misleading statements about their use of artificial intelligence." Um, and so, "'Here, um, we find that Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when in fact they were not,' said SEC Chair Gary Gensler. 'We've seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies. Investment advisers should not mislead the public by saying they're using an AI model when they are not. Such AI-washing hurts investors.'"


Which is kind of interesting, because like, if you think about what AI is supposed to be, anybody claiming to use AI is doing AI-washing.


Alex Hanna: Yeah. Yeah. Yeah. I mean, it's incredibly doing so at this moment. 


Emily M. Bender: All right. And then one last one, one last little modicum of accountability. And this is from the Times of London, I think.  


Alex Hanna: Yeah.  


Emily M. Bender: You want to read this one?  


Alex Hanna: Yeah, yeah. Um, "Uber Eats courier wins payout over racist facial recognition application. Um, Pa Manjeng, who is Black, was dismissed by the food delivery company when his security selfie check repeatedly failed to match the photo held by administrators." 


This is by Jonathan Ames, who is their legal editor. So, Uber Eats has to make a payout to this courier to end this legal claim that he was unfairly fired because the delivery company's facial recognition app is, uh, racist. Um, and so couriers have to go through this process to start their shift. Um, but it basically wasn't recognizing him.


And this is very similar to the case in which, uh, Lyft and Uber drivers had to log a facial recognition scan before they started their shift. Um, and it was, um, doing pretty, uh, poorly on, uh, trans drivers. Um, but I don't think there's been much, any accountability for them, unfortunately. Yeah.


Emily M. Bender: I think we got to celebrate the accountability as it comes in, even while saying, you know what, uh, Dr. Gebru and Dr. Buolamwini told you so.  


Alex Hanna: Yeah, right.  


Emily M. Bender: That this wasn't gonna work. Um, so I'm, I'm glad to see the accountability.  


Alex Hanna: Yeah. All right. Is that the end of it?  


Emily M. Bender: Uh, well, is that the end of it? That is not the end of it in the sense that when I selected these links this morning, I had, I think at least this many, again, that didn't make it in because we are flooded continually with Fresh AI Hell. But that's the end of it for today. Except I think you're promising me one last verse.  


Alex Hanna: That's right. And it's very, it, it, it, it really matches the tone that we're ending on.  


 (singing) We ride through the flood of such refuse and water. The hype theater belt where we take on such slaughter. Slogging through bullshit, we'll never relax, as shit talking AI is ridicule as praxis.  


Emily M. Bender: Ridicule as praxis, but sea shanty style, which is so much better. Thank you for that, Alex, and thank you to our amazing audience for sticking with us through this. I hope it's been as cathartic for all of you as it has been for me.  


Alex Hanna: Ah, such a great time. That's it for this week. 


Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park. Production by Christie Taylor and thanks as always to the Distributed AI Research Institute. If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify and by donating to DAIR at DAIR-Institute.org. That's D A I R hyphen institute dot O R G.  


Emily M. Bender: Find us and all our past episodes on Peertube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again that's D A I R underscore institute. 


I'm Emily M. Bender.  


Alex Hanna: And I'm Alex Hanna. Stay out of AI Hell, you landlubbers.

