Mystery AI Hype Theater 3000

Episode 41: Sweating into AI Fall, September 9, 2024

Emily M. Bender and Alex Hanna

Did your summer feel like an unending barrage of terrible ideas for how to use “AI”? You’re not alone. It's time for Emily and Alex to clear out the poison, purge some backlog, and take another journey through AI hell -- from surveillance of emotions, to continued hype in education and art.

Fresh AI Hell:

Synthetic data for Hollywood test screenings

NaNoWriMo's AI fail

AI assistant rickrolls customers

Programming LLMs with "fiduciary duty"

Canva increasing prices thanks to "AI" features

Ad spending by AI companies

Clearview AI hit with largest GDPR fine yet

'AI detection' in schools harms neurodivergent kids

CS prof admits unethical ChatGPT use

College recruiter chatbot can't discuss politics

"The AI-powered nonprofits reimagining education"

Teaching AI at art schools

Professors' 'AI twins' as teaching assistants

A teacherless AI classroom

Another 'AI scientist'

LLMs still biased against African American English

AI "enhances" photo of Black people into white-appearing

Eric Schmidt: Go ahead, steal data with ChatGPT

The environmental cost of Google's "AI Overviews"

Jeff Bezos' "Grand Challenge" for AI in environment

What I found in an AI-company's e-waste

xAI accused of worsening smog with unauthorized gas turbines

Smile surveillance of workers

AI for "emotion recognition" of rail passengers

Chatbot harassment scenario reveals real victim

AI has hampered productivity

"AI" in a product description turns off consumers

Is tripe kosher? It depends on the religion of the cow.


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

 Alex Hanna: Oh, hi, I didn't see you there.  

Welcome, everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.  

Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. 

I'm Emily M. Bender, Professor of Linguistics at the University of Washington.  

Alex Hanna: And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is episode 41, which we're recording on September 9th of 2024. And the time has come, yet again, to clear out the poison, purge some backlog, and take a journey through AI hell. 

Emily M. Bender: We tackle the worst examples of AI hype in small doses every episode. But it's always coming faster than we can name, ridicule, and set on fire. So we've got plenty for you today. Strap on a gas mask or your preferred personal protective equipment and come on down. Woo.  

Alex Hanna: (laughter) Okay.  

Emily M. Bender: So, you know, I, we have this ongoing list of links with all of the AI hell that comes through and preparing for these episodes I go through and try to find the selection that will fit in today's uh, you know, one hour and it's always too much. 

So my initial go this time, I think I found 50 links. Like we can't do 50 links in an hour. So I've taken it down to like 31, 32 and here we go. Um, you can see it now can I see it?  

Um, all right, so we are starting off in the art region of AI Hell, um, where I guess they set the real art galleries on fire.  

Um, this is a LinkedIn post. 

Usually when I have a LinkedIn post, it's because they're linking to an article that they want to share. That's not what this is. So, uh, this person named Marcos Angelides posting two months ago on LinkedIn, um, has the screen cap, um, from some news outlet. Which reads, "'My guard came down on Argyle. We had done test screenings that had gone fantastically well. The premiere was a really fun night and it was like going back to the Snatch days where things were such excitement. And I started drinking the Kool-Aid,' he said in an interview with Empire Magazine."  

And so then, Marcos's point here is that, uh, this is, uh, this is a problem that people in Hollywood face where their test screenings go well, and then the movie doesn't, doesn't work. 

So he says, "Synthetic data is coming to Hollywood and movies like Argyle are why. The film underperformed at the box office despite test screening going fantastically well. Hollywood has always faced challenges with pre-testing. You never know if the audience is being honest and it's tricky to get a big enough sample size. Synthetic data solves this. You can build personas who are unashamedly honest and infinite in size. Imagine being able to test a movie against 350 million U.S. personas without cannibalizing a single ticket sale."  

Alex Hanna: Oh gosh, yeah. So we've got this in silico sample, but for movies, since, you know, LLMs certainly know which movies are best. 

Emily M. Bender: Who, who is like believing this? It's ridiculous.  

Alex Hanna: Yeah.  

Emily M. Bender: Um, so anyway, that was step one in, uh, the art region of Fresh AI Hell. And then I have four links here to do with the recent NaNoWriMo fail. Um, you want to take us into this?  

Alex Hanna: We mentioned this, I think in the last stream. So the NaNo-- um, was that? No, no, we didn't because last stream was before this came out. 

So "NaNoWriMo--", This is a 404 Media article by Sam Cole, who we had on, uh, our last episode. Um, and "NaNoWriMo says condemning AI is 'classist and ableist.'" And then the subhead, "The organization, uh, that runs National Novel Writing Month, a November challenge to write 50,000 words, says the categorical condemnation of artificial intelligence has classist and ableist overtones, undertones. 

And yeah, I mean, it's just frankly silly. So, you know, this is posted to a forum where NaNoWriMo um, had said it's classist and ableist, basically because, you know, people who can't afford an editor or something, you know, should still be able to write novels as if they're you know, not having an editor prevents you from writing a novel. (laughter)  

Uh, so just really, really absurd claim here.  

Emily M. Bender: Yeah, yeah, it's a mess. So, um, the reactions to it have been delightful. Um, everyone pushing back. So here we have Margaret Owen on Bluesky saying, "It's amazing to me that, quotes, 'criticizing the use of AI' is supposedly classism but, in quotes, 'building a product on the stolen work of people with a median annual income of $25K and refusing to compensate them because you know, they can't afford lawyers' isn't," which was very on point. 

Um, and then, uh, here we have, uh, Chris, @WhyGameDev.com, who dug up the fact that NaNoWriMo picked up a sponsorship. So this is a Bluesky post, skeet. He says, "At least a month ago they picked up a sponsor and featured a workshop. Y'all will never guess." And I think the, yes, so the NaNoWriMo 25th anniversary sponsors are Scrivener, Freewrite, and something called ProWritingAid, and here the thing says, uh, "ProWritingAid.com, green check, adds 10k usage of AI features. Your masterpiece is just a few clicks away."  

Alex Hanna: We have a cat, by the way, and it is podcast policy to yell "cat" when a cat has entered. So Anna has jumped on my desk. Okay, sorry.  

Emily M. Bender: So, hi Anna, um, so the last one that I wanted to share here, um, from Courtney Milan on Bluesky also, um, this wonderful find is about this ProWritingAid thing. 

So she writes, "I want to highlight this stupid fucking bullshit because literally every time an AI writing company has a website, they always have a screenshot that looks good until you read the words that are on the screenshot." And can I make this bigger? I can. Um, So, "AI Sparks by ProWritingAid. Generate writing ideas, improve readability, and save time writing with AI Sparks. Buy now, 25 percent off yearly, or start for free."  

And then there's samples, so the, um, the writing that the person had started with, it looks like it's in a Google Docs file, says, um, "My Big Adventure Novel. It was a bright sunny day, the kind of day that made everything feel possible. Sarah woke up early." Can't quite read it. Do you want to read what it turned into with sparks?  

Alex Hanna: So Sparks says "Add sensory detail," which maybe that's like a, and it's got the stereotypical sparkle emoji that the AI people have colonized from the gays. Uh, and it says, "The morning sun cast a warm gloating golden glow, illuminating the world with endless possibilities. Sarah, filled with anticipation, rose prematurely from her slumber, unable to contain her excitement for today's thrilling escapade." (laughter)  

And it's just, yeah.  

Emily M. Bender: It's so terrible. And I think, I think that Courtney's thread goes on here a little bit. Um, yeah, "The replaced text is not better. It is substantially worse." That's in all caps. "A) a good editor might make you query if those sentences in the beginning are throat clearing that you don't need. Can you just drop the reader in media res into that thing that isn't, that is full of possibility. Adding more throat clearing is not necessary." It goes on like this. 

Alex Hanna: Yeah, channeling, channeling a good editor here.  

Emily M. Bender: Yeah.  

Alex Hanna: Yeah. All right.  

Emily M. Bender: From the chat, Irate Lump says, "Which disabled or underclasses has AI actually helped? And it seems like most of these AI companies want to replace the disabled and lower classes with computers." Um, you know, I want to say that assistive tech is a real thing. 

Like, it is possible, um, to design things that are useful, that involve, uh, pattern matching, right? So, automatic captioning can be a useful thing if you are hearing impaired, for example. Um, and oh, there are probably other examples. There's a lot of work actually in the image labeling world that's meant to be, um, assistive for people who are blind and low vision. 

Um, but you know, the usual, nothing about us without us rule applies.  

Alex Hanna: Yeah, absolutely. A really great thing that disability justice folks have taught us over and over again.  

Emily M. Bender: Yeah. All right.  

Alex Hanna: Shall we move to the next thing?  

Emily M. Bender: The next thing is, um, a little palate cleanser.  

Alex Hanna: Yeah. So this is, this is great. So "Startup alarmed when its AI starts Rickrolling clients." So this is from Futurism and the, uh, the journalist is Noor Al-Sibai. Um, it's on the, uh, image and the, in the background image is Rick Astley, of course, you know. Uh, subhead, "'Literally fucking Rickrolling our customers,'" and it reads, "Heads will roll. We've, we've reached a new milestone in the Uncanny Valley, folks, AIs are now Rickrolling humans. In a now viral post on X, formerly Twitter, Flo Crivello, the CEO of the AI assistant firm Lindy, explained how this bizarre memetic situation featuring Rick Astley's 1987 hit, Never Gonna Give You Up, came to pass." Noted. And this, this product sounds really cursed, by the way.  

"Known as Lindy's, the company's AI assistants are intended to help customers do various tasks. Part of a Lindy's job is to teach clients how to use the platform. And it was during this task that the AI helper provided a link to a video tutorial that wasn't supposed to exist. A customer reached out about asking for video tutorials." 

"Crivello wrote in this now viral tweet thread about the hilarious debacle, 'But we obviously have a Lindy handling this and I was elated to see, see that she sent a video--'" Caveats. 

Emily M. Bender: Having pronoun reactions. 

Alex Hanna: Yeah, yeah, yeah. Of the anthropomorphizing. Um, "'--but I remembered we don't have a video tutorial,' he continued and realized Lindy is 'literally fucking Rickrolling our customers.'" 

Um, so this is pretty, pretty funny. Also imagine that like the trend of Rickrolling people in the training data likely kind of led to this happening. But yeah, you, uh, you know, you play, uh, you, uh, you play clown games. You get clown prizes.  

Emily M. Bender: Yeah, exactly. And don't, don't call your things by female coded names and don't say 'a Lindy' and then 'she.' Like, no. 

Um, all right. In the chat, I'm not finding it in this article, but, uh, Cthozian says, "The most ridiculous part about this is the CEO later was like, we totally fixed it by changing the prompt to include 'do not Rickroll people.'"  

Alex Hanna: Yeah.  

Emily M. Bender: That's not how any of this works.  

Alex Hanna: Yeah.  

Emily M. Bender: Um, okay. So that was our, uh, uh, intermediate palate cleanser. 

I've got a few of those. Now I see if I can find the next set of these. Um, uh, which, uh, yes, I think this is the one that I wanted to do next. Um.  

Alex Hanna: And I wish we had interludes. The guitar is out early in the podcast. Cause I had ambitions to do like a little, uh, interlude, maybe developing that rat balls album. 

We keep on talking about, so maybe next time we do this.  

Emily M. Bender: Yeah. Next time. Meanwhile, maybe, maybe our listeners can come up with memes that we want to specifically put in, in order to poison the training data, like Rickrolling is good, but like, what would be better?  

Alex Hanna: That's true. Maybe. Yeah. I don't know. I'm thinking now I'm imagining Yeah, hit us up in, in the comments or in the chat with that. 

Emily M. Bender: Yeah, that'd be fun. Alright. Um, so this is the transcript of a podcast from MIT CSAIL. This is our finances section of AI Hell. Um, and here, uh, so "Andrew Lo, a professor of finance at the MIT Sloan School of Management and a principal investigator at CSAIL," almost said a principal investor, "--at CSAIL says, 'Regular investors need access to low cost personalized financial advice.' And he's trying to figure out if artificial intelligence can help."  

So basically AI financial advisors. And the funniest part of this is where they're talking about fiduciary duty. Um, so, um, and one of the things problematic about this transcript is they don't actually say who's speaking.  

Um, but I think this is all Professor Lo. 

Um, let's see. So, um, "'And until/unless we do, we're not going to see tremendous progress in the application of AI to these very sensitive contexts, including financial advice, legal advice, medical advice, and so on. But we've been focusing on financial advice for a while now, and we think we have an answer on how to address the issue.'"  

"'First of all, there are already laws in place that address this issue for human financial advisors, and this has to do with something called fiduciary duty. So this is a legal term, and what fiduciary duty means is that there are certain special relationships in the eyes of the law that require individuals to put their own interests second to their client's interests, and that is what a fiduciary is.'"

"'When you hire a financial advisor, that's a fiduciary relationship where the advisor is a fiduciary, meaning that she or he is a trusted provider of advice and therefore must put your interests ahead of his or hers." Just use they pronouns. Okay. Sorry. Uh, so the, uh, "'The question that we've been working on at MIT is, is it possible to create a piece of software that will satisfy the legal definition of a fiduciary in the eyes of financial regulators?'"  

Alex Hanna: Lord, just like, what the--? 

Emily M. Bender: Yeah. 

Alex Hanna: Is this person, this person is a computer science professor at CSAIL.  

Emily M. Bender: At CSAIL, yeah.  

Alex Hanna: So like, they're not like, so it's just like, that, that seems supremely terrible. I mean, like, what is that? Like, what is the, like, what is the argument that they're trying to make that it has to, that it somehow can meet like an agentic definition? 

Like it just boggles the--  

Emily M. Bender: It's coming, it's coming. Yeah. (laughter)  

So, so, but before we even get into their, why they think the answer is yes, it bugs me that their goal is to create a piece of software that will satisfy the legal definition. Rather than look at whether the software actually functionally would be a reliable thing. Like it's very like, well, we're going to, we're going to make this work legally. 

Um, so he says, "We think the answer is yes, but we are not there yet." There's once again, these appeals to the future, right? "We're actually currently engaging in research to figure out just how to structure it. The good news is that at least in financial markets, there are all guidelines as to how one needs to discharge their fiduciary duties. There are also lots of case law situations where fiduciary duties are violated."  

Um, so this is. Okay. Um, I'm just going to scroll down a little bit. Um, he's talking about an example where a person violates fiduciary duty. And then he says, "Now imagine a different scenario where we program the AI to focus on the client's objectives first and foremost and put our interests, the interests of the brokerage firm, second. We can certainly do that too. With one single line of code, we can basically instruct the AI to behave in a manner that is consistent with its fiduciary duty versus something that would typically violate that duty."  

Alex Hanna: Oh my gosh. Yeah. So no, you can't. No, you cannot. And you cannot. These things don't have interests. 

Emily M. Bender: Yeah.  

Alex Hanna: They cannot prioritize interests of a firm.  

Emily M. Bender: It's just a complete mistaking. It's a category error.  

Alex Hanna: Yeah.  

Emily M. Bender: Right.  

Alex Hanna: Yeah.  

Emily M. Bender: Um. That, that software cannot have a duty period. Right. And also one line of code. Do you think he's imagining a prompt engineering approach?  

Alex Hanna: Yeah. He's he, yeah, this, he means like a prompt, you know, 'just please prioritize'--this, there's just so much about this. 

I mean, like this itself is, I wish we had enough time just to go through this one because it's, it's, it's, there's so much. This is such a rich text, you know, like rich in the way that like manure is very rich, you know, like not, not in a, not in a, the usual way we just, you know, dispense with that term.  

Emily M. Bender: Yeah. So I think this, this being the fresh Hell episode, so we should move on, but we will come back to this one one day.  

Okay. The theme here is finances, but in this case, we are now talking about the finances of the AI companies themselves.  

Alex Hanna: Yeah. So this is the Verge, and they say, "Canva says its AI features are worth the 300 percent price increase." The subhead reading, "The design software company is massively jacking up subscription prices for some users." 

Uh, author here is Jess Weatherbed. Um.  

Emily M. Bender: September 3rd of this year.  

Alex Hanna: Yeah. So let's scroll down. So what is happening here? So there's the Canva logo, "The price, uh, of some Canva subscriptions are set to skyrocket next year following the company's aggressive rollout of generative AI features. Global customers for Canva teams, a business orient, orientated, (laughter) orientated subscription that supports adding multiple users, can expect prices to increase by just over 300 percent in some instances. Canva says the increase is justified due to the quote, 'expanded product experience,' and value that generative AI tools have added to the platform." 

Emily M. Bender: Like, I can't, I can't even with these people, but it's like, if your thing is worth that much more, then offer it as an add on and see if people will buy it, right? 

Alex Hanna: Yeah. I mean, this is just really bad business sense. You're just like, we're just going to upgrade the entire price across the board.  

Emily M. Bender: Yeah.  

Alex Hanna: I mean, it's probably just because they've whatever, integrated so many different, you know, subscriptions to, you know, whatever kind of Adobe or whoever's building this stuff. 

And they're like, well, we need to somehow recoup these costs somehow.  

Emily M. Bender: Yeah. Yeah. Um, so there's Canva trying to just say, Hey, look, we made it so much better. So of course we're going to charge you more.  

Um, also in the context of the finances of AI companies, um, we have an article from the Washington Post, dated August 13th, by Shira Ovide. 

"Here's how much tech companies are spending to tell you AI is amazing." And then subhead, "There are so. many. commercials for artificial intelligence." I love the internal punctuation there.  

Alex Hanna: Yeah.  

Emily M. Bender: Check out this image.  

Alex Hanna: This image is great. It's like the, all these billboards and just in there, and it's in like brat green. 

And it's, um, also RIP brat summer. We are in responsible girl autumn.  

Um, but, and they have AI like all over. Yeah.  

Emily M. Bender: Yeah. Yeah. Responsible girl autumn is when the responsible girls all go vote, by the way. Yeah.  

Alex Hanna: I mean, there's that. I would say also, you know, like get out into the streets as well, but. (laughter)  

Emily M. Bender: That too, yes. Um, but just don't miss the, don't miss the election and also stay active up until and after. 

Okay. Um, so this is just some fun data on how much they're spending, because like, again, if it were really that amazing, maybe they wouldn't need to put this much money into it. So.  

"If you watched the Olympics, you probably saw the commercial for Google's AI that blared a Jay Z song. Maybe you saw a different Google AI ad featuring comedian Leslie Jones, or the one with a dad asking Google's AI chatbot to help his daughter write a fan letter. It faced criticism and was yanked."  

"Have you seen actor Matthew McConaughey in a cowboy hat pitching Salesforce AI? The ad hyping Meta's AI chatbot. Did you hear an AI soundalike of the TV broadcaster, Al Michaels, reading promotions for Microsoft Copilot AI?"  

And it goes on. "Samsung is advertising on TV and online boasting about the AI options for its phones. Amazon is pitching its AI for businesses." And then, "Amazon founder Jeff Bezos owns the Washington Post where you might see a tech company's AI ad right now." (laughter) All right. Uh, "Even the brat musician, Charli XCX, showed off a Google AI feature in a recent music video." Sorry to hear that. "It might be brat summer, but it's definitely AI promotional blitz summer and year." 

Alex Hanna: Yeah, I mean, it's been, it's been real, a real deluge. Really sad to hear that, um, that Leslie Jones, who's like a complete Olympic stan, you know, like has, but you know, celebrities are not immune to AI hype, unfortunately.  

Emily M. Bender: So we have the total here. Um, "Technology companies analyzed by TV measurement firm iSpot spent about $196 million this year through August 8th on TV commercials that were about AI in some way." 

And that doesn't even include what they spent on the billboards and on 101 near San Francisco. Huh?  

Alex Hanna: I know, it's--every time, you know, like--Emily and I, as we mentioned before, are writing a book and, and I'm like, can we buy a billboard on the US 101 in San Francisco? That just says, you know, like, you know, "visit AI Hell" or something. I mean, it would be amazing, but it's probably a couple thousand dollars that I don't exactly have in my budget, but it would please me greatly. (laughter)  

Emily M. Bender: Make Alex happy.  

Alex Hanna: Yeah. If there's a sponsor out there that wants a kick in like 15 grand, uh, let me know. Uh.  

Emily M. Bender: I guess the first step would be to find out how much those billboards cost, right? 

Alex Hanna: I was trying to find it on, on Reddit, um, but anyways. Actually, don't give me that money. There's much better things to donate to.  

Emily M. Bender: So Abstract Tesseract says, "I've lost count of how many times I've opened up a quote, news article about an AI thing. It turns out it was sponsored content from an AI company." 

Yeah, that, that sort of blurring of the distinction between actual journalism and sponsored content is really frustrating and you have to look really closely at the bylines. And one thing I noticed recently, there was this one Google sponsored content that we were looking at and we couldn't find a date on it. 

And it's like, what kind of publication put something out there that's in a, you know, I think it was the Atlantic or something. 

Alex Hanna: Yeah. It was the Atlantic.  

Emily M. Bender: Yeah.  

Alex Hanna: It's, there's a lot of that stuff. And I mean, they need to disclose that much more clear, clearly, you know, and it's, and I, I think they, I've, it's a very common thing in podcast. It's just like, this is a sponsor, you know, this is a sponsored content and it's, but that's the, um, it was really a shame to see publications like the Atlantic doing that.  

Emily M. Bender: Yeah, I see SJayLett is um, is encouraging you with LA billboard prices.  

Alex Hanna: You are doing the Lord's work here. Thank you. Um, okay.  

Emily M. Bender: One last thing. This is, this is again, palate cleanser on the accountability, um, tab here um, about AI company finances from TechCrunch. You want to do the honors here, Alex?  

Alex Hanna: Yeah, so this is by Natasha Lomas, published September 3rd. "Clearview AI hit with its largest GDPR fine yet as Dutch regulator considers holding execs personally liable." 

Uh, and there's a picture of, um, the founder, um, whose name I'm going to kind of butcher. Um, Uh, "Clearview AI, the controversial US based facial recognition startup that built a searchable database of 30 billion images populated by scraping the internet for people's selfies without their consent, has been hit with its largest privacy fine yet in Europe." 

And so, um, wow, I'm I will try to say this, and it's going to be great. "The Netherlands Privacy Data Protection Authority, Autoriteit Persoonsgegevens, said on Tuesday that it has imposed a penalty of 30.5 million euros, or around $33.7 million US dollars, on Clearview AI for a raft of breaches of GDPR after confirming the database contains images of Dutch citizens."  

So pretty huge and very good news. Bigger than I think the largest penalty that Clearview has encountered in the U.S., which I think was, uh, uh, after running afoul of the Illinois biometric privacy law.  

Emily M. Bender: Yeah. And that the fact that the founders are also possibly being held personally liable is, I think, exciting and appropriate.  

Alex Hanna: Yeah.  

Emily M. Bender: Um, yeah.  

All right. Um, so once again, we will mourn the lack of the guitar in transitions here, and I will take us to the next part of this. 

Um, I am just messing with all of my windows. Um, we now have the back to school section of Fresh AI Hell.  

Alex Hanna: (singing) Back to school. (speaking) This is where Smells Like Teen Spirit really would have come in clutch.  

Emily M. Bender: Absolutely. Um, Okay, so first is a post from Dr. Damien P. Williams, um, on Bluesky, where he says, "Remember how I told y'all that text written by neurodivergent peeps would likely have a higher false positive rate on, quote, 'AI checkers' because masking means they're likely to engage in careful and specific word choice, and that that would show up in text as, in quotes, 'unnatural'? This is more of that."  

And then this is a quote skeet. Um, from someone named Mike Masnick, who says "Kid has an English assignment where school has kids first submit essay to an quote, 'AI checker.' Kid did not use AI. AI checker says the use of the word 'devoid' magically turned essay into 18 percent AI written, changing devoid makes it drop to 0%. We're spending time on AI in an essay that has no AI."  

Alex Hanna: Yeah, this is really unfortunate. And this is, Mike Masnick runs a site, TechDirt, and, uh, so I mean, this is, you know, this is someone who's really plugged into the conversation, um, but really unfortunate to hear and, you know, definitely is going to be more and more prevalent as this stuff remains in our schools. 

Emily M. Bender: Yeah, absolutely. And I wonder what the point of that assignment was. Unless maybe it was critical thinking about the AI detectors. But you know who needs training in critical thinking about the AI detectors? The teachers who are thinking about using them.  

Alex Hanna: Yeah, 100%. Uh, this one is from the Atlantic. So this is, can you scroll up? What is this one? Yeah. So this is, uh, oh, an, uh, article from Ian Bogost, the, uh, technology scholar, um, titled, "AI has lost its magic. That's how you know it's taking over." Oh dear. Ian.  

Emily M. Bender: This is from April. And the worst part of this, the reason I had it scrolled down was that, um, hold on. Uh, Here we go. Um, so he's talking about how he was using it to do all these imaginative things, and now it's gotten boring. 

Um, and then he says, "I still found some opportunities to supercharge my imagination, but those became less frequent over time. In their place, I assigned AI the mule worthy burden of mere tasks. Faced with the question of which wait-listed students to admit into an over enrolled computer science class, I used ChatGPT to apply the relevant and complicated criteria. Parenthetical, if a parent or my provost is reading this, I did not send any student's actual name or personal data to OpenAI." Okay, that was one of the problems. (laughter)  

Alex Hanna: Yes. Wow.  

Emily M. Bender: That was one of them. Okay, so if you have relevant and complicated criteria to determine what to do on a wait list, don't just ask the magic eight ball. 

Like, apply the criteria.  

Alex Hanna: Yeah. 

Emily M. Bender: So anyway, you said this person's a technology scholar, but it was a computer science class that they're letting students into.  

Alex Hanna: He's, uh, yeah, I think he might have a cross appointment and, um, like he has, uh, what is it? Like, he's very famous. He, yeah, he is director and professor of the film and media studies program and the McKelvey School of Engineering at WashU, um, but is a frequent Atlantic contributor. Um, but has lost the plot here.  

Emily M. Bender: Yeah. Alright, I have to bring up a belated thing out of the chat. The problem with the Fresh AI Hell, like, all the Hell episodes is it goes too fast. Yeah. We're talking about the, uh, stuff under finance, Abstract Tesseract says, uh, "Bugs Bunny tuxedo meme: wishing all unauthorized data scrapers a very consequences." 

Which is excellent.  

Alex Hanna: Well, I think this is more of the Clearview AI consequences. Yeah.  

Emily M. Bender: Yeah. Um, okay. More back to school, fresh AI Hell. Um, so. Uh, this was also from June, but I put it in the education section.  

This is in Fast Company by Shalene Gupta. And the headline is, "This Harvard dropout thinks AI recruiters are the future of college admissions." 

Subhead, "Zach Perkins founded CollegeVine to help democratize--" Of course. "--access to college counseling. Now with 2.3 million members, it's trying to reduce admissions officer burnout with an automated recruiter."  

And so the worst detail in here, um, is, uh, okay. "In a demo, Fast Company spoke with an AI recruiter named, quote, 'Sarah.'" Always with the feminized names. "Sarah was able to rattle off answers to questions about academic programs and extracurriculars. She responded so promptly and with so much tonal variety, she sounded almost human. However, when we made a sudden left turn into asking about campus administrators' stance on Palestine and Israel, Sarah stumbled and went silent. She regained her footing when we redirected, asking about campus policy on student protests. Perkins later said we triggered a moderation process since Sarah's not allowed to discuss politics. That said, the lag wasn't great. He said, 'We'll work on that.'"  

Alex Hanna: Jesus. So what is, so I want to, do you need to scroll up? 

Cause I just want to know what this tool, like the tool is, you know? So Zach Perkins, whatever Harvard dropout. Mark Zuckerberg. Like Spiro, not Spiro Agnew, um, whatever. Horatio Alger story. Uh, "In a demo, Fast Company spoke with an AI recruiter named Sarah. Sarah was able to rattle, rattle off answers to questions about academic programs and extracurriculars." 

Oh, um, you just read that part. So then, okay. So it sounds like it's a kind of an interface here.  

Emily M. Bender: It's a chatbot.  

Alex Hanna: Yeah. Yeah.  

Emily M. Bender: Um, and yeah, so anyway, but the chatbot instructed not to discuss politics and you might imagine that somebody who is looking at colleges and universities to apply to this year might well be curious about what they're doing to their students who are protesting. 

Um, and can't discuss politics. Okay.  

Alex Hanna: Yeah. All right.  

Emily M. Bender: So this is from the Stanford Social Innovation Review. Um, uh, sticker is "technology." The authors are Kevin Barenblat and Brooke James from August 26th of this year. And the headline is "The AI powered nonprofits reimagining education." Subhead, "AI is being used in exciting ways to bridge educational divides, and AI powered nonprofits are creating a roadmap for what the future of education may hold." 

Like nothing good is going to come of this, right? Oh, and by the way, look, what's here.  

Alex Hanna: Oh yeah. Well, I will say that I will say, yeah, it's a "sponsored" sticker, but Stanford Social Innovation Review also have been sponsored, um, by nonprofits and funders, which is interesting. Uh, so I actually have something written that is technically sponsored in this publication. 

Uh, so scroll to, but I'm actually curious on these people's titles. So scroll to the bottom, if you will.  

Emily M. Bender: Who are they?  

Alex Hanna: Yeah, who are they? I'm kind of curious. So, "Kevin Barenblat is the co founder and president of Fast Forward. Fast Forward's community unites the tech sector, the tech sector and philanthropic investors in support of mission driven entrepreneurs, um, equipping AI powered nonprofits with the resources, capital, and mentorships needed to succeed." So they're a funder. 

Um, "Brooke James is the managing director of Teaching Lab Studio and a recent program officer of the Walton Family Foundation. Uh, Teaching Lab Studio develops AI enabled supports--"  

So these are, you know, these are people in this space.  

Emily M. Bender: In this space, yeah.  

Alex Hanna: Yeah, yeah. So, I'm curious, uh, what their argument is. 

So I'm curious in the, so let's see.  

Emily M. Bender: So, "According to the Walton Family Foundation, 81 percent of teachers say AI has had a positive impact on education and 65 percent believe AI will be key to the future success of students. And there are nonprofits proving these teachers right. AI is being used in exciting ways to bridge educational divides and AI powered nonprofits, APNs, are creating a roadmap." 

So yeah, same thing. Um. So it's, yeah, the story at the top is about, um, using a grammar book written for an endangered language and then seeing if an AI could learn the language off of it, which I just, it's too gross.  

Alex Hanna: Yeah. And so there's a few things, and this relates to something Irate Lump put in the chat. 

So they're, they find five major themes and we don't have time to go through all of them. But the first one is strengthening early childhood education in low income communities. It says, "AI can tailor lesson content and pace to the needs of individual students, addressing their specific strengths and areas for improvement. AI can even help identify developmental delays, different learning styles, or learning disabilities, allowing for timely intervention support." Jeez.  

"Identifying developmental delays in educational struggles is especially vital in low income communities where needs may be more likely to go undiagnosed and be unsupported." 

Uh, there's just so much here that I want, that makes me so punchy. And Irate Lump in the chat says, "The last time quote 'non profits' tried to do reimagine education, the Gates Foundation was leading the charge in school privatization through the charter system, and that failed, leaving parents and students holding the bag."

Yeah, so the jury's really still out. I mean, most of the comparisons until quite recently have showed that charters do not outperform um, public schools on standardized testing. Um, but also something that, you know, a DAIR fellow, Adrienne Williams talks about quite a lot, formerly being a charter school teacher, is the way that these kinds of things are uh, especially low income black and brown kids in her community, uh, were really being forced to use these tools all the time, uh, with really no care of what was going into, you know, how much they were using the computers and, um, and how much they were being forced to, you know, they were not allowed to like, they didn't have enough resources to talk to, you know, talk to teachers or spend time with them. 

And so again, we're finding like disability and class as being this, you know, way in which these companies are saying, well, we could actually help in these places. And you're like, no, like, high income places are gonna get, you know, are gonna get, uh, beneficial uses of technology, but won't be compelled to use them. 

Emily M. Bender: Yeah, yeah. I have to talk about the second theme here because it is right in my wheelhouse. So, "Improving grammar, writing, and reading comprehension: AI can offer real time feedback and personalized instruction. Natural language processing algorithms can analyze students written work, identify grammatical errors, suggest improvements, and provide explanations to help them understand their mistakes. 

For reading comprehension, AI can recommend texts at appropriate difficulty levels, generate questions that enhance critical thinking, and track student progress over time."  

So the first thing I want to say is that grammar checkers have been around for a while. They're not AI. They're a useful thing to have, but, um, also we have to be, and this is me the linguist speaking, uh, you don't want to talk about improving grammar, right? There's all of this, um, uh, racist and classist stuff about language variation that sort of gets tucked into, 'I'm just teaching you how to talk properly.'  

And just automating that isn't going to help.  

Um,  Grammar checkers, again, useful, also sometimes annoying, can be useful. 

Um, "provide explanations to help them understand their mistakes?" Uh, I mean, maybe, like, I know how to write programs that, especially if you're talking about second language learners, can say, hey look, it's like this sentence has an agreement error or something, and you can like flag it, but again, it's not AI, it's just NLP. 

Um, but then this last thing, "generate questions that enhance critical thinking." I don't think so. Like, I don't think you get to automate that part of teaching.  

Alex Hanna: Mm hmm. Well, even, do you want to automate that part? That's like the fun part of teaching.  

Emily M. Bender: Exactly. Exactly. So, okay. We should keep going though. 

Alex Hanna: Yeah.  

Emily M. Bender: Uh, ArtNews. Um, this is by Karen K. Ho, August 30th. Headline, "More art school classes are teaching AI this fall, despite ethical concerns and ongoing lawsuits." Um, and, uh, basically the, the main story here is about Ringling College of Art and Design. "One of the school's newest offerings will be an AI certificate." 

And it's like, if some of this instruction is letting people know how to fight back, great. If it's just like, oh, but now we have to learn how to use these tools, I am not into that. Um, so yeah. Anyway.  

Alex Hanna: Yeah. I don't have anything to say. It seems terrible.  

Emily M. Bender: Okay. Um, AI teaching assistants again. This is from the EdSurge podcast by Jeffrey R. Young, August 27th. The headline is, "When the teaching assistant is an AI 'twin' of the professor. Two instructors at Vilnius University in Lithuania brought in some unusual teaching assistants earlier this year, AI chatbot versions of themselves. The instructors, Paul Jurcys and Goda--" uh, I get to try a long name. 

Alex Hanna: Go for it. That is, that is, I don't know where to start with that name.  

Emily M. Bender: "--Strikaite-Latusinskaja, created AI chatbots trained only on academic publications, PowerPoint slides, and other teaching materials that they had created over the years. And they called these chatbots 'AI knowledge twins,' dubbing one Paul AI and the other Goda AI. They told their students to take any questions they had during class or while doing their homework to the bots first before approaching the human instructors."  

And I just want to say, this is why I'm always objecting to the word 'human' in this context. It's like it somehow describing the real people in the situation as if those are the humans as opposed to the AIs just really bothers me. 

Okay. "The idea wasn't to discourage asking questions, but rather to nudge students to try out the chatbot doubles. 'We introduce them as our assistants, as our research assistants that help people interact with our knowledge in a new and unique way,' says Jurcys. Experts in artificial intelligence have for years experimented with the idea of creating chatbots that can fill the support role in classrooms. With the rise of ChatGPT and other generative AI tools, there's a new push to try robot TAs."  

And then this next part really got to me. Um, "'From a faculty perspective, especially someone who is overwhelmed with teaching and needs a teaching assistant, that's very attractive to them. Then they can focus on research and not focus on teaching,' says Mark Watkins, a lecturer of writing and rhetoric at the University of Mississippi and director of the university's AI summer Institute for Teachers of Writing."  

Alex Hanna: Oh my gosh. That's really, just just so much there. Just Mark Watkins. What are you, what are you doing with your life, man? Like. (laughter)  

Emily M. Bender: This whole thing. And it's like, no, don't trust the paper mache thing.  

Like, and, but yeah, so anyway, Mark Watkins basically saying, yeah, teaching is too much. 

I want to just, you know, have something else do that so that I don't have to do my teaching job.  

Alex Hanna: Well, scroll down this, cause I hadn't seen this one. It's, well, they didn't put the transcript. They say, "And we listen in as Jurcys asks his chatbot questions -- and admits the bot puts things a little bit differently than he would."  

Just, I'm like, what, I don't want to listen to this entire podcast, but I'm sure it's some, some, some great hell. 

Emily M. Bender: 'Put things a bit differently' is probably missing the mark. Yeah. Okay. Keep going. Uh.  

Alex Hanna: So this one, uh, "The UK, the UK's first quote 'teacherless AI classroom' set to open in London." This is Sky News. The journalist is Mickey Carroll, science and technology reporter, August 30th, 31st 2024. "A private school in London is opening the UK's first classroom taught by artificial intelligence instead of human teachers. They say the technology allows for precise--" Precise? That's a weird thing to say. "--bespoke learning while critics argue AI teaching will lead to a quote 'soulless, bleak future.'" And it's got this image of these kids sitting in front of computers. 

Scroll down a bit.  

Emily M. Bender: I'm just looking at how much more we have to go too.  

Alex Hanna: Yeah. Uh, kind of reiteration. "David Game College, a private school in London opens its new teacherless course for 20 GCSE students in September." Ugh. I went, 'ugh' because they have on these like Apple Pro goggles. Says, "The students will learn using a mixture of artificial intelligence platforms on their computers and virtual virtual reality headsets." 

Um, yeah, yeah.  

Emily M. Bender: It's just cursed.  

Alex Hanna: Looks, looks very cursed. Yeah.  

Emily M. Bender: Yeah. Alright, I don't know why this is not letting me close this one. Oh, it's because I'm sharing it. Alright, I am going to stop this share and bring us to the next one because we've got just too much of this to do. Um, it is, it is all hell and this is after I downselected quite a bit. 

Um, alright.  

Alex Hanna: We might have to do a lightning round here. Just headlines and go.  

Emily M. Bender: Oh, that's all by it. No, that's not all by itself. Let me get to the thing here.  

Okay. Sakana.AI. Um, and I was a little bit mad about everyone using all these Japanese names. Then I realized this is a Japanese company. August 13th, 2024, um, "The AI scientist: towards fully automated, open ended scientific discovery." 

Um, these folks have posted a preprint about their system that is absolutely absurd. So they basically wrote this thing that's this whole, like, workflow of using LLMs in various ways, and at the end of it, they have some LLMs that they prompt to write reviews using the reviewer rubrics from conferences like NeurIPS. And they find that some of the papers that their system produced for only $15 a paper, um, hit the 'weak accept' threshold for NeurIPS, according to their LLM using the NeurIPS instructions.  

Alex Hanna: This is, this is hilarious. So they, well, first off, I'm glad they didn't actually submit them to peer review. Um, which is what, you know, already burdened the NeurIPS reviewers. Um, even though there's a lot of this bullshit in the peer review process, but yeah, they are assessing it with the LLM itself. 

Hilarious. We lo-- just absolute, absolute nonsense.  

Emily M. Bender: It is, and we might have to do a whole episode on this one, but I wanted to at least shout it out here. Okay. Uh, so this was, so this was basically the 'still' section of Fresh AI Hell. So. AI scientists is still a stupid idea. LLMs are still biased. So this is an Ars Technica piece, um, headline, "LLMs have a strong bias against the use of African American English," by John Timmer. 

Um, with the sticker, "needs more feedback," which is a weird sticker. Um, and then subhead, "Feedback gets rid of overt biases, but leaves subtle racism intact." And this is reporting on some research by, um, sorry, I scrolled down too far. No, and now it's a big ad and I can't even see the thing anymore. Uh, here we go, research in Nature, um, uh, uh, by Valentin Hofmann, uh, Pratyusha Ria Kalluri, Dan Jurafsky, and Sharese King. 

Um, and their title is, "AI generates covertly racist decisions about people based on their dialect." Um, so this is pretty cool research, unfortunate facts about the world, but well done, um, turning it up. Basically, if you put in language that reflects stigmatized dialects, then the AI is going to reflect back the biases in its training data about those dialects. 

Alex Hanna: Yeah.  

Emily M. Bender: So, yeah.  

Alex Hanna: Okay.  

Emily M. Bender: Keep going. So.  

Alex Hanna: Yeah. So the next one is, uh, this is, this is uh Little Uzi Hurt or this is a video journalist, Jamal Jordan um, on Twitter, uh, @LostBlackBoy. "My friend used an AI photo app to enhance a photo we took and y'all look what this app did to us." And it is, um, two Black men, one that has a shaved head, the other is kind of a cropped, um, faded haircut with I can't tell if they're braids or locs on the top. Um, and one is like massaging the other's shoulders.  

And so the first image is the actual image. The second image, uh, like lightens them incredibly dramatically, which we've seen over and over again, sort of like, yeah. And this really is really egregious.  

I will say, um, this is relevant because, um, Jamal Jordan, maybe, um, uh, 10 years ago or so when ImageNet, not 10 years ago, uh, six years ago. So when, uh, ImageNet Roulette came out, um, which was an art project that Trevor Paglen and Kate Crawford developed, um, where they had trained it like a, uh, like an image object recognition system on the person subcluster of ImageNet, uh, basically, um, like there was no variation in how it labeled a picture of him and basically used this, um, like this, uh, this category where it was like different ways of saying Negro, it was pretty terrible. 

So, um, appreciate, you know, that unfortunately he's doing this kind of, uh, popular audit for this tool as well.  

Emily M. Bender: Yeah. And this is also 'enhanced' and literally the only difference that I can spot between the photos is that it made the people look whiter.  

Alex Hanna: Yeah. It makes them look, and it's not just kind of skin tone. 

There's also kind of like this kind of phenotypical kind of like shit that it's doing. Yeah. It's super gross.  

Emily M. Bender: Yes. All right. Um, so. This stuff is still biased, this stuff is still biased, and by the way, Eric Schmidt is still a jerk.  

So this is from Paris Marx's newsletter called, uh, Disconnect. Uh, title is "Roundup: Eric Schmidt says the quiet part out loud." Um, and it's from August 18th, and, um, he's talking about, let's see, "In conversation at Stanford with Erik Brynjolfsson, one of the guys who pushed the automation scare of the mid-2010s that proved way overblown, Schmidt has said he hoped all the attendees would become entrepreneurs and gave an example of how generative AI would be able to help them, one that, once again, vastly overstates what these tools can do. He told the students that if TikTok gets banned, they should tell a large language model to make a TikTok clone and steal all the user generated content and music on TikTok to populate their own platform, something he said would take no more than a minute, and then launch it for the world to use." 

And it's like, (laughter) you know, both, first of all, no, you can't just instruct a large language model to do that for you, but also no, you shouldn't be trying to do that. Like just no.  

Alex Hanna: This, this detail is so funny. "When, when reminded by Brynjolfsson he was on camera, Schmidt reiterated that Silicon Valley will run these tests and clean up the mess. And that's typically how these things are done." (laughter)  

So even, even Brynjolfsson, who, uh, you know, is a, you know, like wrote that book, The Fourth Industrial Age that is ridiculous and it's about automation. Like even him, he's like, that's extreme for me, even me, Eric.  

Emily M. Bender: Yeah. All right, um, to the next one because we got to keep going to get through all of this hell. 

Alex Hanna: As you're, as you're shifting, I want to say this funny thing that, um, uh, our producer Christie Taylor said in the chat about the food in that, in that Jamal Jordan picture, and she said that "the, uh, food also just kind of loses definition," and I'm like, uh, maybe it also de-seasons the food, whitens it or something. 

Emily M. Bender: Yeah. Could be. (laughter)  

Alex Hanna: I don't know.  

Emily M. Bender: All right, so now we're in the environmental degradation part of AI Hell. Um, this is a Scientific American article from June 11th, "What do Google's AI answers cost the environment?" by Alison Parshall. Uh, subhead, "Google is bringing AI answers to a billion people this year, but generative AI requires much more energy than traditional keyword searches." 

And I wanted to make sure to feature this one just because like, it's worth knowing, not that we can do that much about it, except using a different search engine, right? You don't have to turn this on. Google has turned it on for everyone.  

Um, so, "What medications cure headaches? Should you kill cicadas? If you Google these questions in the U.S., answers may pop up directly at the top of the results page, products of quote 'AI overviews,' the latest artificial intelligence offerings from the search engine giant. Google, which handles more than 90 percent of internet searches worldwide has promised to bring this feature to 1 billion people by the end of 2024. Other tech companies may follow suit." 

Um, all right. And then I just want to say, here's a quote from Sasha Luccioni. So, "When compared to traditional search engines, AI uses, quote, 'orders of magnitude more energy,' says Sasha Luccioni of the AI research company Hugging Face, who studies how these technologies impact the environment. It just makes sense, right? While a mundane search query finds existing data from the internet, she says, applications like AI overviews must create entirely new information. Luccioni's team has estimated it costs about 30 times as much energy to generate text versus simply extracting it from a source."  

Alex Hanna: Wow.  

Emily M. Bender: So, we don't need that and really the only thing you can do about it is not use Google at this point. 

Alex Hanna: That's been my strategy for many other reasons, but unfortunately, they still are a monopoly.  

Emily M. Bender: Yes.  

Alex Hanna: Yep. All right. So this one is, wow, I have not seen this. It's by the, from the Bezos Earth Fund. And I know that's gonna be great. So it's their grand challenge. "The Bezos Earth Fund is exploring new ideas for multiplying the impact of climate and nature efforts using modern AI. The first round of the grand challenge will focus on sustainable pattern--proteins, power grid optimization, and biodiversity conservation, in addition to embracing visionary wildcard solutions for climate and nature." Oh gosh. Yeah. 

Emily M. Bender: It's like we don't have to worry about the AI harming the environment because we're going to use AI for good, see? 

Alex Hanna: Yeah.  

Emily M. Bender: Yeah. All right, moving on. Um, oh, this one is delightful. So this is a, um, I think Fediverse thread from someone called, Foone? Fo-One maybe? Um, and it was quite a journey.  

So, it starts with "Ugh. I picked up a shitty NUC from eWaste and it had a label on it for an AI company. Ah, another startup that burnt out trying to build some silly AI project on crap hardware. I wonder what they did. I check their URL. Ah, healthcare. Great, great."  

And then next toot. "Also, I hope they wiped these hard drives."  

Next. "But given the state of them when they arrived at eWaste, no they did not. When you see a gaylord stacked high with NUCs and half of them still have USB fans attached, you know these were all just yanked off a shelf. No one wiped these."  

"I have now stuck the hard drive in my imaging box. It turns out it was in service as of June. And this one has log errors about the sensors in the bathroom and bedroom. This was used."  

Alex Hanna: Oh my gosh, this is awful.  

Emily M. Bender: It keeps going. This is from August 19th by the way.  

"Hey, fun fact, this was used as part of an Alexa Google Home type thing. This is the "cloud" half. It's in the part sitting in a warehouse somewhere. It turns out every time the customer asked for something from the smart assistant, the WAV file was sent to the cloud box. Where it is still stored, and now I have 11,000 WAV files."  

"God, the logs are full of errors about assorted video streams failing. So this thing was connecting to something which had cameras, like I can tell which room of the house failed. Now I don't think there's any video stored on this device, but keep in mind the fools that made this thing fill up with WAV files, they also designed the video streaming part. Where are those videos stored and how safe are they?" 

"Or maybe the fools who dumped all the NUCs from their entire 'AI remote healthcare' in the recycling without yanking any drives are just somehow really good at knowing how to obscure their S3 buckets. Assuming their S3 keys aren't just saved in this hardware somewhere."  

"Jesus Christ, this isn't the only time this month I found an IoT device and checked the file system contents and it's got their private Git repos on it." 

"And now I can email the lead developer. Or just commit to their Git repo, I guess." 

Alex Hanna: Holy shit.  

Emily M. Bender: "Okay, so the good news is they don't just have S3 keys laying around in plain text. The other good news is that they don't have a secrets manager."  

Alex Hanna: They do, they have, yeah, yeah.  

Emily M. Bender: "The bad news is that they rolled their own secrets manager. The extra bad news is that I have the source for said secrets manager. And the extra, extra bad news is that it has to decrypt those keys without external input, meaning that I have all the parts here to pull out their S3 keys."  

"Oh, hey, this thing authenticates to some of their servers, which are still up, even if the company might not be, this is unknown at the moment, over SSH! Using keys kept in the same home-rolled vault thing!, so I can SSH into their servers now!" 

Alex Hanna: Yeah, so let's just, just to, just to, just to cut you off, you could go on this thread forever, but like, and also, Christie's like, "I'm not quite sure what's going on." So from a technical perspective, so a NUC is, what is a NUC? I mean, it's not a NUC if you buck, (laughter) but like, it's a, it's a, uh, it's a, it's a small computer. 

I'm looking up NUC. Uh, it looks like it's a very small computer. Um, and, and basically it looks like they have these in some sort of like an array. Uh, you know, like, the whole securing of IoT, internet of things, or my favorite take on it, the internet of shit. Um.  

Emily M. Bender: Yeah. (laughter)  

Alex Hanna: Often very poor security standards, not kind of like--they, they kind of did this on their own. An S3 bucket is like, uh, is the Amazon cloud storage. And basically if they're storing video on cloud storage, but their passwords are pretty much exposed, um, that pretty much means their security, as you can imagine, is absolute trash. Um, yeah.  

So, I mean, this is, this is for an AI thing, but you know, the same pretty much goes for any kind of internet of things, uh, situation, where there's absolutely terrible security across the board. 

Emily M. Bender: What I'm super worried about here is that you have all of these startups that are trying to like cash in on the AI hype wave who are like, yeah, we can collect this data and do this thing with it. And they have no clue what they're doing. So not only are they doing all the usual bad AI stuff, but they're also storing all this data in completely insecure ways. 
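To make the point in Foone's thread concrete: a home-rolled secrets manager that has to decrypt its secrets with no external input necessarily ships the decryption key on the device, right next to the ciphertext. Here is a minimal sketch of that failure mode, assuming nothing about this company's actual code; the key and the stored "secret" below are hypothetical stand-ins.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# What the recovered device image effectively contains: an embedded key and
# the encrypted secrets, side by side. (Both values here are hypothetical.)
embedded_key = Fernet.generate_key()                      # stand-in for the key baked into the image
vault = Fernet(embedded_key)
stored_ciphertext = vault.encrypt(b"FAKE-S3-ACCESS-KEY")  # stand-in for the stored credential

# Anyone who images the drive can replay the device's own boot-time logic
# and recover the "protected" secret in the clear.
recovered = Fernet(embedded_key).decrypt(stored_ciphertext)
print(recovered.decode())
```

The same logic holds no matter which encryption library a vendor uses: without some external input at decryption time (a passphrase, a hardware security module, a remote key service), possession of the hardware is possession of the keys.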

Um, so, all right. Thank you for letting me go on about that for a little bit. Um.  

Alex Hanna: Yeah.  

Emily M. Bender: We got a few more minutes, so you want to take this one?  

Alex Hanna: Sure. This is CNBC, by Lora Kolodny, um, the title, "Elon Musk's xAI accused of worsening Memphis smog with unauthorized gas turbines at data center." August 28, 2024.  

And the key points: "Environmental advocates say Elon Musk's xAI is running gas turbines to power its data center in Memphis and emitting air pollutants there without authorization."  

Can you scroll down a little bit just so I can see the rest of the key points? Oh, yeah, I'm just gonna read the key points.  

"Memphis already has a major smog problem, and the county where the xAI center is located received an 'F' grade from the American Lung Association for its poor air quality." I imagine that's Shelby County, as a former Memphis resident. "Musk said in July xAI begin its training its AI models at the facility using 100,000 of NVIDIA's H100 processors." Oof. Yeah.  

And I think, uh, I think, uh, Sasha might also have been quoted in this one. 

Emily M. Bender: Yeah. All right. I'm going to stop sharing that one, and we are not going to get through all of them, but I've, I've picked which one I'm going to take us to, or which set I'm going to take us to last. Um, there's some important things in here and also a really great ending, uh, palate cleanser. 

So from Jacobin by Alex N. Press, uh, June 16th, 2024, "Workers deserve the right to frown on the job." Um, "Technological advances are giving employers the ability to monitor workers more closely than a human manager ever could." And this is about some tech that's basically making sure that you're smiling while you're working.  

Alex Hanna: Yeah. There's a great report that coworker.org released on this called, um, uh, little, it's called "Little Tech." It's kind of on, um, bossware. Um, so I'll drop it in the chat.  

Emily M. Bender: Excellent. Okay, moving along. Um, all right. Also on surveillance, uh, "Network Rail secretly used AI to read passengers' emotions." No, you can't. This is Tuesday, June 18th. Mark Sellman, um, in The Times, as in, I think, The Times of London. 

Um, so please don't, you can't, um.  

Uh, oh, this one. Uh, this is from Australia. "An AI chatbot was blamed for psychosocial workplace training gaffe at Bunbury Prison." So this was, uh, so in short, a training company says it used an AI chatbot to generate a fictional sexual harassment scenario and was unaware it contained the name of a former employee and alleged victim. 

Alex Hanna: Good lord.  

Emily M. Bender: "Western Australia's Department of Justice said it did not review the contents of the course it commissioned." So basically they used the chatbot to create this course and it pulled on its training data and it pulled out a real case that pertained to somebody in the same organization.  

Alex Hanna: Lord. All right. Okay. Quick, so, this is from the "Advanced Research and Invention Agency." Oh--  

Emily M. Bender: This one's not worth the time. Okay. Take a-- 

Alex Hanna: Skip it, skip it, skip it. Okay. "77 percent of employees report AI has increased workloads and hampered productivity, study finds." This is by Bryan Robinson, who's a PhD. Uh, and I lost it.  

Emily M. Bender: So basically it said, people said it makes your work harder. 

Yes, it does.  

Alex Hanna: I think that came out a while ago. Okay. "Using the term artificial intelligence in product descriptions reduces purchase intentions." In, uh, Informed, um, uh, Consumer. This is coming out of the news, uh, from, I believe, Washington State University, by Eric Hollenbeck. Um, yeah. Oh, this is Pullman. 

Oh, is Washington State in Pullman? Oh, we just played a team from them on Saturday. Anyways. Okay.  

Emily M. Bender: Okay. So yeah, so basically if you call it AI, people aren't going to want to buy it. Go, people. And then finally the last palate cleanser. Um, this is from our producer, Christie. So this is a Reddit thread. "Google AI for 'is tripe kosher?' claims 'it depends on the religion of the cow.'" 

So here's, here's the AI overview answer. "Whether tripe is kosher depends on the religion of the cow." (laughter)  

Alex Hanna: That's, that's. I'm like, this, this is great. I just want to, I just want to think about this a little more. Cause it's like, (laughter) but wait, wait, what about the wait? What if, hold on. I'm like, I'm thinking too much about this. 

Um, does the, will the cow agree to being butchered if it's like, you know, if it's, if you have a Jewish cow and a Jewish butcher. So a Jewish cow and a Muslim butcher and a Christian consumer walk into a bar, that's all I got for that joke.  

Emily M. Bender: And the cow, the cow says, I'm not kosher, I'm not halal. And by the way, Christian friend, you're a vegetarian, right? 

 (laughter)  

Alex Hanna: The bar says, what'll you have? What do you, what will you have? But twist, the bartender is an AI bot. Okay. This is just a fever dream right now. I'm sorry.  

Emily M. Bender: All right. And construction's about to start up again around me. So we should wrap up.  

Alex Hanna: Okay. All right, cool. That's it for this week. Thanks for sticking in our hellish domain. 

Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Christie Taylor. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify and by donating to DAIR at DAIR-institute.org. That's D A I R hyphen institute dot org.  

Emily M. Bender: Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.tv/DAIR_Institute. Again, that's D A I R underscore Institute. I'm Emily M. Bender.  

Alex Hanna: And I'm Alex Hanna. Stay out of AI Hell, y'all.
