
Mystery AI Hype Theater 3000
The Anti-Bookclub Tackles 'Superagency', 2025.03.03
Emily and Alex read a terrible book so you don't have to! Come for a quick overview of LinkedIn co-founder and venture capitalist Reid Hoffman's opus of magical thinking, 'Superagency: What could possibly go right with our AI future' -- stay for the ridicule as praxis. Plus, why even this torturous read offers a bit of comfort about the desperate state of the AI boosters.
References:
AI and the Everything in the Whole Wide World Benchmark
Militants and Citizens: The Politics of Participatory Democracy in Porto Alegre
Fresh AI Hell:
Parents rationalizing exposing kids to AI
Underage, sexualized celebrity bots
CalState faculty union opposes AI initiative
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' comes out in May! Pre-order now.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Bluesky: emilymbender.bsky.social
- Mastodon: dair-community.social/@EmilyMBender
Alex
- Bluesky: alexhanna.bsky.social
- Mastodon: dair-community.social/@alex
- Twitter: @alexhanna
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
Alex Hanna:Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.
Emily M. Bender:Along the way we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, professor of linguistics at the University of Washington.
Alex Hanna:I'm Alex Hanna, director of Research for the Distributed AI Research Institute. This is episode 52, which we're recording on March 3rd, 2025. As we teased on social media, this is a very special episode that broke both me and Emily's brains beyond repair. We picked a book, a very horrible book, read it all, and are ready to tell you exactly why it is so terrible. A tale of magical thinking about what this tech can actually do, some brute-forcing of made-up norms about how we, quote, "all need to embrace AI," and a little bit of badly disguised desperation from a guy who's already invested heavily in the least plausible applications of generative AI.
Emily M. Bender:And what book was it? Who is this guy?
Well, "Superagency:What could possibly go right with our AI future" by none other than LinkedIn co-founder and venture capitalist Reid Hoffman and his co-author Greg Beato. Congratulations to Matthew Henderson, who guessed correctly when we invited folks to try to figure out which of the many horrible AI books we were eviscerating today. Also, congrats to anyone who guessed it would be some CEO doing self-promotion. Kind of too easy, but we'll hand it to you. As Alex likes to say, we both took major psychic damage from our speed run of this book. My notes alone fill up 15 pages, so we will definitely not be able to rip apart everything, but I'm ready for that catharsis. Let us begin. Are you ready, Alex?
Alex Hanna:I feel like I need to roll for initiative. Nope, no joke. 19 on the die. Let's do it.
Emily M. Bender:All right. So, um, we have the book up on Kindle and I feel like we should say just a little bit about how we experienced this book. Um, we both heard it through bad text-to-speech systems because audio was the only way to get through it. In my case, it was all of my running and walking time for the past week. Um, and this is also, I ended up having to go to Amazon because there was no actual audiobook release. And so I had to do it through Kindle on their text-to-speech. And Alex, you got to do it while touching grass, right?
Alex Hanna:Well, yeah. I listened to it yesterday and there was such a bizarre thing on Kobo where it had an audiobook, and the way the audiobook was described was really weird, because it was like, oh, we had someone, we had someone read it, but also there was like AI, like, technology. I didn't really understand what it was. Maybe, some guy, like, you know, they kind of did a style transfer to this guy's voice. Bizarre stuff, but of course very much in the scope of, you know, what they want to do with this book.
Emily M. Bender:My favorite thing was that the text-to-speech that I was using, the, the token AGI is in there only once, and it struggles with all, like, acronyms and stuff, but it came out with "Aggie" for the AGI.
Alex Hanna:I love that.
Emily M. Bender:Alright, so here's their website, um, and a couple of things to note here. They've got these, you know, bestseller stickers. Um, but also "You can order a copy of this book with a personalized book cover featuring you," which like, who does that?
Alex Hanna:It's like that. It's like you are the actual person who wrote the book. I don't know. Or it's like he, I don't know. Bizarre.
Emily M. Bender:Yeah. Um. And alright, uh, the other thing that I wanted to note down here is there's other books, um, which seem to all be by Hoffman, not Beato. And the absolute worst one here is this one, um, that clicked through to Amazon. It looks like it did.
Um, "Impromptu:Amplifying our humanity through AI," by Reid Hoffman "with GPT-4."
Alex Hanna:Yeah.
Emily M. Bender:Why?
Alex Hanna:Terrible.
Emily M. Bender:Why would I do that?
Alex Hanna:Um, oh, so let me, oh, so actually Abstract Tesseract notes that actually, if you look at the site, it says, uh, "narrated by Scott Wallace and converted with AI to Reid Hoffman's voice." Sure.
Emily M. Bender:Great.
Alex Hanna:There's, it's actually great that they do it 'cause there's some parts of it that sound so, uh, robotic. I'm just like, oh yeah. It's about right. Yeah.
Emily M. Bender:I'm just realizing I forgot to close my window, so I am going to take care of that.
Alex Hanna:I hear, I hear so much screaming. It's great. Uh, so okay, let's get into the, the meat of this. So yeah, where do we wanna start? Well, let's, let's, let's begin from the beginning because--
Emily M. Bender:Well, no, we have, we have to talk about the authors and--sorry, talk about broken brains.
Alex Hanna:Yeah. Talk about the authors. Totally.
Emily M. Bender:Yeah. So, um. This is, it reads very funny because it's written by Hoffman and Beato, but then you get into it and there's all these first person singular pronouns. All this "I" and "me" and "my," and eventually I decided what's going on here is that Beato is effectively a credited ghostwriter. And, um, you've got Hoffman therefore, basically getting to say I and me and my, when he's talking about his own stuff, um, but then when they're doing, like, there's a few "we"s occasionally, and I kind of had the impression that basically anytime in the book that there's actual, like, citations to things in, you know, the literature or, or news, that's Beato. And initially I wanted to say, "Greg, blink if you're okay, do you need help?" And then I got to the acknowledgements and
Alex Hanna:Yeah.
Emily M. Bender:I, I lost my sympathy. Um, where, where are they here? Okay, so.
Um, here's Beato's acknowledgements:"Along with the endless conversations I had with Claude, ChatGPT and Gemini while drafting this book with Reid, I also benefited from--" And then he thanks some people.
Alex Hanna:Yeah.
Emily M. Bender:It's like, okay, fine. I'm not sympathetic anymore.
Alex Hanna:Totally.
Emily M. Bender:But also, do you wanna share who Hoffman is thanking?
Alex Hanna:Yeah. Okay. So yeah, Mustafa Suleyman, so we mentioned him a lot on this podcast. Sam Altman, Kevin Scott, Greg Brockman. Satya Nadella. Eric Schmidt, Bill Gates, Demis Hassabis, Fei-Fei Li. We've talked about all those people. Rob Reich is an interesting person. He is a Stanford professor. Um, I'm just gonna go Erik Brynjolfsson, we've talked about on this--
Emily M. Bender:Also at Stanford.
Alex Hanna:Also at Stanford. Uh, wrote "The Second Machine Age," very techno-positive. Uh, Dario Amodei at Anthropic. James Manyika at Google. Um. Aza, Aza Raskin, who is, like, uh, um, what is it? Um, that's, uh, what's his name? Tristan Harris's buddy. Mm-hmm. Um, uh, Joi Ito, who I think--
Emily M. Bender:Isn't he famous for the whole, um.
Alex Hanna:He was at, he was at, he was at MIT Media Lab and he stepped down 'cause of Epstein stuff.
Emily M. Bender:Exactly. Yeah.
Alex Hanna:Yeah. Blaise, our, our, our fun, uh, Agüera y Arcas, our friend, we started this podcast. Toby Ord and Will MacAskill.
Emily M. Bender:Wait, wait, wait. We have to clarify. Blaise is not our friend. Blaise wrote the artifact that we got so heated about that we started this podcast.
Alex Hanna:I hope the sarcasm was apparent in my voice as the origin. Ord and MacAskill, longtermist, um, uh, entrepreneurs. Um, who else? Anne-Marie Slaughter's kind of weird to see here 'cause actually I kinda like her, and then Ashton Kutcher. You know, that Ashton Kutcher of That '70s Show and whatever. I don't know much about cultural stuff, but like, like, cult TV. But like, yes. If you go to Ashton Kutcher's site, I think he signed, on the, um, the, uh, the Amazon page, his title's like father, actor, dad--
Emily M. Bender:And investor.
Alex Hanna:Investor, yeah. Very cool.
Emily M. Bender:Yeah. Yeah. And he also, of course, has a blurb for this book. Um, yeah. Yeah. Oh, Faster And Worse in the chat says, "Ashton Kutcher is also a crypto guy. Well, that tracks."
Alex Hanna:No surprise.
Emily M. Bender:Yeah. Okay, so now we can begin at the beginning. So let, let me get us to the beginning. You know, "always read the footnotes" includes always reading the acknowledgements, right?
Alex Hanna:A hundred percent.
Emily M. Bender:Yeah. So. We are gonna start at the introduction here. Um, would you like to read the first few words Alex?
Alex Hanna:I'm gonna read this. So this is, this is literally, this is a meme. Because they literally started with, "Throughout history, new technologies have regularly sparked visions of impending dehumanization and societal collapse." I'm, ugh, just, I have, I have secondary psychic damage and trauma from, from like undergrad teaching. Don't start something with 'throughout history.'
Emily M. Bender:Throughout history. Uh, yeah. Okay.
Alex Hanna:I just wanted to start there. Yeah.
Emily M. Bender:Yeah. Okay. So now, so, and the introduction is not chapter one, there's also chapter one. Um, and in chapter one there's a couple things that we definitely wanna get to. The first part is actually under "Credit where credit is due." Um, but this is also just, I hate, I hate the chapter titles. Chapter one, "Humanity Has Entered the Chat." Who, whose chat? What, like, no, we're here, right? Okay. Um, so I wanted to find, here we go. They actually have a pretty grounded discussion of LLMs early on, um, which just seems to get entirely forgotten in the rest of the book. And this is why I was like, Beato, are you okay? Right? So, um, it says, "While developers apply various techniques to mitigate these issues, the fundamental limitation that underlies them all remains the same: as of yet, LLMs have no real capacity for common sense reasoning, no lived experience, and no grounded model of the world. They're always just predicting the next token in a sequence based on patterns they've learned from their training data." So. That's true, and they've said it and they know it, and yet we're gonna get all of this magical thinking about what this tech can supposedly do despite this being in the very same book.
Alex Hanna:I was frankly surprised that that was in here. I thought they were going to, I'm like, oh, this might be reasonable. But you're absolutely right, Emily. It goes pretty much out the window immediately because throughout the book they start assigning agency to the technologies, they, they start, um, you know, saying what it's going to do autonomously. They start really using anthropomorphizing language and you're like, wait, I thought that we said earlier on--okay.
Emily M. Bender:Yeah, yeah. Exactly. Exactly. All right. So the next one that I'm gonna take us to, you'll see there's more highlights here than we're actually gonna get to because there's just so much bullshit and we only have an hour. Um, but I think this is the one. Yes. Yeah. Alex, would you like to read this paragraph in yellow?
Alex Hanna:Yeah, totally. So, um. So this is, the context of this is basically, you know, this is where the, uh, the chat begins. So he is talking about OpenAI, and so he says, uh, or they say, um, I should say, "Finally there was a, a highly accessible, easy to use AI tool that explicitly worked with you and for you rather than on you." So he is really comparing this to a lot of predictive, um, like, predictive, algorithmic things. And so it's saying, like, oh, we're moving from, like, facial recognition to, like, something that works with you, which is a bizarre line to draw. Um, because generative AI still works on you in many scenarios. But setting that aside, "This marked a critical shift in AI development and human empowerment. It's critical because it puts individual users at the heart of the experience and, just as important, gives them opportunities to have experiences that they've sought or designed. Instead of developing this new technology behind closed doors until a small cadre of experts had decided that it was performing in sufficiently effective and perfectly safe ways, OpenAI invited the public to participate in the development process. It described this approach as 'iterative deployment'--" And I wanna highlight this because this phrase iterative deployment is, like, the, um, what is it? It's kind of like the deus ex machina of the book. It's, it's this thing that he keeps on going back to, this is like the thing that makes OpenAI so magical. And iterative deployment is effectively, if I was gonna put it in different words, like the wisdom of the markets.
Emily M. Bender:I think it's perpetual beta testing, is what it is.
Alex Hanna:It's, it's, yeah. It's both, that's per-- it's beta testing. It is experimenting on people with these tools. It's also this, like, well, OpenAI is putting this out and, like, the market's deciding, like, this is, this is really what people want. And like, it's, you know, it's a real through line. I, I've never read anything by Reid Hoffman before, but what I knew of him was that he was, like, of course, co-founder of LinkedIn. Uh, I also knew he was, like, a major Democratic donor. Um, but then, like, you read this and it's infected from, from head to foot with, like, this free market ideology, and that's how we kind of view technological development. Um, so yeah. This is, this is wild to read.
Emily M. Bender:Yeah, absolutely wild. And the idea that, that this is OpenAI inviting the public to participate in development. Not in the least, right? And I liked how it was like "developing this new technology behind closed doors," as if that's not what was happening. And as if also OpenAI were somehow, out of the goodness of their hearts, giving the public early access rather than, you know, sitting on it. And, um. I guess I would say they had a market or, or sort of a, a financial reason to put it out there so that people would get excited, but in fact they lose money on every query. So I don't really know which way that's going.
Alex Hanna:Yeah. Some really good stuff in the chat too, 'cause there's some people that are saying, so, um, so Triceratops90 says "It's free labor slash debunking slash rating too." And then, um. Faster And Worse, great username, says,"Iterative deployment equals 'throw shit at people and see what sticks.'" And then Elizabeth With A Z, first time chatter, hey, says "It seems like putting a really positive spin on cutting a QA department." Yeah.
Emily M. Bender:Yeah. Yeah, exactly. Um, okay, I'm gonna keep us moving 'cause we've got I think 11 chapters here. Chapter two is called "Big Knowledge." Um, and of course there's all of these references to Orwell's 1984, which for some reason the text-to-speech I was using consistently rendered as "one thousand, nine hundred and eighty-four".
Alex Hanna:That's that's great. I love it. You should do that all the time.
Emily M. Bender:Yeah. Whew. It was a little, little exhausting. Um, and in this, there's just some ridiculous notions of privacy. So I'm gonna take us to 43 and read the thing. And then Alex, you can, I know you wanna riff on this, so I'm gonna, um, oh, this is not the usual thing that I'm manipulating us through. I don't love the Kindle app. Here we are. Okay. So. "'Everyone has a right to be left alone,' as future Supreme Court Justice Louis Brandeis put it in an 1890 Harvard Law Review essay that helped shape 20th century conceptions of privacy. But as a member of society, and especially as a member of a network society, it's not always the most productive right to organize your life around."
Alex Hanna:Just incredible. I mean, it's just, uh, and the, the, this whole chapter is fascinating. Like, I listened to this chapter with kind of rapt attention, um, because this chapter was all about basically how, uh, we were afraid of Big Brother, but instead we have, um, big, big tech, but that's okay. Uh, um, basically, like, that you've given up enough freedoms, uh, and enough privacy to have as much personalization. Um, it's literally like personalization. Yeah. You have, uh, a subhead in our notes here, Emily, "Personalization as freedom," on the prior, the, the prior page.
Emily M. Bender:Yeah, it's--right, so this is, you have a right to privacy, but you'll be better off if you don't use it. You'll be freer if you don't, you know, sort of hold your privacy close. He also talks about, um, platforms as scaling trust. Yeah. Um, and I'm not gonna look for this in the text, but in the notes I've copied it over, "At heart, then, what LinkedIn--" And oh boy, are there LinkedIn anecdotes in this book, but, "--LinkedIn and many other successful internet platforms do is scale trust. Think about how eBay, PayPal--" Sorry. PayPal. "--Airbnb, Uber and Lyft, to name just a few, use various innovative trust mechanisms to enable a broad new range of interactions, transactions, and behaviors on a global basis." And I'm like, I guess he's talking about helping strangers trust each other to enter into some kind of commercial transaction, but these are not sites that I would consider trustworthy.
Alex Hanna:Yeah, this is, this is, he's not original or they're not original in kind of coming up with this, this idea of scaling trust is like a thing. It's, it's a bit of a management meme that I've seen a lot, uh, just in, I remember reading a lot of early internet literature in the like 2000s and they're like, well, we need to, like, we're operating on a, a global scale. We need to scale trust. I'm like, is that what, is that what tr--what?
Emily M. Bender:Yeah.
Alex Hanna:Yeah. Um, and yeah--
Emily M. Bender:Connected to that. So connecting to the Uber and Lyft, um. He's talking about, so, "While the internet has certainly created new opportunities for fraud and disinformation, its larger story is how it's functioned as an unprecedented trust machine." Yeah. Um, "By 2012, we were jumping into a random Toyota Corolla with a pink mustache on its grille after a night on the town to get a safe ride home."
Alex Hanna:It's, it's really wild that, uh, that can be written with a straight face. I mean, so I did quick searches, and in a New York Times article--I can't see it all, it's paywalled--but it's about 2019 data. Uber admitted, they said there were 3,045 sexual assaults reported in US rides in that year, and, uh, there were about a thousand in 2022. So, I mean, you know, like, these are, okay, so you're, you're, you're scaling trust. But, but again, for whom? You know, to what degree? I mean, these are, these are things that are gendered, raced. Uh, and gender and race make some appearances here. Um. But mostly in talking about how we're gonna figure out bias, effectively. Um.
Emily M. Bender:Right, right.
Alex Hanna:And so it's just really, um quite absurd and like really thinking about what it means to talk about trust, privacy, and surveillance at the margins, which of course this is not of interest to, to, um, Hoffman and Beato.
Emily M. Bender:Right. And I, I think Abstract Tesseract has won the chat for this episode already.
Alex Hanna:Oh yeah.
Emily M. Bender:"It's only surveillance if it comes from the surveillance region of, of France. Otherwise it's just sparkling trust scaling."
Alex Hanna:Yeah. Love it.
Emily M. Bender:Um, all right, so there's a couple other things in this chapter. Um, one is he goes on and on about, um, this fellow named Packard from the 1960s. Um, and so. Uh, Packard, who was opposed to this idea of a national data center. So apparently the US government was thinking about putting all of the data in one place on one set of computers and got massive pushback and didn't do it. And some of that pushback came from this guy named Vance Packard, who, um, according to, um, Hoffman here, would've had some opinions about LinkedIn too. So, "Had something like LinkedIn existed in the 1960s, Vance Packard would've probably found it unsettling. It is, after all, not just a national data center, but a global one. It amasses a huge amount of information into a single repository and makes it easy to search. Much of this information would no doubt strike Packard as fundamentally private and thus problematic, even if posted voluntarily. Much of the data the government and private sector players aggregated in the 1960s was voluntarily disclosed as well." And this to me just sort of like erases the distinction between government surveillance that you have no choice but to participate in and something like LinkedIn, which at least is still a choice. Um, and also it's, it's icky the way, this happens multiple times in this book where he basically says, well, so-and-so would think this about it.
Alex Hanna:Yeah. Yeah. There's also a bit in here. Um, I mean, this is also, this is, this is, he hasn't gotten to it yet, but there's basically, like the, the China boogeyman is, is kind of threaded throughout. And the national security, well, we'll get more--
Emily M. Bender:We'll get there. Yeah. Um, one, but (crosstalk)
Alex Hanna:There's one part I wanna read just because I read it and died a little inside. So basically they're talking about the amount of data that's being used and why we need to, like, have AI to read it for us. And he says, "What this means, of course, is that even if you catch every episode of Kara Swisher's podcast and read most of Matt Yglesias' tweets, you're basically living in your own personal dark ages, ignorant of virtually all global knowledge." Listen, if I listened to every episode of Kara Swisher's podcast and read all of Yggy's tweets, I would die immediately. I would be eviscerated. I would like, ugh, like, what? And I love, I love and hate that these are the two people who are his reference as smart people.
Emily M. Bender:Well, is that what it is?
Alex Hanna:That's, that's how I read it. It's like, Kara Swisher, brilliant tech journalist, knows, you know, the heart of the industry. Yggy, you know, hard on federal policy. Like, no, these are two fucking, like, terrible ass people, bad journalists who do nothing but suck up to power. And Yggy has been, like, absolutely red-pilled by being online too much.
Emily M. Bender:Yeah. And red pill dispenser too, right? I mean, that, yeah. So, and, but this whole thing here throughout is also like, um, we can't possibly deal with all the data that's being produced. Um, and so we need AI to do it for us. And there's, there's no, and we'll get to a library thing again in a, in a moment. There's, there's absolutely nothing in here about what does it mean to actually curate knowledge and manage knowledge and manage access to information. It's just like, look big numbers, too much. We need AI.
Alex Hanna:Yeah.
Emily M. Bender:Okay. That takes us to chapter three'cause we're gonna keep going."What Could Possibly Go Right?" Um, and this, I think, is, is this where he introduces the, the, um-- oh, did we miss the introduction of bloomers and all that stuff?
Alex Hanna:Oh, we might have mi--yeah, I think we, that was in chapter one. That--
Emily M. Bender:Chapter one.
Alex Hanna:--page 26. The "Doomers, gloomers, zoomers and bloomers."
Emily M. Bender:Yeah. All right. Um, so fine. We gotta keep going. Yeah, there's, there's, there's just too much bullshit here.
Alex Hanna:In a word, the doomers are X-risk. Gloomers are probably us. Zoomers are, um--
Emily M. Bender:Effective accelerationists.
Alex Hanna:Effective accelerationist, I guess. Even that, yeah. I wouldn't even say that. Like, but maybe. And then bloomers are, you know, uh, I guess him. Um, although bloomers, I think he says, like, he does say like "giant leap forward," and I'm just like, you're appropri, you're appropriating a lot of Maoist imagery for someone who's writing an anti-China book. But yes.
Emily M. Bender:Yeah. I miss, I missed that. Yeah. Um, yeah, so, so the gloomers I think is meant to include us, but then he completely mischaracterizes what we would be asking for in terms of, of, um--and the other thing is that, like, later on in the book, the doomers and gloomers just get merged. These are the people who want the 'stop it all' regulation, according to him.
Alex Hanna:Yeah.
Emily M. Bender:Um, okay. So, oh, um, okay. So this one starts with, um,"Internet of Shit, But in a Good Way." So, "A new light ages? Because these days you can even buy AI toothbrushes that optimize your brushing style through real-time coaching."
Alex Hanna:Geez, yeah.
Emily M. Bender:Do not, do not need that. Um, and then of course it comes back to excited about OpenAI's iterative deployment approach. Um. Then he introduces this terrible term 'problemism.' So, "But technology is itself one of humanity's most proven levers for creating positive change at scale. That's why solutionism's inverse, problemism, is a real issue we face too. Problemism is the default mode of the gloomers, who view technology in general as a suspect force, anti-human, tinged with the new car reek of capitalism." I have to say I like "new car reek of capitalism." That's apt.
Alex Hanna:It's pretty good. Yeah. I mean, it's, but I mean, yeah. Well, capitalism, I feel like, doesn't have a new car re-- I mean, hey, look, if you're in the chat, or if you're listening to this online, hey, if capitalism has a smell, get at us in the comments. I think capitalism smells like, uh, smells, smells like, um, the mitochondria bacteria that feed on the pond scum. It's like Hugh Grant's line in My Best Friend's Wedding, uh, when describing what Julia Roberts does to sabotage his wedding. Anyways.
Emily M. Bender:All right. Yeah, I mean, thing is, I personally hate new car smell, even though I don't mind getting to be in a new car. I hate new car smell, so that, like, really, really wraps it up for me. Um, okay. Um. So talk about solutionism. Um, "But AI can also help us address our most pressing global challenges. Whether we're trying to make progress on sustainable energy production, healthcare, education, or cybersecurity, technology will invariably contribute 30 to 80% of any effective solution."
Alex Hanna:Yeah.
Emily M. Bender:Fact check please.
Alex Hanna:Well, like where do those numbers come from? That's just, it's--
Emily M. Bender:Citation needed.
Alex Hanna:--absolutely made up shit. You know, like, and it's, you know, I mean it's very much this kind of idea where you're just pulling anything outta anywhere, 'cause numbers don't really matter here even though he swears by, uh, data. Um, but yeah.
Emily M. Bender:Yeah. And this book has footnotes and there's no footnote on that claim.
Alex Hanna:Yeah. Yeah.
Emily M. Bender:So you wanna do the next one there?
Alex Hanna:Yeah, totally. So, you know, he, he talks about, like, issues that arise from AI, but then the kind of imagination for the good things is really absurd and really not, um, not, yeah, just, like, anyways. So one of them is, "Think for example, of an AI system that learns how to interpret and translate animal vocalizations, enabling humans to understand the needs of endangered species in ways never before possible and thus leading to more effective interventions to protect biodiversity." I, I, I was, I was in my garage at this point, and I just screamed. I just screamed, "What the fuck?" 'Cause we don't, we don't need that.
Emily M. Bender:No, exactly. Exactly. It's like, okay, we know what's harming endangered species. It's habitat loss, it's climate change, it's loss of what they're going to eat. And we don't-- like, if we could get, you know, the whales to say,'Where's the salmon?' Like, right. But also this idea that somehow you just put the whale song into the AI system and the AI system will tell you what, like that's not how that works. Right? You have to have some other signal as to what this could mean. You can't just map it from the sound. Anyway.
Alex Hanna:Yeah. Um, producer Christie says, "It's not like the animals could tell you about their habitat or the deep imbalances in predator-prey relationships."
Emily M. Bender:Right! Exactly. Ahh. And, and as always, the issue here is political will, and not lacking tech. But, okay. So there's this long, extended, terrible thing about chatbots for, uh, psychotherapy that we have to dig into a little bit. So he, um, starts with--the heading is, "The existential threat of the status quo," and a couple paragraphs in, he says, "It's possible even these optimistic takes--" On using ChatGPT for therapy. "--might underestimate how full this glass might be. Yes, using LLMs in mental health care may involve risks that no one should cavalierly dismiss. But mental health disorders and challenges have massive negative impacts on global wellbeing. An ongoing shortage of mental health professionals means that hundreds of millions of people go untreated. More automated services could dramatically expand access to care. It's also a field that is rooted in evidence-based practices--"
Alex Hanna:Oh Lord. Yeah.
Emily M. Bender:"EBPs," which my text-to-speech was saying "eebs".
Alex Hanna:That's great.
Emily M. Bender:Which was wonderful. Um, and so then there's this long thing about the, the Koko mess, and he is trying to defend Morris. Um, and then it gets, it gets absolutely absurd. So there's not enough access. At no point does he talk about what we might do as society to address the drivers of poor mental health. You know?
Alex Hanna:Yeah, yeah.
Emily M. Bender:Housing, poverty. Right? Um, or talk about putting more resources into mental healthcare so you could train more professionals. No. Um, we get, uh, this complaint that Woebot and Wysa use NLP, but they're not chat bots, so they're not flexible enough. And so instead, what you really want, um, and this actually, here's his words again, "What you want instead is an intelligent, emotionally responsive connection, available to you immediately for however long you require it. For most people, another actual human may be the ideal help provider here. But what happens when no humans are available or can't stick with you for as long as you feel you need support? Or what if you like the idea of a conversational partner who will never inadvertently gasp at a disclosure you make or yawn at an inopportune time." But the best part is this last little bit here. Do you wanna do it, Alex?
Alex Hanna:Uh, yeah. So he says, uh, so, "How does care change when you can access a clinically tested and proven therapist whenever you want, for two minutes or two hours, for a flat monthly fee of $19.99?" I mean, but you can't.
Emily M. Bender:Yeah. And I love how it's, of course, that's the business model too. Right?
Alex Hanna:But then, then he goes, and, "Eventually the platform--" In this case he's talking about, like, a therapy platform. "--might offer a Spotify inspired quote 'therapy mix' that curates a unique blend of different therapeutic approaches based on your engagement history." So just bringing kind of like a collaborative filtering recommendation engine to your therapy, because, you know, everything needs to follow the logics of recommendation systems.
Emily M. Bender:Yeah, yeah. And that we can make money off of.
Alex Hanna:Yeah. It's also interesting too, because, um he spends a lot of time talking about the, um, the, uh, um, was it the Koko example?
Emily M. Bender:Yeah.
Alex Hanna:Which we talk a little bit about in our book, "The AI Con," out May 13th, uh, pre-order at TheCon.AI. Um, and, um, so he spends a bit of time talking about, uh, about that, and basically, like, listen, at first he says, like, maybe Rob Morris, the CEO, you know, framed it poorly. Um, um, but actually the problem was the social media users, actually--like, the children are, are the ones who are wrong, actually. The, the, the Principal Skinner, uh, meme of, "No, it's the children who are wrong." Effectively saying, like, did any of these people even use Koko? How do you know? And I'm just like, reading this, I'm like, did you talk to any neurodivergent people, or, like, who are the people that you know, and, um, like, are they the ones engaging with this stuff?
Emily M. Bender:Yeah. Oof. Um, okay. We've got one last thing on this chapter because it was just so terrible. Um. Again, like, suggests that an inanimate chatbot will be a good, um, uh, you know, conversation partner when you're feeling down. He says, "Billions of people form some of their most meaningful relationships with dogs, cats, and other animals that have a relatively limited range of communicative powers." And so therefore, talking to ChatGPT is good. Like, it makes no sense. Um, all right. Oh, and actually, okay, great comment in the chat, um, about, like, why even at first blush this would never be a good idea, even if you could actually program in all the EBPs, right? Um, uh, Domodon says, "But I've only thumbs-upped the AI therapists who co-sign all my nonsense."
Alex Hanna:Yeah.
Emily M. Bender:Right. Therapy is not, not about just repeating back something that you want to hear.
Alex Hanna:Yeah.
Emily M. Bender:Um, okay. Next chapter.
Alex Hanna:Okay. We got just, we have 12 minutes. I don't know how much we're gonna get through in here. Um. So, yeah--
Emily M. Bender:Pick it up a little bit, but yeah. Um, so, "The Triumph of the Private Commons," and he's talking about the commons and immediately forgetting that there's anything about shared governance of the commons. Um, but, uh, he also wants to talk about how using data is not extractive, because you just make a copy and the data is still there and it's not doing any harm. Um. And is there anything in here that you wanna particularly get to, Alex?
Alex Hanna:Well, I mean, the only thing is that basically he, he reads Shoshana, uh, Zuboff's, um, "Surveillance Capitalism," basically as a terrible critique. It's effectively the critique of, like, well, data is, you know, you just make a copy, right? It's fine, as if there was no labor that went into that. And it, it's sort of the worst kind of discussion of data governance I've come across, or, like, the worst metaphor. Um, so actually instead of the exact, like, the exact thing, um, on page 86, he says, I wanna get to this data agriculture thing. So he says, "So rather than--"
Emily M. Bender:Am I in the right place?
Alex Hanna:Yeah, yeah, you're, it's the page before. And he says, "So rather than quote 'extraction operations,' we see something more akin to data agriculture. Instead of Big Other usurping value from users, we see a mutualistic ecosystem of developers, platforms, users and content creators whose interactions and contributions collectively enrich the lives of billions of people every day. In a sense, what we've seen in the commercial internet era is a new private, uh, a new kind of--" What he calls 'private commons,' which is a fucked up phrase. "In the age of AI, it's on the verge of growing even more fruitful." And this metaphor of data agriculture is so bizarre. I'm thinking about the, um-- Luke Stark and Anna Lauren Hoffmann actually have this great paper on, um, data metaphors, which is called "Data Is the New What?" Um, and it's sort of, like, agriculture is a bizarre metaphor, as if data was a renewable resource. And knowing that, fuck, you know, agriculture is fucked up too, between factory farms, CAFOs, Monsanto. I mean, like, maybe it's not the metaphor you want, but the idea--
Emily M. Bender:Yeah. Maybe we don't want to aspire to that. Yeah, so there's one other bit in here from a bit earlier in that chapter where he's paraphrasing Zuboff and, uh, claims that she says--and I think this is not wrong, that she said it--"'Democracy optimizes, that is, implodes into market-driven totalitarianism.'" And I just gotta believe that is landing differently in 2025 than they thought it would when they wrote that in 2024. Okay. I think, I think we're done with this chapter.
Alex Hanna:Yeah, yeah.
Emily M. Bender:Oh no, there's one we gotta do. Wait, is this still the same chapter? Yes.
Alex Hanna:Which one are we?
Emily M. Bender:Um, so, he's talking about how all of these private commons are really beneficial for everybody. Um, and he says, "None of these various labels--" So gig economy, Web2.0, sharing economy, surveillance capitalism, none of those "aptly convey the emergence of free and near free life management resources that effectively function as privatized social services and utilities. The welfare state moving at the speed of capitalism."
Alex Hanna:Oh God. Yeah. And it's, yeah, the whole, the whole setup here is absurd. It's sort of the idea that big tech are offering platforms as a commons, which is a misreading of the commons if I've ever heard one.
Emily M. Bender:Yeah. Um, okay. We gotta get to chapter five.
Alex Hanna:Mm-hmm.
Emily M. Bender:Um, "Testing, Testing, One, Two, Infinity." And I was so mad when I saw this title because of course we have a "Testing, Testing, Testing, 1, 2, 3" in ours, but we're talking about evaluation and saying sensible things about evaluation. And he's in here just, like, getting it all wrong about evaluation. Um, starting with, "As a rule, AI development does not attract the kind of personality types who go with their gut, play it by ear or trust their inner voice. Instead, it's a domain largely populated by extreme data nerds who love testing things even more than a TikTok influencer loves seeing a hot take go viral." And it's like, no, the AI bros actually have terrible evaluation practices.
Alex Hanna:Yeah.
Emily M. Bender:Um, and he's like presenting the Turing test as a good test. Right. Um, and then talking about how, he says, "Today, AI developers test model performance in hundreds of different ways. Then they invent more tests to measure the efficacy of those tests. Then they publish white papers documenting their findings." I was like--
Alex Hanna:Yeah, it's just, it's absurd. I mean, we've talked about, we've talked about benchmarking so much. We have multiple papers on benchmarking.
Emily M. Bender:Exactly, yeah. But just the fact that he's like white papers, that's a good thing. It's like, no, that's skirting peer review is what that is.
Alex Hanna:Yeah. I mean, you're effectively, I mean, there's a whole, we don't have to spend time, but if you wanna read that, check out "AI and the Everything in the Whole Wide World Benchmark." Um, and then, um, there's a few things here. I mean, I kind of wanna go through this quickly, because I want to get to some of the other stuff, but one thing I do wanna mention, there's just, like, a piece where he mentions Weizenbaum, and I would at least like to get to that. Yeah.
Emily M. Bender:Where is Weizenbaum? Page 123.
Alex Hanna:He's on page 123. Yeah, so he's talking about Jack Stil--uh, Stilgoe, um, who is a great STS scholar and something he says about, more about like, um, questions. But he is basically saying like, Weizenbaum himself. Um, he mentions Weizenbaum earlier here, 'cause he says by way, "by way of Weizenbaum." So I thought he talked about him a little earlier.
Emily M. Bender:So they said, uh, um. Uh, Stilgoe has this long thing about Weizenbaum, it's a quote.
Alex Hanna:Oh yeah. He says, oh yeah. So he says, "For society makes--" So Stilgoe, uh, Stilgoe actually has a really good, uh, thing on, like, AI accountability. He has a book on, on, like, uh, Uber self-driving. Anyways. But he says, "For society to make decisions about AI, we should instead look to another great, um, late 20th century computer scientist, Joseph Weizenbaum. In a paper called 'On the Impact of the Computer on Society' in Science, Weizenbaum argued that his fellow computer scientists should try to view their activities from the standpoint of a member of the public." All right, so he basically goes, like, okay, we need public deliberation. But I mean, the way that Weizenbaum framed the public and the way that, um, Hoffman and Beato are viewing the public, like, the public is effectively, like, users. Um, and you basically have to opt in to using the product to have any kind of say in it, which is not democratic deliberation.
Emily M. Bender:No. And also it sets aside the whole fact that people can be impacted by somebody else using software. Right? Yeah. Yeah. Whew. Okay. So, um, I think that's it for chapter five. Um, I'm gonna take us into chapter six. Um, we need the title of the chapter, "Innovation is Safety." Right.
Alex Hanna:Yes.
Emily M. Bender:And, uh, sorry, I need to get my notes so I know we're going to page 138. And bear with me here. There we go. Okay. Um, so. Talking about the Future of Life Institute's pause letter, um, which, by the way, um, together with Dr. Timnit Gebru, Dr. Meg Mitchell and Dr. Angelina McMillan-Major, the, um, named co-authors of the Stochastic Parrots paper, we had a response to, which Timnit was, I think, just recently posting about again. Um, but in the Future of Life's letter, um--this is a ridiculous, like, X-risker letter--they say we have to stop this, um, and ensure that systems adhering to safety protocols are "safe beyond a reasonable doubt." And then, uh, Hoffman and Beato say about this, "In the Future of Life Institute's bizarro world version of 'beyond a reasonable doubt,' they advocate locking up a technology because we can't be sure it won't do something bad someday, maybe." And it's like, okay. Due process is about people and the court system. And you know, I'm not in here defending the Future of Life Institute, but what Hoffman and Beato said there is ridiculous as well.
Alex Hanna:Yeah. I mean, he talks a lot about, he dogs a lot on the precautionary principle, which comes a lot from, um, environmental protection. And it's just, like, environmental protection under this administration--I mean, again, it hits different right now.
Emily M. Bender:Yeah.
Alex Hanna:Uh, very ridiculous. Uh, basically dogs on, uh, GDPR, um, for taking a precautionary approach to data, but it's really silly how he frames it, 'cause it's like, that wasn't just precautionary. There was actually a lot of that stuff already happening, and many people have been hit with GDPR fines. Uh, so that's not even the right framing of GDPR.
Emily M. Bender:He says somewhere, um, if harms aren't yet tangible, if there aren't yet any tangible harms-- I'm like, there are.
Alex Hanna:Yeah, there are.
Emily M. Bender:Uh, for all that 1984 is cited in here, I love Abstract Tesseract's thing: "Innovation is safety. Competition is regulation. Ignorance is strength." Indeed. And speaking of ignorance, I wanna talk about, um, this thing in, uh, I gotta get chapter titles. Um, chapter seven, "Informational GPS." And he's got this whole ridiculous metaphor of, um, uh, GPS and sort of maps, how maps on demand allow us to navigate the physical world and our LLMs are gonna allow us to navigate the information world, which just, like, doesn't make any sense at all.
Alex Hanna:It's also great because, effectively, if we were gonna rewrite this chapter, the subhead would be, 'Maps help us to become gentrifiers and colonizers.' 'Cause there's a piece you highlighted, Emily, where your note was just 'Colonizer much?'
Emily M. Bender:Yeah. It's got this, yeah. Anyway, um, I can't find this now. Oh, 'cause it's 163, but I'll just read it off of my notes. Uh, he writes, "Even more--" They write,"Even more than search or Wikipedia, LLMs can provide clear and easy to access starting points for information gathering. Instead of typing queries into Google and then trying to evaluate which links are genuinely helpful, you can just start having a conversation with an immediately responsive and informed guide." And like again, he does not understand how knowledge systems work. Yeah, the sense making is the point. That process of evaluating which links are genuinely helpful is what you should be doing. Um, and Magadin in the chat says,"Librarians are the informational GPS, um, that allow us to navigate the informational world." Um, without dehumanizing librarians, I would say librarians are the ones who know how to operate that informational GPS. Yeah. Um, yeah. All right, um, we, we've got, there's, there's-- Where is that big howler? Um.
Alex Hanna:Yeah. I mean we have enough time for--
Emily M. Bender:All right. I wanna do this one here on page 205. Yeah. And then you can pick one and then, yeah.
Alex Hanna:Uh, I'll do the, I'll do the Luddites one, which is--
Emily M. Bender:You do the Luddites one. Okay. And then, and then we gotta get to that one that happened in California.
Alex Hanna:Oh, what is that? Wait, what? What is that?
Emily M. Bender:In, in California in the 1800s, if you know what I'm talking about.
Alex Hanna:Yeah, yeah. I mean, that's, that's sort of, is it, is that in the same chapter or is it in a prior one?
Emily M. Bender:Is it? I don't know.
Alex Hanna:Yeah.
Emily M. Bender:Okay. Um, so this, I just gotta point out. So, so Hoffman is being lauded as, like, the one tech billionaire who didn't show up at Trump's inauguration. And yet he has this paragraph: "In many ways, a system like this might be likened to America's Ballistic Missile Defense System or the National Airspace System, or even a big, beautiful border wall. Each of these is an instance of complex security infrastructure designed to protect the entire country from a certain type of threat." And it's like, you don't get to do bootlicking. Ironically, that doesn't exist. So, like, what the hell?
Alex Hanna:Yeah, this is, this is some hedging, uh, uh, and, which, I guess, good thing that he hedged, um, to get, to get conservatives on his side.
Emily M. Bender:All right, where are you taking us?
Alex Hanna:Uh, we gotta go to the Luddites, you know? So he has this long discussion--he's like, what if the Luddites had won? And he effectively has this long fanfic of, like, well, if the Luddites had won, England would've been stuck in the, you know, in the Dark Ages, and they would've made some really, really nice blankets. Uh, and, and, and that's, that's a great thing. Surprisingly, he cites, uh, Brian Merchant's history, um, which, um, makes me feel like Beato is the one who read it. Just, just a thought. I don't know. Um, I feel like that's a, that's a thick book and, uh, Reid Hoffman is, is doing 10 other things. Um, and so he's basically saying that, like, everybody else would've innovated around them, so he makes kind of a nationalistic argument. And he's like, you know, and that child labor stuff, and that workplace protection stuff, you know, everybody else would've, like, fixed that. I'm like, how? I'm like, motherfucker, how do you think we get to those things? It is from labor struggle and, like, people dying in the process of that. That doesn't just happen.
Emily M. Bender:Yeah. Yeah. And, and it's also so narrow-minded about, like, well, the possi-- the options are, um, you know, "permissionless innovation," was the phrase that he is using, or absolutely oppressive, you-can't-do-it-until-you-prove-it's-completely-safe regulation. It's like, that's never been the range of options available. Um, okay. So I think, I think we have to do just two last things. Um, one is, this is, this is so over the top?
Alex Hanna:Yeah. The, yeah. The Donner, the Donner Party.
Emily M. Bender:Yeah. Um, so he characterizes--the end of this recounting of the Donner Party is, um, "those rugged individualists in the Donner Party, who had to resort to the ultimate form of communism to survive a wrong turn they made on the untrammeled frontier."
Alex Hanna:Yeah, so, so the ultimate form of communism is cannibalism, if you didn't know. If you read Marx's Critique of the Gotha Programme, lower communism is just, you know, you just nibble on some fingers, but then like the higher form of communism, it's when you're just eating your bros.
Emily M. Bender:Uh, and so there's so much bullshit in this book. We didn't even scratch the surface. Um, I wanna point out one last thing without digging it up, which was that they talk about this, um, actually pretty cool sounding platform called Pol.Is, which is for doing sort of distributed discussions of, um, public issues. And they keep talking about it as, here's how AI is gonna help make a more engaged citizenry. And if you go to the Pol.Is website, um, guess what? They don't say AI anywhere. Um, they do have a little bit of machine learning, and it has to do with how they cluster, um, things that were entered into the system--not based on any text processing, they do no text processing--but just based on how different participants are reacting to them. And so it's not AI, it's not sold as AI, and it's sort of like the one credible sounding thing that they've got in here. And of course it's not AI.
Alex Hanna:Yeah. And they've got, like, one thing I did want to highlight, and I didn't get to look up some of this, but there's some really interesting stuff in some early internet studies literature on kind of, like, participatory, uh, deliberation platforms. Stuff that's, like, 20 years old. Um, and I think there's some mention of it either in Erik Olin Wright's Envisioning Real Utopias or, um, I'm going to mess up his name 'cause it's Italian, but, um, Gia-- I think it's, I'm not gonna say it, but it's on participatory budgeting in Porto Alegre. If you know this off the top of your head, um, uh, then you can do it. But a lot of that stuff is not about, like, it's, it's about access. I mean, if you wanna do that stuff right, you have to actually be in communities having those discussions, and maybe there's an electronic component, and AI ain't it.
Emily M. Bender:Alright, so Alex, you had a nice theory about what this book was, who it was written for. Like who their intended audience is. And you wanna share that?
Alex Hanna:To me, like, what made this book palatable, beyond the ridicule that we usually go through, is that this is, like, kind of a death rattle. This is an attempt to get investors who are starting to get cold feet on board. We've seen that piece by, um, that person at Sequoia Capital who's basically like, we have to generate $600 billion for this to turn a profit. Um, there was that Goldman Sachs report. Um, Satya Nadella basically said AI's not generating any productivity gains right now. Um, even if it was written, kind of, like, in a 2023-2024 timeframe, you gotta think that, like, Hoffman as an investor is like, listen, I'm doubling down on my OpenAI investments right now, and you should too. So to me this, this really yells, cry for help.
Emily M. Bender:Yeah. Let's hope. Um, so musical or non-musical for the, um, uh-- And folks in the chat name some animals while Alex is deciding on musical or not musical.
Alex Hanna:Yeah, there we go. Yeah. Give me, give me a word. Uh uh
Emily M. Bender:So what's gonna happen is hopefully people are gonna put some animals in the chat. Okay. Capybara is great. So you are the lead singer of a band made of capybaras who are singing their hearts out to tell the tech bros what they actually need to have a better food source and living conditions.
Alex Hanna:Oh yeah. So immediately, uh, I don't know how to say that, Faster And Worse. Uh, echidna, isn't that what Knuckles is from Sonic and Knuckles?
Emily M. Bender:I think it's echidna, isn't it?
Alex Hanna:Oh gosh. Um, uh, echidna, uh, I can't say echidna. Okay, now, now I have, but I can't, what rhymes with echidna? But cap-- so, but anyways, capybara. All I'm thinking about are those capybaras that are in that one, like, um, steam bath in Japan, the one with the lemons in it. Um, and so I'm just thinking of, like, Reel Big Fish, like, um, like, what is it called? Jam rock or jam band, just, like, playing a, a smooth beat. 'Hey man, all you need is love, sitting in our bath. Getting mana from above. Hey man, that's all we want. Stay outta AI Hell, all you gotta do is stop and think about how your inner child can yell.' That was my, like, chill.
Emily M. Bender:I love it. We needed, we needed a chill thing this time. All right. Even though we went through a whole book of Fresh AI Hell, we still have the rapid fire ones for you. Um, why don't you take this first one, Alex?
Alex Hanna:Sure. So this is from The Guardian. Uh, the title is, "'I Want Him to Be Prepared': Why Parents Are Teaching Their Gen Alpha Kids to Use AI. As AI grows increasingly prevalent, some are showing their children tools from ChatGPT to DALL-E to learn and bond."
Emily M. Bender:Bond, ugh!
Alex Hanna:And yeah, gross. This is by Aaron Mok, um, published this, uh, past Saturday, March 1. Um, really also, I want to point out, I got this from somebody on Bluesky. They were also noticing that, um, the image that they have here is, like, um, kind of a collage, but they've got these, um, two, um, Black children wearing headphones, and it's very, and it was constructed by The Guardian, and I'm just like, yeah. To me this is, like, this is really diversity washing. You're just like, oh, you know, technology is good, look at these, you know, these Black children using it. Anyways. So it talks about this, uh, this guy Jules White, "who says he used to believe his 11-year-old son needs to know how to code to be successful." Uh, well, first off, that premise sucks. Uh, "Now though, the Vanderbilt computer science professor--" 'cause of course he is, "--says it's more crucial for James to learn a new, more useful skill: how to prompt AI chatbots."
Emily M. Bender:All right, next, which gets even grimmer, if that were possible. Um, so MIT Tech Review, James O'Donnell, February 27th, 2025, sticker is "artificial intelligence." Headline is "An AI companion site is hosting sexually charged conversations with underage celebrity bots. One chatbot on Botify AI that resembled the actor Jenna Ortega as a teenage Wednesday Addams told us--" 'Told us,' so the bot output the string, "--that age of consent laws are, quote, 'meant to be broken.'"
Alex Hanna:Oh my God.
Emily M. Bender:This is so gross. And like over and over and over again too, like how many stories have we seen like this?
Alex Hanna:Yeah. It keeps happening.
Emily M. Bender:Yeah. Alright.
Alex Hanna:So this next one from TechCrunch: "Y Combinator deletes posts after a startup's demo goes viral," by Charles Rollet. Uh, posted February 25th, 2015, or sorry, 2025. Sorry. Uh, a demo from, I think this is pronounced Optifye, which is O-P-T-I-F-Y-E dot AI.
Emily M. Bender:I think we decided beforehand, Alex said it's pronounced Opti-fuck if I know.
Alex Hanna:Yeah. That, that was very good. Opti-fuck if I know. "A member of Y--" So they're a member of Y Combinator's current cohort. "--sparked a social media backlash that ended up with," uh, YC deleting it off its site. And, um, so, "The company says it's building software to help factory owners know who's working and who isn't in real time, thanks to AI-powered security cameras it places on assembly lines, according to its YC profile." Um, and then there's a video, and I mean, yeah, we're, we're doing, uh, we're doing Taylorism with this one. I mean, this is really going and taking scientific management to its extreme. Just extreme boss-ware. Um, yeah.
Emily M. Bender:What I love about the story was that Y Combinator deleted this off of its social, so they got shamed. Someone managed to shame Y Combinator about this, which is--
Alex Hanna:If you're, if you're able to shame Garry Tan, like, that's--
Emily M. Bender:That's pretty amazing.
Alex Hanna:That's incredible.
Emily M. Bender:Yeah.
Alex Hanna:More of that, internet. Thank you.
Emily M. Bender:And apparently it involves a video that shows this boss saying, "Hey, number 17, what's going on, man? You're in the red." Um, and, like, yeah. So, okay. Um, Alex, I'm gonna take this one and then we'll save that one for you. All right. We've got two, two good news ones here.
Alex Hanna:Yeah.
Emily M. Bender:Um, because wow, today was heavy. Um, so this is, um, something out of New Zealand, February 28th, 2025. Um, uh, the New Zealand Herald is the name of the, uh, site, and the headline is "University of Auckland students criticize introduction of artificial intelligence tutors in Business and economics course," by Raphael Franks and Benjamin Plummer. And basically this is just, the kids are all right. So, the bullet points at the top: "Students at the University of Auckland have raised concerns about a decision to use AI tutors in a course. The university claims AI tutors are supplementary and won't replace in-person teaching, emphasizing AI's importance in marketing. Students argue AI is inaccurate and express concerns about learning quality and course fees." So go students.
Alex Hanna:Nice.
Emily M. Bender:"'Complete bullshit,' one student enrolled in the course said." Or complete bull-beep in the newspaper article here. And finally--
Alex Hanna:Yeah. This last one, this is on MSN, but it looks like it originally appeared in the Monterey Herald. Yay, local news. Um, "Faculty union opposes AI initiative being pushed by CSU," by Andrea, uh, Valadez, um, published one day ago, so March 2nd. So, "Despite enthusiastic announcement by California State University--" Which I think we covered on this podcast. "--management surrounding a first of this kind artificial intelligence platform, faculty are coming out saying that they were not consulted and do not necessarily support the top-down directive." This is from nearly 150 California Faculty Association members. Yay, CalFAC. Uh, I remember doing an event with them about, uh, 10 years ago and, uh, got to speak at a lot of CSUs, and those kids are great and so is that union. Um, "Those members held an organizing and mobilizing conference at San Francisco State, uh, last weekend to share opinions and strategies on how to move forward following recent dealings with the CSU. One topic discussed was the faculty's opposition to its embrace of artificial intelligence on its 23 campuses." And the bargaining chair, Kevin Wehr, says, "We have some real concerns and some real questions. Faculty were not consulted and nobody asked our opinion." So yay to unions, who have a major, major role in pushing back against this stuff in our schools and in our institutions.
Emily M. Bender:Yeah. Um, and so we ended with some, some pushback and good news, and I, um, wanna hear more of that.
Alex Hanna:Yeah. You know what, if you, again, if you want us to know what capitalism smells like--
Emily M. Bender:Yeah.
Alex Hanna:Uh, no, we won't end on that. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park. Production by Christie Taylor. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Rate and review us on Apple Podcasts and Spotify. Pre-order The AI Con at TheCon.AI, or wherever you get fine books. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown, or donate to DAIR at DAIR-Institute.org. That's D-A-I-R hyphen Institute dot org.
Emily M. Bender:Find all our past episodes on Peertube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again, that's D-A-I-R underscore Institute. I'm Emily M. Bender.
Alex Hanna:And I'm Alex Hanna. Stay out of AI Hell, y'all.
Emily M. Bender:And no more books please.
Alex Hanna:No more books. Sorry. No more psychic damage. My, I, I dropped my die, but I would roll for, um, I would roll for hit restoration. But there it is.