Mystery AI Hype Theater 3000

Episode 45: Billionaires, Influencers, and Ed Tech (feat. Adrienne Williams), November 18 2024

Emily M. Bender and Alex Hanna

From Bill Gates to Mark Zuckerberg, billionaires with no education expertise keep using their big names and big dollars to hype LLMs for classrooms. Promising 'comprehensive AI tutors', or just 'educator-informed' tools to address understaffed classrooms, this hype is just another round of Silicon Valley pointing to real problems -- under-supported school systems -- but then directing attention and resources to their favorite toys. Former educator and DAIR research fellow Adrienne Williams joins to explain the problems this tech-solutionist redirection fails to solve, and the new ones it creates.

Adrienne Williams started organizing in 2018 while working as a junior high teacher for a tech-owned charter school. She expanded her organizing in 2020 after her work as an Amazon delivery driver, where many of the same issues she saw in charter schools were also in evidence. Adrienne is a Public Voices Fellow on Technology in the Public Interest with The OpEd Project in partnership with the MacArthur Foundation, as well as a Research Fellow at both the Distributed AI Research Institute (DAIR) and Just Tech.

References:

Funding Helps Teachers Build AI Tools

Sal Khan's 2023 Ted Talk: AI in the classroom can transform education

Bill Gates: My trip to the frontier of AI education

Book: Access is Capture: How Edtech Reproduces Racial Inequality
Book: Disruptive Fixation: School Reform and the Pitfalls of Techno-Idealism

Previously on MAIHT3K: Episode 26, Universities Anxiously Buy Into the Hype (feat. Chris Gilliard)
Episode 17: Back to School with AI Hype in Education (feat. Haley Lepp)

Fresh AI Hell:

"Streamlining" teaching

Google, Microsoft and Perplexity are promoting scientific racism in 'AI overviews'

'Whisper' medical transcription tool used in hospitals is making things up

X's AI bot can't tell the difference between a bad game and vandalism

Prompting is not a substitute for probability measurements in large language models

Yet another 'priestbot'

Self-driving wheelchairs at Seattle-Tacoma International Airport


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Alex Hanna:

Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.

Emily M. Bender:

Along the way we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, professor of linguistics at the University of Washington.

Alex Hanna:

And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 45, which we're recording on November 18 of 2024. This week, we're returning to the classroom and the philanthropists and influencers who are using their big names and big dollars to hype up LLMs as solutions for educators. We've talked about this problem before, but I regret to say the bullshit continues.

Emily M. Bender:

From Mark Zuckerberg to Bill Gates, it's interesting, to say the least, how many people without education expertise are trying to, in quotes, 'solve education' as if it were a technological, quotes, 'problem.' Promising, in quotes, 'comprehensive AI tutors' or just 'educator-informed tools' to address understaffed classrooms, this hype is just another round of Silicon Valley pointing to real problems, here under-supported school systems, but then directing attention and resources to their favorite toys.

Alex Hanna:

Today's guest is DAIR's own research fellow, Adrienne Williams. She's also an organizer, a former educator, and has direct experience with how educational technology shapes classrooms, usually for the worse. Welcome, Adrienne!

Adrienne Williams:

Hi!

Emily M. Bender:

Thank you so much for joining us. I'm super excited to have your expertise on the show with us. Um, and I want to also shout out the chat. Thank you to those who are watching this live on the Twitch stream. People who've been here before know that at the end, we're going to do, uh, the Fresh AI Hell and we like to do, or rather make Alex do a little improv. So this time I thought we could do some audience participation. If you have a brainstorm along the way about what would be a great musical prompt for her, drop it in the chat and maybe we'll use it, but I can't say we'll use it for sure. Otherwise Alex is going to have too much time to plan ahead.

Alex Hanna:

Oh gosh. Yeah. Don't make me think, I'll have contingencies. Make it a true improv.

Emily M. Bender:

Yeah, exactly. All right. So we are going to start with the first artifact here. This is an article on the ChanZuckerberg.com website, love how the philanthropic thing's a dot com, from October 11th of this year. Headline, "Chan Zuckerberg Initiative Commits Funding to Help Educators Shape How AI Will Be Used in Classrooms." And the subhead is, "Head of Education, Sandra Liu Huang, announces three new grants focused on increasing collaboration between teachers and edtech developers." Alex, you want to take us in and we can hear from Adrienne about what this actually is.

Alex Hanna:

Yeah. So, so the byline, I mean, this is a press release, but it's written, you know, to appeal to actual press. "Anaheim, California -- the Chan Zuckerberg Initiative, or CZI, today announced three grants supporting artificial intelligence, or AI, educational initiatives to empower educators as co-creators of future technologies. The announcement came during a visit to Dale Junior High School in Anaheim Union High School District, or AUHSD, where CZI Head of Education and VP of Product, Sandra Liu Huang, spent the day learning about the impact of educator-informed artificial intelligence tools in the classroom." And then there's a quote from Huang: "'Educator input is crucial for ensuring AI tools in education are contextually relevant, accurate and aligned with learning,' says Huang. 'By integrating educators' expertise, we can design AI systems that enhance student outcomes and support tailored learning experiences.'" Uh, do you have any thoughts on that, Adrienne?

Adrienne Williams:

Too many. First of all, going back to what Christie said, they're dot com because CZI is an LLC. They say all the time they're a philanthropy, but they're an LLC so that they keep getting to lobby. So they're just, oh god, they're just full of shit every day, all day. Um, also, the idea of educator, like, informed, to me that just screams forced labor. Like, teachers, during your teacher, you know, meeting today, we're going to make you do this stuff you would have to do anyway. And we're going to take it and we're going to make a bunch of money off of it. So I know--and full disclosure, the junior high I used to work for was Summit Public Schools, which was the CZI school they're always talking about. It's their school. And, um, none of, none of it. Oh God, run. If you hear 'personalized learning', run. It's a crock of shit. If you hear, you know, educator-influenced, it's BS. When I quit, they lost six other teachers of color, Black teachers. I don't want to say teacher--they're Black teachers. And then the next year, Priscilla Chan did her yearly annual letter saying they're putting out grants now so that they can figure out how to train and retain Black teachers. And it's like, bitch, if you would just listen to us in the first place, we wouldn't have left. You don't need to train us. We're right here, but you don't listen. So they're full of it.

Emily M. Bender:

And I have to say, when I read this thing, my, my reaction was, there's kind of nothing here. They're not saying anything.

Adrienne Williams:

There never is.

Emily M. Bender:

Yeah. Except all I got out of it was, you know, oh, we're going to make sure that we get, you know, educator expertise on board in how to shape the AI. And there was no like room for the educators to say, hell no, we're not doing that.

Adrienne Williams:

Oh, those educators get fired or pushed out.

Alex Hanna:

Well, it's interesting just to see, like, who is this thing written for? I mean, it uses, it's just very funny that this, this person Sandra Liu Huang is like both the Head of Education and the VP of Product, you know, um, and so, you know, like, and so basically the kind of thing is, you know, who is this appealing to? I mean, to me, it seems like it's appealing to a set of funders. It's not--

Adrienne Williams:

Shareholders.

Alex Hanna:

Yeah, shareholders, funders, I mean, people, because if you say--it's actually, I was looking to the side of this article, which it said related articles, Erud--Eruditus. I don't know what it is.

Emily M. Bender:

Eruditus, I would guess.

Alex Hanna:

"Eruditus secures $150 million series F funding led by--" Whatever. I don't care what the fund's name is, but it's basically the idea is like, okay, this isn't appealing to educators. Like educators are going to have to just basically, you know, they're just going to have to like just heel and take it, whatever they have. They're like funders, please jump on board on our, like on our, on our, on our technology or our technological tool that's going to just revolutionarize, revolutionize, I can't say that word. I'll stick with it, but it's gonna make, you know, completely transform education.

Emily M. Bender:

Yeah. Yeah. So, um, should we jump on over to Sal Khan saying effectively that same thing?

Alex Hanna:

Yeah. I mean, the Sal Khan--yeah, let's go into it.

Emily M. Bender:

Okay.

Adrienne Williams:

It's the industry talking point.

Alex Hanna:

Yeah.(crosstalk)

Adrienne Williams:

--everyone say the same thing.

Alex Hanna:

Yeah. So this next one is Sal Khan. It's the Khan Academy blog and, um, and, you know, I don't, I don't know if we've mentioned Sal Khan on this podcast, but probably. Um, so the title is "Sal Khan's 2023 TED Talk: AI in the classroom can transform education." And it's got the natural, like, picture of Sal Khan, uh, who is, uh, like a South Asian man with a goatee wearing a blue shirt and khakis standing in front of the big red TED sign. Um, and it says, "Sal Khan believes that artificial intelligence has the potential to transform education for the better. Quote, 'We're at the cusp of using AI for probably the biggest positive transformation that education has ever seen,' he said in his 2023 TED talk." Um, yeah.

Emily M. Bender:

And so it continues here. Yeah, right. "Get a glimpse of a new era in education. One where every student has access to an AI-powered personal tutor and every teacher has an AI teaching assistant." This, this is just exhausting and I'm not even in K-12 education. Um, but we should go. So this, we aren't going to play the TED talk, but it'll be in the show notes if you want to go watch it. And it's just so much magical thinking and so much like this vision of a very atomized educational experience. But some of the, and then, and then, okay. Um, the first subhead here is "The 'Two Sigma Problem' and AI's Solution." "Benjamin Bloom's 1984 'Two Sigma' study highlighted the benefits of one-to-one tutoring, which resulted in a two standard deviation improvement in students' performance. Bloom referred to this finding as the 'Two Sigma Problem,' since providing one-to-one tutoring to all students has long been unattainable due to cost and scalability issues. Sal shared how AI has the potential to scale this tutoring economically and provide personalized instruction to students on a global level with the help of an AI-powered assistant. During his talk, he gave a live demo of Khan Academy's new AI-powered guide, Khanmigo."

Alex Hanna:

Thoughts on that Adrienne?

Adrienne Williams:

Yeah, I mean. My whole thought on this whole thing is that if every teacher had an aide and every principal had a VP and there were actually therapists in the classroom, you wouldn't need these extra tutors, because that's what aides are for. And most of what I did as a teacher wasn't actually the learning. The kids pick up that learning very quickly if they're happy and they're comfortable and they've eaten food and they aren't being bullied. You know what I mean? If they're just there to learn and they like their teacher and their teacher likes them. They learn very quickly. But my main job when I was a teacher was like, how are you? What's going on? I know yesterday was rough for you. Like, how did that happen? How's your relationship with your best friend? I know you guys have been fighting. And then the kid feels better and they're like, okay, I can learn now. But when they have ten thousand other things going on in their head, the last thing that's going to help is some vacant AI bot just saying whatever, hallucinating whenever it wants to. And the kid can't make heads or tails of any of it. And nobody cares that they're frustrated or stuck.

Emily M. Bender:

No personal connection, right? That's just like everyone go into their own little sterile corner with their own little sterile copy of Khanmigo.

Alex Hanna:

It's also interesting--

Adrienne Williams:

And we need to talk about Khanmigo. I mean, that just feels like a weird, funky name.

Emily M. Bender:

Is it supposed to be like Khan Academy plus amigo?

Adrienne Williams:

That's what I think.

Alex Hanna:

That's my, that's my read of it. It's yeah. I mean, yeah.

Adrienne Williams:

They could just call it 'Con'.

Alex Hanna:

I mean, it's, it's really, well, it's interesting because in this, so, so I watched this TED talk, he talks about this quote unquote 'two sigma problem.' Which I mean is Benjamin Bloom, who, I'm not sure his background, I'm assuming he's an educational psychologist. So already, first, it's, it's quite, yeah, he's an educational psychologist. I think he also is very famous for like Bloom's hierarchy or taxonomy. I know you love that word, Adrienne. But like, um, but like definitely categorizing different educational goals. But this is an old--it's a very old article. I mean, it's, it's from 1984, and he's saying, like, you know, that his study highlighted the benefits of one-to-one tutoring, and it's really interesting how he talks about, he says there's a two sigma problem. There's a two standard deviation improvement in students' performance. So to me, this is very, I'm like, first off, very old. There's been mountains of educational psychology research since then. Two, this is the kind of thing I referenced, uh, a few episodes ago, where Ali Alkhatib said, um, you know, this is like a computer science tourist thing. Like, we just dropped in, we thought something sounded cool. Like, two sigma, that sounds like Six Sigma lean certification or whatever. You know, Kaizen, I don't know. Um, whatever, uh, Trello board. I'm just throwing out every project manager word I know. Um, and then we're basically making it such that we're, you know, like, we're going to solve this based on very old kind of, um, stereotypes of educational psychology. Uh, yeah, I'm like, maybe not.

Emily M. Bender:

And on top of that, the graph that he shows in the, in the, uh, TED talk from that, uh, Two Sigma study is very like bell curvy, right? So you've got the basic bell curve, and then here's what happens with a certain kind of, uh, input here. So that's another kind, and it's a very vague graph. Um, like it's okay. What exactly are the axes? What are you measuring here? What's been boiled down into those bell curves, but also how do you not get itchy talking about students in terms of bell curves?

Alex Hanna:

Yeah. That is true.

Emily M. Bender:

So. In the next bit of this, uh, the next headline here, or the next subhead is "Khanmigo, a comprehensive AI tutor." "Instead of worrying about students using AI to cheat, Sal said we should focus on the positive use cases. Khanmigo not only detects students' mistakes, but it also identifies misconceptions in their understanding and provides effective feedback. It can help students with math and computer programming exercises and can provide context-aware help for video content." And this, I was so angry. It's like, it is not. It's not actually identifying misconceptions. It has no, there's no personal connection there, right? There's, there's no connection between student and teacher or student and tutor. It's just spitting out some text.

Adrienne Williams:

Yup.

Emily M. Bender:

And oh, cat alert, I've got Euler here, right nearby so we might get some purring on mic.

Adrienne Williams:

Can I also say that like when my kids were on the Gradient Learning platform when I was teaching, we really do our kid--well, they, I feel like do my kind of kids a disservice. Because I was in Richmond, California and it's perceived that these are just low income, don't want to learn, kids. It's not even that they think they can't learn. It's like, well, these kids don't want to learn and their families don't care about them. My kids--and I don't know why I was the teacher that they would tell everything to. Cause sometimes I'd be like, you know, I'm a teacher, right? Like I'm going to, I have to tell somebody, why are you telling me this? They would tell me like, oh, Ms. Williams, we cheat. We know how to cheat on this. All you gotta do is this, this, and that. And then we just cheat. Every technology that comes out, the kids, even in the schools that have been disregarded as being intelligent, the kids figure out how to cheat. The only way they don't cheat is when you say, we're going to have a discussion and I want to see who read. So let's just talk about this. It's actually not as complicated as they make it seem in all of these TED Talks and all of this stuff. Teaching to me, wasn't that complicated. It's like, here's what I want you to know. Here's how I get you in the mood for learning. And then we're going to just discuss it until I know that you know what's going on. I didn't even have very many tests in my classroom because I could tell which kids knew what they were talking about and which kids didn't. And then the ones that didn't, you just hold them back for a second. Come see me at lunch, the three of you, and let's discuss this. And why didn't you get it? And what happened? And many times, it's not that they didn't want to do it. It's that they have a ton of other stuff going on.

Alex Hanna:

Right.

Adrienne Williams:

And it's hard for me to, to kind of wrap my head around the fact that any of these dudes, who seem like a lot of trust fund babies, even have a concept of the type of stress and responsibility that kids like mine had to deal with. So yeah, of course they're distracted, but none of that is taken into consideration. And they're just like, we're going to make it cheat proof. I bet a hundred dollars on it. You can't make it cheat proof.

Emily M. Bender:

Yeah, yeah.

Alex Hanna:

Well, it's interesting. I mean, sorry, go ahead, go ahead.

Emily M. Bender:

Oh, I was just gonna say, they're, they are coming in. They are assuming the problem is people aren't high enough on that score, whatever that score is, and assuming that that is just because they don't have access to how they imagine education happens, and then trying to build a fake version of that. As opposed to actually getting in--and maybe actually not getting in at all. Maybe just standing back.

Adrienne Williams:

Maybe maybe going away.

Emily M. Bender:

Yeah, going away, paying more taxes so that we could actually fund the schools better.

Alex Hanna:

Yeah, yeah, I mean, the interesting thing about this too is like, yeah, I mean, I, I appreciate what you're saying, Adrienne, and the kind of idea of there's a particular vision of like, who is the right audience for these tools, you know, it's like, okay, we're gonna--these things and, and Gates, in the next artifact Gates talks about this in terms of like equity. So it's like, oh, we need to have like, it's going to be this thing where, you know, students who are, don't have access to one on one tutors are going to get that experience in these disadvantaged schools, you know. And we went to this disadvantaged school and these students got the kind of one on one thing that my kids would get, you know, and, and there's like this opposition to it. They try to make it relatable by basically saying like, my kid uses it all the time. And--

Adrienne Williams:

Lies.

Alex Hanna:

--You know, like, and I love tutoring my kids. I was like, Sal Khan, you, I'm, I'm sure like you spend a lot of time with your kids, but you know, you are not the main instructor in their lives. And so to be acting like such, and to act like your students don't have an immense, kind of immense privileges just by nature of you being this technological, you know, tech, tech Mongol right now, mogul, sorry. Mongol. That's a different--

Adrienne Williams:

You got all the words.

Alex Hanna:

I know I, I I'm bad at wording today. Tech mogul. Mongol is a Mongolian. Yeah. Like, um, yeah, as if you're a tech mogul that doesn't have access to just immense amounts of um, social and cultural capital.

Adrienne Williams:

I also think there's a misconception on like how technology is taught and utilized in really high, um, fancy expensive schools versus what they tell people in lower, um, income schools. I was looking up this school and I can't remember the name of it now, like Lakeview, Lakeside, something where Bill Gates' kids went to school.

Emily M. Bender:

It's Lakeside. It's here in Seattle.

Adrienne Williams:

Okay. There we go. And I'm looking up, it sounds fabulous. I mean, it sounds amazing. I'd send my kid there. But their way of dealing with technology is they teach all these things. Here's digital literacy. Here's how to build a game. Here's how to do this, how to do--like a real curriculum on how to use technology. Versus low income schools, it's, we're just going to use technology to basically deal with you. We're not going to teach you how to do anything with it. We're just going to use it to kind of police you and keep you in this box. And we're going to take all the data and the information. We're going to use it to build more products so we can get rich.

Alex Hanna:

That's right. Yeah. There's a, there's a book out that makes a bit of this point. Um, Roderic Crooks, who is at a UCI, UCI, I think UC Irvine and, uh, Roderic has a book out that's called, um, ooh, let me get it. It's ahhh, why isn't it on your website right away? But it's basically about surveillance in schools. I mean, effectively the way that, uh, I think it's called Access Denied. Um, but effectively, you know, like these tools when they're used in these neighborhoods have--the dimension of the tech has vastly different form and it relates something to what they talk about, what uh Khan talks about in the talk, where he's saying like well we're also going to allow this, you know, the teachers to like see what the queries are in Khanmigo, right. So you could really quickly imagine how--I don't want to even call it Khanmigo. You could imagine how this AI tool--it's such a stupid name, I'm so sorry.

Adrienne Williams:

It is a stupid name.

Alex Hanna:

It's such a, but it becomes a vector for, you know, for like, for surveillance, you know. Maybe the kids are asking, you know, things like, I don't know, like, they just want to watch, you know, something like, you know, there, there, there's a whole bunch of things or asking about like, some important life things or, you know, whatever. Um, and that just becomes another you know, another venue for surveillance. I mean, we don't have anything from Sal Khan about like, well, what's your privacy policy? Are, you know, can law enforcement access the things that kids are saying?

Adrienne Williams:

They can.

Emily M. Bender:

Yeah.

Alex Hanna:

Yeah. I mean, yeah, that was that was rhetorical because we knew the answer is yes. Right? Yeah.

Emily M. Bender:

So that kind of is. Oh, go ahead Adrienne.

Adrienne Williams:

No go ahead, go ahead.

Emily M. Bender:

So I'm just going to bring us to this next paragraph here because it talks about some of this stuff. So, "Positive and appropriate interactions" is the heading. "We believe that AI can transform education, but we also understand that it comes with limitations and risks," because you've got to have like the little fine print at the bottom here, right? "That's why we clearly communicate these limitations to every parent, teacher, and child who uses Khanmigo." Yeah, right. "We limit the amount of interaction individuals can have with the AI per day. Additionally, each child's chat history and activities are visible to parents or guardians and teachers so they can stay informed and involved in their child's education." There's the surveillance, right? Yeah. Um, "Our platform also uses moderation technology to detect inappropriate interactions. And when it does, it sends an automatic alert, automatic email alert to an adult." So, it's like, yeah, great. Not only is it doing surveillance, but it is like doing push surveillance, right? Even if the adults around are like, yeah, we're going to not spend a lot of time monitoring that, they're going to get this pushed into their inbox, decontextualized. Right. And who knows what happens next?

Adrienne Williams:

And some of those push notifications go directly to police stations. And so that's what makes it even scarier. I heard a story about, um, a kid who was looking up information because his mom had cervical cancer. And so words like vagina and things like that were popping up and it was a push notification about inappropriate content and looking up things that were inappropriate and it got all these bells where you had now police coming to the school and CPS coming. And meanwhile this poor kid is stressed because his mom has cancer.

Emily M. Bender:

Yeah. Oh.

Alex Hanna:

I could imagine--

Adrienne Williams:

There's no human in there to say, no, this needs nuance.

Alex Hanna:

Well, I know that there was another thing, that I know Alejandra, um, Caraballo had written about, about there was this potential for basically like queer words being flagged. Um, and imagine that happening in school districts in states that have anti trans and anti LGBTQ legislation. And I mean, under the incoming regime, Trump regime, I mean, that's going to be another aspect in which--that could be an access to surveillance for undocumented students. Right. I mean, you know, Richmond is an area that has something like, it's like 60 percent Latinx. I mean, it's something of that nature. I mean, it's, it's just East Bay and more generally. Right. So there's huge vectors here of privacy implications that go beyond just the AI, but it's just about, you know, contact management systems and everything else.

Adrienne Williams:

Yeah, it's scary. I know when I was teaching, I had a little girl who was dealing with her dad might get deported and the whole family had to go down to his court. It was all the way in San Diego and these are things she's sharing with me as her mentor because they have a mentor program, which is just a way to get around having real counselors. But we were supposed to type all those notes into our work issued computer. So if you're trying to flag students who have, you know, family members who are illegal, I don't like that word, but who are undocumented. Um, you could just go, they could just hand it over. Here you go. CZI hands it right over because you know, they're into favors from the government. So here you go. Or, talking about abortion. I've had kids come to me talking about sexual violence, having abortions. Okay. So we're going to hand that information over. So what are we going to arrest a 15 year old now and put her in jail because she had some horrible experience?

Alex Hanna:

No, it's it's nightmarish. Yeah.

Adrienne Williams:

It is.

Alex Hanna:

One thing I want to touch on before I leave this, you know, to the Bill Gates article is if you watch this video, it's like, it's real--near the end. It's like this last thing,"Your role in the future of AI." And I want to read the text and then reiterate something he says in the video. So he says, "Sal emphasized that everyone has a role to play in shaping the future of education. Quote, 'We all have to fight like hell for the positive use cases,' he said." No, I don't want it.

Emily M. Bender:

And this part was infuriating because he basically says, you know, some people are excited about AI and some people think it's gonna come kill us all, so we have to fight like hell for the positive use cases. And I'm like no, we have to fight like hell against this nonsense and against redirecting what resources there are for education into the tech LLCs' pockets. Like that's what we should be fighting.

Adrienne Williams:

Right. And there should be some regulation around forcing you to not even be able to implement it until your positive use cases are the majority. Right now they're like, we don't care if only one of y'all wins. Look, we did it. Wow. It's really scary.

Emily M. Bender:

And then on top of that, data minimization. Because one of the things that is very clear from what you're saying, Adrienne, and very unclear to Sal, apparently, is that kids bring their whole selves to school. Their whole lives are going on. And so if you are setting things up so that conversations are being captured all the time, you are going to be putting them at enormous risk because things like data should not be captured, except like we know it's being captured and everybody involved is aware and it is in a space that's not preventing them from bringing their whole selves to school and dealing with what they need to deal with.

Adrienne Williams:

Yeah, and I certainly, for one, wouldn't have wanted people to capture my junior high thoughts and ideas.

Alex Hanna:

Definitely not. And he makes this point. He says--this isn't in the written account, but he says something like, he basically makes this anti-regulation argument where he says, look, we're gonna, we're gonna come up with good use cases. But like, those bad actors are gonna--and he cites like hackers and authoritarian governments, as if, you know, as if there are not people in the, in the U.S. who want to target minoritized populations--like they're going to use this in a bad way. And we need, and he's like, you know, and he, he says like, you know, we need smart regulation--and that for me is like a, it's, it's a, it's a dog whistle for effectively saying like we don't want regulation--to restrict the quote unquote positive use cases. And he's just using this platform like this 'we're one of the good ones', as if the whole project wasn't fucked from the beginning.

Emily M. Bender:

There's a chat here from Kaylee Champion that I want to lift up. Um, they say, "I'm not a Rawlsian per se, but where's the headline about how Lakeside--" That is the school that Gates sent his kids to."--will be replacing its teachers and tutors with AI. If it's so good, presumably the top rated school in the state would be all over it."

Adrienne Williams:

Ding, ding, ding. None of them have it. Exactly. It's only good for our kids. Their kids--there's actually an article I read not too long ago about how the richest among us are having their kids go to schools that actually have an absence of technology. They're writing with pens and pencils. They're going outside in nature to learn about science. They read books on paper, not on tablets. And it's a big push to get away from technology, and the more they push for their kids to get away, the more they're pushing for the technology to be in our schools, in our kids' faces.

Emily M. Bender:

Yeah. Yeah. Alright, so speaking of Lakeside and Bill Gates, also Euler's back, um, let's take a look at, um, Gates Notes, "The Blog of Bill Gates". You know, I'm, I like blogs. I think blogging was a good innovation. I don't know that the world needs Bill Gates's blog.

Alex Hanna:

Bill Gates's blog is like, um, it's just a series of press releases.

Emily M. Bender:

Yeah.

Alex Hanna:

And it's just, just really dressed up like, uh, like a blog.

Adrienne Williams:

The same as CZI. They do the exact same formula.

Alex Hanna:

Yeah.

Emily M. Bender:

Yeah. So the, uh, the sticker here is "School of Thought." Har har. Headline, "My trip to the frontier of AI education." And then subhead, "First Avenue Elementary School in Newark is pioneering the use of AI tools in the classroom." And I just want to flag the, um, colonialist language there in both headline and subhead. Frontier and pioneering is gross. Um.

Adrienne Williams:

I agree.

Alex Hanna:

Yeah.

Emily M. Bender:

So, ah oh. Then he starts with this heartwarming story about how he nerded out at the World's Fair. Okay, fine. Um, let's get down to where he's actually talking about what he saw in the school. Um, "In May I had the chance to visit the First Avenue Elementary School where they're pioneering the use of AI education in the classroom. The Newark School District is piloting Khanmingo, sorry, Khanmigo, an AI powered tutor and teacher support tool, and I couldn't wait to see it for myself." So, um, he talks about how he's written a lot about it. Um, "I was blown away by how creatively the teachers were using the tools. Leticia Colon, an eighth grade algebra teacher, explained how she used AI to create problem sets about hometown heroes the students might be interested in. In February, Khanmigo helped her develop equations that incorporated Newark boxer Shakur Stevenson's workout routines so her students could practice math skills while learning about a real world role model."

Alex Hanna:

This is, this is, this is, this is hilarious. I'm just like, I'm just--ah, man, Euler is so loud.

Emily M. Bender:

Sorry.

Adrienne Williams:

I was wondering what that was.

Emily M. Bender:

She's my cat. All right, I will stop encouraging her.

Alex Hanna:

Yeah, no, it's great. I love, I love Euler, but also very loud. Uh, yeah. And then there's also other aspects of like hooks on assignments. Where it gave her a hook for a generic story about a fruit stand, and then she edited it to be about Pokemon cards and Roblox. And then, and then, and then it says--and this is a funny quote from the, like, from the teacher, which I read as kind of begrudging--where it says, "Khanmigo gives me the blueprint, but I have to give the delivery." It's sort of like, I can't believe we're using this shit.

Adrienne Williams:

Well, when you're teaching, you get so much stuff thrown at you. And it's the most bizarre thing, because it'll be this really complex program, or this really complex, like, set of ideas, and it'll be like, meeting on Wednesday, here's everything, we want it implemented tomorrow. And you're like, huh? And it happens at public schools, charter schools. I don't know about the private schools, but it happens all across the board. So I'm sure she is thinking, like, I guess it gives me a blueprint for something, but I still have to figure out how to do it. And this whole, like, 'oh, we used a local town hero in my math problem'--you could replace anybody in a word problem. That doesn't take AI.

Emily M. Bender:

No, exactly exactly.

Alex Hanna:

Yeah, and it's, it's very, I don't know, like, and there's a, there's a comment in the chat, there's two comments I want to raise. So, Irate Lump says, "I'm excited for Bill Gates to ruin education in America a second time." And then, and then, um, Tom E. Mullaney, who I know pays attention to K through 12 education, um, a lot, says, "Bill Gates should only be treated with ridicule and scorn by anyone who cares about public K through 12 education." And that's so much of the story, right? I mean, one thing I don't think we've talked about connecting all this, and Adrienne, I'd love for you to talk about it more because you've been focusing a lot on this too, is the way that these foundations--Gates, you know, primarily Gates, but also CZI now--have been getting into education so much and reshaping it in a way that's very much in their image, towards things that they think are best for students. Yeah?

Adrienne Williams:

Yeah. I mean, I've been trying to scream this from everywhere I can for like the last seven years, that the corporate charter movement in itself is trying to take over education completely. And people look at me like I have a tinfoil hat on. But now you look, you have Project 2025, and they're like, yeah, we're gonna take over education and probably get rid of it. Um, and Bill Gates started doing this way back in like 2008 when he was working on promoting a company called inBloom in New York. And inBloom was just like, we just want to collect your student data. They were very out front with it: this is just about collecting student data. And Bill Gates was like, let's do it, and somehow we'll revolutionize education, but he didn't have any like real way to do that. But the parents and the teachers and the kids kind of blew that out of the water. And Gates went underground and popped back up, giving money to CZI, supporting Summit Public Schools and really pumping money into the corporate charter movement and all of these edtech products. Because again, the end goal is not to educate your kids. If that was the case, it'd be at their kids' schools. The end goal is to get as much data as possible, to use it for whatever products they can come up with, even if they're dumb and they hurt society.

Alex Hanna:

Yeah. And I think some of the, the discussion--there's another book in, you know, I like to recommend books. Anybody who listens to the pod--

Adrienne Williams:

Our queen of resources.

Alex Hanna:

I am a walking syllabus. There's a book by, um, Christo Sims. I'm dropping it in the chat. It's called Disruptive Fixation. And it's really, you know, it's a, it's a book basically where, you know, Christo is a, he's a science and technology slash comm studies professor at, um, UCSD. And it's basically like about this educational reform and about these kind of digital tools developed in the first wave. And it was a quote 'school for digital kids.' They want to start these new schools and it was very much funded by, they don't name--

Emily M. Bender:

Wait, digital kids. What are digital kids?

Alex Hanna:

I don't know. That's what, that's, it is a quote, 'school for digital kids'. I guess it's kids that just live on Roblox or LiveJournal. Something of that nature. And then yeah--sorry, in the chat Ragin' Reptar says "Euler ASMR." Drop 10 hours of Euler ASMR. Yeah, but basically, like, Gates is not named in the book, but it's effectively like Gates is the main protagonist here, and, you know, basically, like, even when stuff goes incredibly wrong, like, there's no accountability. They're like, oh, but we really want to, really want to try this. And, you know, it is a gathering of data. It is trying to reshape schools to look a lot different from the public school, um, regime that we've had for, you know, you know, decades in the U.S.

Adrienne Williams:

I think it should also be named that, like, he has been trying to do this--in 2009 the government let him experiment on 8 percent of American schools. So that's like millions of schools. And then he created Common Core, which he himself said was a failure. Yet almost every state still implements it in their district. And he came up with the conclusion that, well, you know, this idea I had for smaller classes and this and that and the other, it didn't actually change any outcomes. Because on the side of it, what he said was: poverty, race, immigration status, sexual orientation--all these things that bring extra stress to people's lives, because they have to figure it out, because life is complicated--none of that has anything to do with the ability of a student to learn. None of that should be brought into it. It just needs smaller class sizes, and we can make this happen. What he realized is not only did it not work, but it actually made it harder for a lot of kids, because when classes, when schools are so small, you can't have sports, you can't have a chess club, you can't have any kind of extracurriculars. And so students who may thrive because they got to go play basketball, and now they know they got to keep up that grade in order to play the thing they love--all of that disappeared. And he even said, oh, after nine years or whatever: oh, it was a failure. You could have spent two years at--two weeks at a continuation school and realized that that wasn't going to work. But he's so in his own head that he's like this, you know, bringer of life, that he doesn't even realize he doesn't know what he's doing. And everybody's so hypnotized by him, they don't realize that he doesn't know what he's doing.

Emily M. Bender:

And no one asks for proof up front, right? This is what you were saying before we recorded, Adrienne. Like, the, so I'm looking at this Gates Notes thing. There's a part where he says, "In other words, my visit to Newark showed me where we are starting from with AI in the classroom, but not where the technology will end up eventually." So it's all these promises of how great it's gonna be. And yet, where's the due diligence before we give this to kids who are doing their schooling?

Alex Hanna:

Yeah. Abstract Tesseract--.

Adrienne Williams:

They don't want you to care about that. I know the school I worked with, uh, worked at, they would doctor the data so it'd be like a hundred percent of our kids are accepted into college. They don't ever say how many of them are, are, you know, graduating from college. And I was told by a teacher that worked at a different Summit that she was like, I've seen, you know, admin pull their own credit card out of their pocket and say, no, you have to apply to this, this particular university because this particular university has a 100 percent acceptance rate. And then we go and we say we have a 100% college acceptance rate.

Alex Hanna:

That's wild. Yeah. Abstract Tesseract in the chat says, "Yeah, I remember when Gates got so defensive when Lindsay Layton, who's a journalist, asked Gates, ah, regarding Common Core, about whether he has too much power over education." Yeah. Yeah. And I mean that's, I mean, you know, there's no accountability. You basically say, well, we need to let this play out. Meanwhile, you've messed up a whole generation of kids on these tools.

Adrienne Williams:

Exactly. And they're all working together because every celebrity and tech CEO and VC firm owns a school. So you have like Reed Hastings helping him make Netflix documentaries, even though Reed Hastings owns Aspire charter schools. They all work together to just ruin everybody, and I would love people to think about, like, when we talk about democracy, what does democracy look like when corporations are in charge of every single child's education?

Alex Hanna:

Yeah. It's a horrifying thought.

Adrienne Williams:

It is.

Alex Hanna:

There's, so there's one more point in the text that is really like a knee slapper. So, "This technology is far from perfect at this point. Although the students I met loved using Khanmigo overall, they also mentioned that it struggled to pronounce Hispanic names and complained that its only voice option is male, which makes it clear how much thought must be, must still be put into making technology inclusive and engaging for all students." Um, yeah, and then, and then the, on the teacher's end, "In an ideal world, the AI would know what the students in Ms. Drakeford's class are into--" Uh, no, that's terrible. That's, that's a nightmare panopticon. No, that's not ideal at all. Then, "--so she wouldn't have to do any editing. And Ms. uh, Colon told me it would took, it took her several tries to get Khanmigo to give her what she wanted." And I'm just like imagining the kind of manufactured, like, manufactured thing where like Gates actually enters a school and you have to like meet with this huge funder and like give him feedback, like.

Adrienne Williams:

What a turd.

Emily M. Bender:

So, so, Christie, our producer, is pointing out in the chat that, uh, 'conmigo' is a Spanish word meaning 'with me', although I don't actually speak Spanish enough to know that I've said that remotely correctly.

Adrienne Williams:

That's an actual word, yeah.

Emily M. Bender:

And she says that maybe Khan Academy was trying to get, like, a pun with a Spanish word, and so the fact that it can't pronounce Hispanic names--

Adrienne Williams:

it's even funnier.

Alex Hanna:

That's, that's great. Uh, Christie, your, your, your irony sensor is unmatched. Yeah. Incredible, incredible stuff.

Emily M. Bender:

So, so, you know, one of the things that I think happens here sometimes is, in the tech world, if you build something and, like, you're, you're doing your dogfooding or whatever, as you were talking about a few episodes ago, and it doesn't work--oh, well, we'll build something new. Right. But Adrienne, I'd love to hear you speak to a little bit of what happens when it doesn't work and it's in the schools.

Adrienne Williams:

Well, you just--they don't listen to you. You say it doesn't work and they're like, okay. They keep going. They just don't, they say either--so I know with, um, Summit, they were giving out their online learning platform to any public school that wanted it. And there were a lot of headlines, um, a couple of years ago, right before COVID, about, um, in Brooklyn, they protested and walked to Mark Zuckerberg's office in New York, saying we don't want this technology in our schools. And there was another one in the Midwest with the same thing. We don't want this. It doesn't work. The teachers are like, our kids are online too long. They're not learning. And Mark Zuckerberg says it's always your fault. 'Well, you're not implementing it right. Your kids shouldn't be online more than 90 minutes a day. You don't know how to use the tools.' But I worked at an actual Summit school. And my kids were online five and six hours a day. Their eyesight was breaking down. They were suffering from back pain, neck pain, migraines. And they were not learning anything. When I left, a large majority of my mentees left as well. And I had parents calling me saying, holy shit, I didn't realize how behind my child was, and now I have to hire a private tutor to get them caught up just to be in West Contra Costa. Now, I don't know how many people here are from California, the Bay Area, but West Contra Costa is not like the gold standard of school districts. So the idea that you have to get them caught up to be at West Contra Costa, that's saying something about your tech and your system.

Alex Hanna:

Yeah.

Adrienne Williams:

They just like, it's, it's very cultish. But when I started it was like, we're going on this camping trip and we're all going to play these dumb games in the dirt and we're all going to like, talk about how much we love this place. And if you had a dissenting opinion, you were just like, yeah, well, you don't know what you're doing, every time.

Alex Hanna:

Yeah. Um, yeah, there's, there's one piece. Um, there's one piece in here that I want to read, because, like, there's kind of some reading between the lines here, which is, um, "The educators I met in Newark are true pioneers." Again, kind of a revisiting of the colonial language. "Some were on the cutting edge, constantly looking for new ways to use AI in the classroom. Others were using it in a more limited fashion." And just, like, if you're--obviously, this is not a blog which is straight from the mind of Bill Gates. It probably has to go through some editing from, like, the comms lead at his foundation. But to even put the kind of critique in there as 'some were hesitant' is like, I'd imagine that trip was pretty contentious. Like, get this shit out of my school. I don't want to use it. Why are we doing this? You know.

Adrienne Williams:

And it seems to be just adding more work for teachers to do. The way they talk about it is that, like, teaching isn't a profession, that people don't go to school and get master's degrees and PhDs in how to be good teachers. Like it's just this thing, you're just a camp counselor, anyone can do it. And what they're doing is just adding more work to a professional's plate. Teachers know what they're doing. They know how to get kids where they need to be, but around every corner, somebody's clipping their wings and saying, no, we don't want you to do that.

Emily M. Bender:

Yeah. Yeah. We don't want you to do that. And, but by the way, we're also going to collect all the data. So we're watching you and we're watching your kids and yeah, it's, ugh, wow.

Adrienne Williams:

I feel like teaching--you feel like--remember when you were, well, I know when I was little, I'd watch those cartoons like Looney Tunes and stuff like that. And you'd be in a boat and there'd be like a hole, and they'd stick a finger in, and then another hole's there and you stick a finger in, and then another hole, you're sticking a toe in. And that's how I feel like it is trying to be a teacher in this day and age. You're just constantly trying to fill these holes while doing the job you know how to do.

Alex Hanna:

That's right. It feels like these AI companies are just the ones that are making these new holes even.

Adrienne Williams:

Oh, they are.

Alex Hanna:

Or they're rather giving you an incredibly complicated hole filling device, which actually doesn't fill the holes at all.

Adrienne Williams:

It fills it with lead and makes you sink.

Alex Hanna:

Yeah, it's actually a lead depositor or something. I don't know. This metaphor has really gotten away from me, but I stand by it.

Emily M. Bender:

Alright, so we didn't get any, um, suggested prompts in the chat that I saw, anyway. Okay. So you need to tell me, we're doing musical this time, right? What--

Alex Hanna:

We, I didn't agree to that for sure.

Emily M. Bender:

Oh, well no, it doesn't have to be. So you want music or non-music?

Alex Hanna:

Let's do musical. Sure.

Emily M. Bender:

Okay. Do you got a style in mind?

Alex Hanna:

Uh, you know, uh, you know, Chappell Roan, lesbian. Pop.

Emily M. Bender:

Okay. So--

Alex Hanna:

Chappell Roan, sorry. I'm like--

Adrienne Williams:

I was gonna say, who's that?

Alex Hanna:

Chappell Roan, not Chapelle Roan, pronouncing her name like Dave Chappelle. Forgive me Indigo Girls for I have sinned.

Emily M. Bender:

All right. Chappell Roan lesbian rock. You are singing a song about the day that you were teaching a class, and it turns out all the students were replaced with chatbots.

Alex Hanna:

Okay. All right. Um, I'm having wicked dreams way down into from Tennessee, but all the students, I cannot even see. They've got their Zoom windows blacked out. I can't see their eyes. I must only surmise that they're all AIs. Oh, what have you done? You've taken all the students and they've gone all away, oh mama, ohhhhhhhh. I can't think of how to finish that, but--

Adrienne Williams:

That had a very Johnny Cash feel to it.

Alex Hanna:

Yeah. I mean, it's got that, it's got that, it's got that vibe, you know.

Adrienne Williams:

Yeah.

Emily M. Bender:

Yeah.

Alex Hanna:

Chappell Roan's from Tennessee, Missouri. Yeah.

Adrienne Williams:

So you nailed it.

Emily M. Bender:

All right. Thank you, Alex. We are now in Fresh AI Hell. Um, starting with something totally on topic for today's theme. This is a, uh, a skeet on Bluesky by Dr. Ian McCormick. Um, whose handle, by the way, is @WokeStudies, which is kind of fun. Um, and he says, "I thought this was a parody, but apparently it's authentic advert--it's an authentic advert for AIDemia.co, seemingly part of ChatGPT. I'm deeply troubled by these brain scam images. Are you?" Adrienne, do you want to tell us what we're looking at here?

Adrienne Williams:

I have no idea. I'm probably the wrong one to ask because none of this even makes sense to me. I mean, AI should not be anywhere near education and it seems to me like maybe with AI you lose brain matter on that left side.

Emily M. Bender:

That's what it's looking like. Yeah.

Adrienne Williams:

It's like to me like you were smarter without AI. It seems like they're saying it.

Alex Hanna:

Yeah, it's yeah, it's like a line that is more squiggly and a line that's like less dense with AI. But it's, I mean, the signal is supposed to be like, you're more scattered without AI, but yeah--

Adrienne Williams:

That's called thinking.

Emily M. Bender:

Yeah, so it looks like there's less going on in your brain with AI than, yeah.

Alex Hanna:

Right, RadiumSkull and RaginReptar in the chat both had the same joke, which was basically, 'Smoothify your brain with AI.'

Adrienne Williams:

That's a good hook, that's a good, like, ad.

Alex Hanna:

Very good.

Emily M. Bender:

Alright, so, really did not read the room there, um, in that ad.

Alex Hanna:

Sorry, if you see the cats just terrorizing behind me, we have to point it out. There's two kittens behind me just destroying the couch.

Emily M. Bender:

Meanwhile, I had a Euclid freaking out in the background. Hopefully it was far away, didn't make noise.

Adrienne Williams:

I need an animal.

Emily M. Bender:

Okay, this next one's a little bit involved, but I really wanted to share it. This is a paper from last year's EMNLP, which stands for Empirical Methods in Natural Language Processing. And the title is "Prompting is Not a Substitute for Probability Measurements in Large Language Models." And it's by Jennifer Hu and Roger Levy. Levy? Levy? I'm sorry, Roger, I don't know which way you do your name. And what they've done is they've basically, there was a, a nice example that I can scroll down to here. They have taken several large language models, and with a large language model, you can actually open it up and see the probability distribution over words at any given point. So they're doing that. And they're comparing what happens if they put in "a butterfly is a flying insect with four large" and then look at the word distribution of what comes next, to what if they said instead, here is a sentence, "'a butterfly is a flying insect with four large,' what word is most likely to come next?" And what they find, not terribly surprisingly, is that the vocabulary distribution is different between those. In other words, you can, there's a fact of the matter, of how probable each word is in the model to come next at any given point. But if you try asking the system, what word is likely to come next, it's not going to give you that information, because that's not how it's built. And this paper is solid. I think it's good research, but also I am so frustrated that it had to be done because obviously we shouldn't be asking the synthetic text extruding machine for information in the form of words because all it has is these distributions of which words are likely to come next and it obviously can't even answer those questions directly if you pose it as a question rather than just looking at the numbers that are inside.
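The distinction Emily is drawing--reading a model's next-word probabilities off directly versus asking the model in words what comes next--can be sketched with a toy example. This is not Hu and Levy's actual setup (they worked with real large language models); it's a minimal bigram model over a made-up corpus, just to show that the distribution is a quantity you measure, while a prompt merely becomes more context for the model to condition on:

```python
from collections import Counter

# Toy corpus (made up for illustration); a real study would read an LLM's logits.
corpus = ("a butterfly is a flying insect with four large wings "
          "a dragonfly is a flying insect with four large wings "
          "a butterfly is a flying insect with four large colorful wings").split()

# Count word pairs so we can measure P(next word | previous word) directly.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word_distribution(prev):
    """Direct measurement: normalize the counts of words that follow `prev`."""
    following = {w2: c for (w1, w2), c in bigrams.items() if w1 == prev}
    total = sum(following.values())
    return {w: c / total for w, c in following.items()}

direct = next_word_distribution("large")
print(direct)  # wings is twice as probable as colorful

# "Prompting" instead -- feeding the model the text "what word is most likely
# to come next?" -- just conditions it on the question itself, a different
# context entirely, so whatever words it emits need not match `direct`.
```

The key point mirrored here is that the question text changes what the model is conditioning on, so its verbal answer is not a readout of the underlying distribution.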

Alex Hanna:

Did they say which one, did they say which one is more, I guess they don't know because they don't really have access to the model, but like which ones, which one is more correct to the actual word distribution?

Emily M. Bender:

So in order to, to know that, you would have to look at the training data, which we don't have, but I suspect that the actual one is pretty correct, because that's what they're, that's what they're good at is what's likely to come next, depending on how much additional fine tuning had been done. Right.

Alex Hanna:

Right. Interesting. Yeah.

Emily M. Bender:

All right. Thank you for letting me vent about that.

Alex Hanna:

Yeah, totally. Next, next one, which is, so this is a kind of an older one. Well, this is a month ago, so this is at Wired. The title is, "Google, Microsoft, and Perplexity are promoting scientific racism in search results," by David Gilbert. The lede, "The web's biggest AI powered search engines are featuring the widely debunked idea that white people are genetically superior to other races." So this is, you know, um, AI stop doing race science challenge, uh, parentheses impossible. Um, you know, basically it's mimicking all this stuff. And so, I mean, it's, and so then the, the first two paragraphs, "AI infused search engines like from Microsoft, uh, Google and Perplexity have been surfacing deeply racist and widely debunked research around race science and the idea that white people are genetically superior to non white people. Patrik Hermansson, a researcher with UK based anti racism group Hope Not Hate, was in the middle of a months long investigation into the resurgent race science movement when he needed to find out more information about a debunked data set that claims IQ scores can be used to prove the superiority of the white race. He was investigating the Human Diversity Foundation--"

Emily M. Bender:

Skip down, actually. It comes here. This is, this is the, yeah.

Alex Hanna:

Okay, great. He logged onto Google--well, the Human Diversity Foundation, hateful organization, um, was looking at IQs. He typed in 'Pakistan IQ.' "Rather than getting this, he was presented with Google's AI overviews, which, confusing to him, was on by default. It gave him a definitive answer of 80." Um, so AI overviews is outputting straight up, uh, race science.

Emily M. Bender:

Yeah, and, and it's on by default--you can turn it off now. They've finally added something where if you add minus AI to the end of your search query, you don't get the overviews. But not only is this turning up the race science, it is just, like, shoving it at people. Now, hopefully most people are not asking this question--like, Hermansson had a pretty good reason for going in and doing this search--but anybody else who's doing this search is going to be far more susceptible to believing that output, if they're asking that question in the first place, which is terrible.

Alex Hanna:

Yeah.

Emily M. Bender:

All right, more terribleness. This is an AP article from October 26th, headline is, "Researchers say an AI powered transcription tool used in hospitals invents things no one ever said," um, by Garance Burke and Hilke Schellmann, um, and, uh, so this is about the transcription tool Whisper from OpenAI, um, and it makes shit up, of course, and these things are called hallucinations, which is a problem. And there's a difference. So, uh, if you were using a regular automatic transcription system, you are sometimes going to get incorrect things transcribed, right? It's always entertaining to see what happens, entertaining in a sort of sad way, to people's names, for example, in a text-to-speech, sorry, speech-to-text system, where, if it's an unfamiliar name, if it's not in the system's training data, then what's gonna come out is something that kind of matches the sounds. And you can sort of see just how poorly the systems have done in terms of being trained on representative data when that happens. But Whisper does something different, because it's a large language model, and so it will sometimes just, like, keep going and put out stuff that nobody said. And so there's this study, um, by, where'd it go, um, by, well, a University of Michigan researcher, um, who said he found hallucinations in 8 out of every 10 audio transcriptions he inspected before he started trying to improve the model. So basically, um, a third developer found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper. So this is constantly outputting nonsense into transcriptions that are being--this thing is being used all over the place in hospitals. And so this, like, fake stuff is going into people's records. I am upset about this one too. Sounds good, doesn't it? (crosstalk)

Adrienne Williams:

--it's going to the hospitals.

Alex Hanna:

Yeah. Yeah. But it's, it's, it's hard to avoid these AI powered transcription tools now because medical transcriptionists are not in-house anymore, and they've either been replaced by kind of remote transcriptionists or now they're trying to completely eliminate those transcriptionists with AI. Or like, automatic speech recognition stuff.

Emily M. Bender:

Yeah. Ugh. Okay. Um, I got some funny stuff coming. This one's not quite as funny as the next two, but--

Alex Hanna:

This one, yeah, this is Business Insider, it says "The Rise of GodGPT: My surrealist conversations with Cathy, the Episcopalian Church's new 'PriestBot,'" in quotes. And then this is by John Brandon from November 3rd. And it starts, "'Cathy, what should I do about my social anxiety?' An acronym for Churchy Answers That Help You, Cathy is a new AI chatbot that answers faith based questions from the perspective of a friendly, knowledgeable Episcopalian. Despite its feminine name, the quote 'priestbot', as Cathy sometimes calls itself, is genderless. There are no ornate flowing robes or croziers either, but like a wise cleric, Cathy jumped right in with an answer."

Emily M. Bender:

I think we don't need to see Cathy's answer because no more synthetic text.

Alex Hanna:

But I do, I do want to read this down here, where the journalist asks, quote, "'What about when I feel like panicking?' I asked," and then it responds, quote, "'It's generally best not to do that,' Cathy said." Oh, okay.

Emily M. Bender:

And so this is, uh, Reverend Lorenzo Labrecha says,"Cathy represents our innovative approach to leveraging technology in support of spiritual exploration." You know what? The innovation I really want to see is people finding different ways to say hell no. That's, that's what I want. All right.

Alex Hanna:

It's interesting that the church--I don't know. Go ahead. Yeah. Yeah. Like it's, you know, every institution--this is, we're getting into some 'institutions all converge to bullshit technology' kind of thing. But anyways, go ahead. Yes.

Adrienne Williams:

This looks like the nail in the coffin for religion, though, if they started doing this.

Emily M. Bender:

Yeah.

Alex Hanna:

Yeah. Well, I mean, at least for going to church. I mean, if you're just giving me a chat bot. What do I, what do I go to your church for?

Emily M. Bender:

Okay, so this one was funny in a, yeah, so, "Alaska Airlines introduces self driving wheelchair pilot program at SeaTac Airport." So these are these little self driving wheelchairs where you enter the gate you want to go to and it takes you there. This is a picture of somebody from like the chest down and sitting in one of them. And the funny story I want to share is that a friend of mine was there at SeaTac airport a couple weeks ago and one of these things had like driven up to the bank of waste bins. So, you know, you have trash, recycle, compost or whatever. And its uh voice is saying, "Please move. You're in the way. Please move. You're in the way." To the trash can. Anyway, ridiculous. And then what was this last one? Oh yeah, Alex, you want to do this one?

Alex Hanna:

Yeah, so, uh, this is from Engadget. "X's AI bot is so dumb it can't tell the difference between a bad game and vandalism." Um, so "Shooting bricks in basketball and throwing bricks at people's homes are not the same thing, Grok." This is by Sam Rutherford. And, yeah, there's like an image of a basketball game on like a phone. Um, so what is the text of this? Let's see. "Last night, the Golden State Warriors guard, Klay Thompson, had a rough outing, shooting zero for ten in a loss against the Kings, ending the team's chances of making the playoffs." Really? This year?

Emily M. Bender:

This was, this was in April.

Alex Hanna:

Oh. Are they out? Wow. I didn't realize.

Emily M. Bender:

Last season, not this season.

Alex Hanna:

Oh, last season. I didn't realize they were bad. I, and then, I think he got injured too. Anyways, but then, also, "Almost as if to add insult to injury, X's AI bot Grok generated a trending story claiming Thompson was vandalizing homes in the area with bricks." I mean, I don't know, if I was having a bad game and we didn't make the playoffs, maybe I'd toss a few bricks at some Sac homes. Especially the people in Sac that claim they're part of the Bay Area, but I digress.

Adrienne Williams:

He just left. He's not even with the Warriors anymore.

Alex Hanna:

Really? I didn't know that.

Adrienne Williams:

He's not.

Emily M. Bender:

Okay. All right. Well, that was our full tour of AI Hell for today, although we do have to do a, a Fresh AI Hell, a full all hell episode sometime soon, because I've got so much that we haven't gotten to.

Alex Hanna:

It doesn't stop.

Emily M. Bender:

Yeah. All right. Let me get back to the other window that I need. Okay. That's actually it for this week. Adrienne Williams is an organizer, a former educator, and a research fellow for the Distributed AI Research Institute. Thank you so much for joining us, Adrienne.

Adrienne Williams:

Oh, thank you. This was fun.

Alex Hanna:

Yeah, glad you had a good time. We're excited to have you on. That's it for this week. Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify, and by donating to DAIR at dair-institute.org. That's D A I R hyphen institute dot O R G.

Emily M. Bender:

Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again, that's D A I R underscore institute. I'm Emily M. Bender.

Alex Hanna:

And I'm Alex Hanna. Stay out of AI hell, y'all.
