Mystery AI Hype Theater 3000

Episode 17: Back to School with AI Hype in Education (feat. Haley Lepp), September 22 2023

Emily M. Bender and Alex Hanna with Haley Lepp

Emily and Alex are joined by Stanford PhD student Haley Lepp to examine the increasing hype around LLMs in education spaces - whether they're pitched as ways to reduce teacher workloads, increase accessibility, or simply "democratize learning and knowing" in the Global South. Plus a double dose of devaluing educator expertise and fatalism about the 'inevitability' of LLMs in the classroom.

Haley Lepp is a Ph.D. student in the Stanford University Graduate School of Education. She draws on critical data studies, computational social science, and qualitative methods to understand the rise of language technologies and their use for educational purposes. Haley has worked in many roles in the education technology sector, including curriculum design and NLP engineering. She holds an M.S. in Computational Linguistics from the University of Washington and B.S. in Science, Technology, and International Affairs from Georgetown University.

References:

University of Michigan debuts 'customized AI services'
Al Jazeera: An AI classroom revolution is coming
California Teachers Association: The Future of Education?

Politico: AI is not just for cheating

Extra credit: "Teaching Machines: The History of Personalized Learning" by Audrey Watters

Fresh AI Hell:

AI generated travel article for Ottawa -- visit the food bank! 

Microsoft Copilot is “usefully wrong”
* Response from Jeff Doctor

“Ethical” production of “AI girlfriends”

Withdrawn AI-written preprint on millipedes resurfaces, causing alarm among myriapodological community

New York Times: How to Tell if Your A.I. Is Conscious
* Response from VentureBeat: Today's AI is alchemy.

EU on the doomerism


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

ALEX HANNA: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 

EMILY M. BENDER: Along the way we learn to always read the footnotes. And each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington. 

ALEX HANNA: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 17, which we're recording on September 22, 2023. And it's back  to school season, but the AI hype machine never seems to learn. So we're going to tackle some of the recent very overblown claims about AI in the classroom. From the ways in which institutions are deploying generative tools as potential additions to the classroom, to the suggestion that mathy-maths might be able to replace the classroom entirely. 

EMILY M. BENDER: And here to help us today is Haley Lepp. She's a PhD student at Stanford University's Graduate School of Education. Her work draws on critical data studies, computational social science, and qualitative methods to understand the rise of language technologies and their use in education. 

She has valuable experience from many roles in the education technology sector, including curriculum design and NLP engineering. Welcome Haley!  

HALEY LEPP: Thanks, great to be here. 

EMILY M. BENDER: We're so glad to have you join us. Um we've got some terrible hype to work through today. Let me share that and um get our first specimen up. Okay the first thing comes from the University of Michigan. This is the University Record is the name of the publication, um dated August 21st 2023. Very proudly announcing um, "ITS debuts custom artificial intelligence services across U-M." And I'm just noticing that this headline spells that U-dash-M. I guess otherwise people would read it as 'um.' [Laughter]

ALEX HANNA: True, yeah. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: University style guides are very detailed. 

EMILY M. BENDER: Right we have to remember that it's The Ohio State University, for example, the "the" being part of it.  

Um okay-- 

ALEX HANNA: I will never not troll Ohio State because that's where both of my sisters went. 

EMILY M. BENDER: Okay. "As it prepares to start the 2023 fall semester, the University of Michigan has launched a suite of custom tools for working with generative artificial intelligence that emphasize equity, accessibility, and privacy."  

Interesting claims there. They do substantiate the accessibility later on down in this. Privacy I remain quite skeptical of um but I'm going to skim a little bit ahead. "The university is initially launching three AI services -- U-M GPT, U-M Maizey, and U-M GPT Toolkit -- which will be available across the Ann Arbor, Flint, Dearborn and Michigan Medicine campuses. Generative AI has the potential to be one of the most disruptive technological advances in history."

We're talking about potential, we're talking about disruption, that is par for the course. "While many higher ed institutions are offering AI education programs, U-M is believed to be the first major university to offer a custom AI platform for its entire community." 

So this is like claiming to be out in front.  

Um and so. Ravi Pendse, VP for IT and Chief Information Officer, says, "'We passionately believe that everyone at U-M should have access to the most powerful technology tools available. At the core of our GenAI services is the commitment to provide tools and technologies that enhance, support and augment humanity.'" Thoughts?

ALEX HANNA: This is this is such a this is such a funny thing I mean in here. Uh I mean in addition to the claims that they make about the kind of uh responsibility--and I mean there's there's a whole kind of discourse around the use of the word 'responsible' and what that means if anything, it seems to be an empty signifier--but the way in which um this this CIO is saying we're going to support and augment humanity through these tools.

And so it's you know I mean I don't know how to situate this uh this um newspaper article  amongst the sort of discussions uh within that happened at U of M but certainly there seemed  to be some very enthusiastic players to get all these tools off the ground. Um--

HALEY LEPP: Yeah I feel like these verbs too, you see them pop up a lot um in the description of AI applications for education. So 'enhance,' 'support,' and 'augment.' It's it's sort of like defensive to say we're not going to say 'replace,' we're not going to say you know eliminate humans. It's like very--these these verbs show up a lot in a way that sort of like it again it comes off as almost defensive. 

We're not going to try and you know disrupt anything too significantly, but also we're going to be very disruptive. 

EMILY M. BENDER: Yeah. And and it also echoes the TESCREAL ideology right so um you know this is making better humans. We are not supporting students learning, but we are augmenting humanity or enhancing humanity. It's a very strange perspective to be taking I think.

HALEY LEPP: And you can see that in the the other tab here which we'll talk about in a second but um specifically around teachers. It will enhance teachers. 

EMILY M. BENDER: Yeah. All right so-- 

ALEX HANNA: Love to see cyborg teachers, yeah. I I do want to point out I mean there's some great pieces here, I mean they talk here about the--before we get to the fun the real kicker on this one uh, they they talk about the data classification uh which is called--and they say, "The AI platform has been approved for use of data classified as quote 'moderate sensitive' um at this time. Use for data above a moderate-sensitivity classification is not allowed. For example this currently prohibits the use of protected health information within AI tools."  

Um I'm kind of curious on what the moderate sensitivity is, if that is um kind of a flag that that would allow FERPA compliance, um but it's it's an odd kind of thing in and of itself. They also don't say anything in this article to--in in my first reading on this--about how they're doing any kind of protection of feeding sensitive data into ChatGPT. Because at least one of the tools is basically just a kind of a layer on top of it. Um so yeah. Questionable. 

EMILY M. BENDER: Yeah. So they they say um uh, "The ITS AI Services also have been designed and tested for accessibility--" Good. "For example, U-M's GPT service works seamlessly with screen readers, which do not work when using ChatGPT." All right, good job. You thought about accessibility. And then, in the same paragraph, which is apparently about accessibility, they say, "And all data shared with U-M's AI Services is private and will not be used to train AI models." But then as you say Alex it is um, they say, "The first new service, U-M GPT, lets users engage with popular GenAI models like ChatGPT and other large language models." 

Which means sending data across. So yeah you know, "will not be used to train AI models," okay maybe they've got an agreement with OpenAI that none of this becomes training data, but  they're not *not* transmitting it, because otherwise you know what's this doing? 

ALEX HANNA: You're not doing it and they're not hosting any of the ChatGPT or GPT-4 locally, or at least that's the assumption.  

EMILY M. BENDER: Yeah um no of course not. OpenAI is not gonna let anybody host that locally. 

ALEX HANNA: No. 

EMILY M. BENDER: That's that's not a thing. Um. 

HALEY LEPP: I think another contrast with OpenAI too is that they kind of highlight, okay it's going to be provided at no cost to the community to celebrate the launch, but then three  sentences later, "will be available at no cost until September 30th." So questioning what is--how does you know cost play in and also too accessibility, which I think is something people are wondering about.  

EMILY M. BENDER: Yeah. Right. So, "and will be available at no cost until September 30th, with usage limits based on capacity." So right, they're going to have people build curriculum around this and then like charge some unknown amount after that? Like this is-- 

ALEX HANNA: Right. 

EMILY M. BENDER: It's bad. 

HALEY LEPP: It goes more into this and I think one par--yeah the next paragraph as well. 

There's a toolkit, "will come with a fee structure." Um so depending on how that's going to be used. 

EMILY M. BENDER: Right um, and then the real kicker here, "The platform, which has been in beta release for several weeks, is already attracting positive attention from the U-M community. 'I'm already a fan. I'm working on tracking down citations for my book and I turned to U-M GPT and got the info in seconds,' said Anne Gere, Arthur F. Thurnau Professor, Gertrude Buck Collegiate Professor of Education and professor of education in the Marsal Family School of Education," whew, "and professor of English language and literature in LSA." So this person with all these accolades does not understand that GPT is not an information source. Right? Like.

ALEX HANNA: Yeah. 

EMILY M. BENDER: Is she then checking and finding that these things are all made up? Or what? 

ALEX HANNA: Right and if if y'all have have have forgotten, we have our episode I forgot the number but the episode on Galactica, um Meta's uh science-like extruder machine um that you know was used to structure papers and was taken down after three days. Um famously citation-making-up-machine uh alongside ChatGPT.

Um the the one I think I want to also say that's also the kicker here is this um this sentence, this quote from Robert Jones who's the ITS executive director of Support Services who says, "Generative AI is here to stay," and led the development team for this AI platform.  

Um so it really shows this kind of fatalism that I think has infected a lot of university  administrators. That this is a thing we have to deal with, and their way of dealing with it has  been to swing entirely in the other dimension, and supporting a whole infrastructure for it,  which is um unfortunate but is also finding its ways in so many different areas of education and  education administration. 

EMILY M. BENDER: Yeah. Haley are you seeing that kind of fatalism about this around you? 

HALEY LEPP: Yes, I am definitely this um, 'are we going to encourage it and engage with it because it's coming, it's inevitable.' Um and sort of inevitability is framed as um also often an excuse. 'Maybe maybe this doesn't sit well with our team but it's inevitable anyway so we're gonna keep going.'  

Um so I think that that also seems to motivate research. Um I wanted to bring up another thing I thought was interesting, was just the name of the bot um 'Maizey.' Um Stanford also has just released a bot called 'Cardi,' um and I think that this again these feminized names are something that we have you know known and are well documented in in these bots but um kind of looking at the roles.

Not just feminized but also you know names, names that might be more common among people of color, um there's something in these sort of service style roles that that we're seeing replicated.  

EMILY M. BENDER: Yeah. 

ALEX HANNA: Yeah that's a great point that's been discussed a lot with uh Siri and Cortana and  um all the other ones. 

EMILY M. BENDER: It's the one thing that Google got right by naming their thing Google  Assistant, which is completely ungendered. Like and you get you get a few of them that are  um gendered as masculine but that tends to be because they're aiming to be very knowledgeable, like there's no there's no neutral way to do this, like it needs to be an 'it' if it's going to be at all accurate. 

HALEY LEPP: That's actually really interesting I have seen math tutors, automated math tutors that are that are masculine--masculinized? I don't know the right word--but I mean that that fits with that pattern. 

EMILY M. BENDER: Yeah. All right should we get on to the next one? 

ALEX HANNA: Yeah let's uh let's move on to this Al Jazeera article, yeah. 

EMILY M. BENDER: Okay. 

ALEX HANNA: So so this one's an Al Jazeera article uh and it's a it's an opinion article written by Momo Bertrand, who's an education specialist for the World Bank. The title is, "AI won't replace teachers -- but a classroom revolution is coming." With the subtitle, "Teachers must remain in charge but for that they too will need to evolve. Here's how." And so I mean it's pretty indicative that this author is from the World Bank, and is an education specialist um focusing especially on, I'm assuming, on developing countries. Um it starts off beneficial, starts off positive, in which he says, "I recently asked Bard, Google's conversational chatbot, whether artificial intelligence will replace teachers. Here's what it said. Quote, 'It's--it is unlikely that AI will completely replace teachers in the near future,' unquote. I agreed."

EMILY M. BENDER: You said this is positive? We're starting off with some synthetic text. 

ALEX HANNA: It is--it's positive insofar as he is saying that these things are not going to replace teachers. But yes it's bad that it started off by the lazy move of saying, 'What does the machine think about this?' But as far as it's saying it's not going to completely replace teachers or not going to replace teachers, I'll at least give him that. 

EMILY M. BENDER: Yeah. Although he could have said that in his own voice, instead of  pretending that Bard somehow knew something. 

ALEX HANNA: Truly, truly. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: I'm scrolling down, scrolling down scrolling down, he says, "During a poetry night, I I remember joking with a friend that it takes a broken heart to nurture and heal another heart. I added, quote, 'Until AI experiences heartbreak, we must trust human teachers to nurture the hearts and minds of the next generation,' end quote." And this is hilarious because it reminds me of the WGA strike signs in which you know writers are holding up signs like, 'Computers can't experience heartbreak,' or, 'They haven't experienced childhood trauma so they can't write from a place of actual empathy.'

Which yes I mean these things are not going to experience heartbreak, math is not heartbroken, maybe unless I don't know it's an easy problem it feels like and I don't know I'm trying to extend the metaphor very very very very--but in any case, yeah I mean but this whole thing and then--here's the turn here, "Yet--" And he he writes, "Yet, it's hard to ignore the growing questions and concerns emerging from uh and about the teaching community on the impact of AI on their jobs, their classrooms, and their very vocation."

EMILY M. BENDER: Oh so that's good. Yeah, go ahead. 

HALEY LEPP: --and dig into gender again here too, which is you know what is the role of the teacher, what are the roles that we think we can automate um or engage with AI with, and what are the roles that really need to be human? And you know in in a space which is you know traditionally feminized, right, what is the role that remains that's nurturing um hearts and minds? 

Um and I think that you know the the more sort of um traditional forms of teacher expertise are then seen as as something that that can be removed from that vocation. So I do think that it's really hard to look at this without trying to dig into to the gender spot of this. 

ALEX HANNA: Absolutely. 

HALEY LEPP: Yeah. 

EMILY M. BENDER: Yeah. Whew. Okay, so I just want to--Abstract Tesseract is being hilarious in the chat. "Negative exponents, why are you so negative? Who hurt you?" Um so thank you for that, um.

ALEX HANNA: There's there's certainly a math joke in here about I don't know squaring negative number and having your feelings be imaginary. I don't know, somebody workshop that in the chat. 

EMILY M. BENDER: Yeah yeah exactly or that's the that's the like self-help book for the mathy-maths, has to do with like turning your frown upside down is you know, use the the handy square function to turn your negatives positive.

ALEX HANNA: Right right. 

EMILY M. BENDER: Okay we we have better things to do than workshopping math comedy. Um okay. So, "Governments, foundations and corporations have channeled billions of dollars to research, develop and deploy AI systems in recent years--"

True statement. "--which broadly speaking can perform intelligent tasks normally associated with humans." Um so what's an intelligent task? Um and um this like definition of AI as a system that does something that we normally think humans can do, I mean there's worse definitions out there I suppose but I don't like 'intelligent tasks' as a phrase. 

And I also don't like sort of how it's framing this for like, 'well so it can do the kinds of tasks that human teachers do.' Um. 

HALEY LEPP: I love too like just a couple sentences further, "At the moment, AI still lags behind humans in most disciplines." So just like a total contradiction of the first sentence, right after. 

EMILY M. BENDER: Yeah, and, "Especially complex tasks that require a blend of technical competencies and socio-emotional skills." Which is like most things that people do, right? 

ALEX HANNA: Right, um right I mean that's sort of--both of the sort of proficiencies of even if you know being a, for instance a software engineer. Which is part technical but mostly social socio-emotional right? 

EMILY M. BENDER: Right and as Ethnomusicologist is pointing out in the chat, this 'at the moment' in that sentence is also a problem. Right it sort of it suggests that like the um we're on this trajectory where you know, it's here to stay, it's going to get better um, but just right now the position we're at is--and it's like well no we don't have to imagine that trajectory.  

We have no evidence that it's real. 

ALEX HANNA: Yeah. So the further down is I think the real kind of thrust of the article, which I think we've we've gotten about before, but it's about this idea of kind of the AIs for the poors, right. In which the sentence that starts with, "However AI is forcing us to reimagine education as a vehicle for democratizing thinking and knowing--" Uh which already, democratizing thinking and knowing! I didn't know that I didn't know that the masses weren't allowed to think and know.  

Um. "--there's no denying that about 40 percent of the world's population is under 21--uh sorry 24. If schools fail to prepare this generation of youth for the age of thinking machines, the consequences on social and economic peace may be dire." So there's a lot going on in this sentence, and as my people like to say we should unpack that. And so I mean thinking about one, 'preparing the this generation of youth for the age of thinking machines,' which is I mean an interesting turn of phrase and makes me think a lot about--again I think I was talking about this a little bit yesterday in this event that Emily and I did--this kind of old uh turns of phrase used even in the 50s of thinking machines, um and this kind of language to be giving autonomy to the machines rather than you know the pile of of linear algebra that they are.

But then the second part of this is turning this then from kind of a story about development uh and education for education's sake, to social and economic peace 'may be dire' and that turns this then into a piece about, well you're going to have a bunch of uneducated masses in these developing countries and if we don't use AI they're going to riot and you're going to have coups. 

And so this then turns this into questions about state stability uh interstate conflict, and it really is adopting--I mean this is this could be taken straight away from a page of of Kissinger and and and Huttenlocher's book about the age of AI, even even using this kind of 'age of thinking machines' as language, right. 

HALEY LEPP: I think also if for this whole thinking machine discussion there's a great book by Audrey Watters called “Teaching Machines,” um but if for folks interested specifically in this application, the history of teaching machines in education um definitely worth a look. 

EMILY M. BENDER: Thank you for that, that will go in the show notes for sure. Um okay. "AI has  the potential to underpin positive transformation in education." Once again we have potential um and then okay reasonable application here, I think. "For instance--" Uh we don't call it AI-powered but, "AI-powered computer vision and voice to text apps can significantly boost school accessibility for learners with visual and hearing impairments." 

Um you know I would like to see that claim tested by actual users so people who are blind or low vision or deaf and hard of hearing making use of automated systems for, you know, image to spoken text or audio to text, like is it good enough?  

How does it fit, how does it work? Um but that seems like a reasonable thing to do and it's got nothing to do with the other kind of so-called AI they're talking about in here. Um so. 

HALEY LEPP: I think there's that that accessibility word again too. This this idea of in the name of you know supporting people who are not currently supported by systems, you know we're going to substitute this this technology.  

And I think that that that has been I think a big driver of of the hype that I've seen in education,  um this this sort of well it's in the it's in the name of accessibility, um you know we're not necessarily going to improve institutions otherwise but this is this is something that would that would help. 

EMILY M. BENDER: Yeah. But, same paragraph, "AI can also reduce teacher workloads, especially in environments where teachers' capacity and head count are low." Um so you know this is a situation where we can't possibly provide enough teachers, so um we're going to overwork them and that's going to come up in the next one too, which we need to make sure to get time to. Um so we're going to reduce their workload by letting the AI do it for them. 

But, next sentence, "However, human educators must remain central to teaching and learning." 

HALEY LEPP: Yeah there's that defensiveness. I think also in a space again it's traditionally public, that's that's funded by you know taxpayer dollars, public public money um you might speculate that in the name of of you know, we we do something in the name of reducing teacher workloads because that could be then justified to say we need fewer teachers.  

Um and so I think that that's something you see a lot of 'reducing teacher workloads' in in current AI for education research. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: I would also I would also say that this is sort of I mean given who the author is and who this is being pitched to, I mean as a World Bank consultant, I mean this is being pitched to probably a set of stakeholders who are probably in the NGO space or other funders  that they're going to get. I mean I'm thinking especially in education and developing world, uh  like Gates and Chan Zuckerberg are pretty huge figures right and they have been people I mean who have been really pursuing this I mean I think I don't know if we talked about it on  the on the podcast, but there was the kind of dual letters that Bill Gates had put out about  AI and LLM technology where he was unabashedly just kind of looking for ways to introduce  um LLMs into health and education in the way that basically especially for these kinds of things where you have pretty high teacher to student ratio um and so, kind of salivating uh potential for introducing these tools into the classroom, right.  

And so I think if you're pitching this to those stakeholders, then that's going to introduce a very dangerous kind of cover um to to accelerate the adoption.

EMILY M. BENDER: Yep. Um okay. "On the flip side, the technology also has a high potential for harm. Generative AI could help students cheat in exams." It really bothers me that that's the first harm that comes up, and it's something that I see over and over again. Like that's the thing we're worried about?   

Um he doesn't stop there though. "Moreover, AI chatbots often throw up results that are sexist, racist, and factually incorrect." Oh, right it's a synthetic text extruding machine, it's not reliable. "So what should teachers do?" 

HALEY LEPP: So it's the teacher's problem. 

EMILY M. BENDER: [Laughter] Yeah.

ALEX HANNA: Yeah, completely. And I wanna I wanna I wanna come back to because I when when ChatGPT came out uh there was such a discourse on Twitter about this and there was some kind of a tweet that someone from kind of an abolitionist politic uh that uh put up, and it was something of the nature of 'if you object to--' and and just to be fair I agree with this abolitionist politic, that I agree on on most accounts there, but the kind of tweet by this person was, 'the only people who are objecting to LLMs in um in the classroom like are doing it from like a carceral frame or thinking about kind of policing of students in the classroom.' 

And I'm saying well if you buy the kind of anti-cheating frame then and that's your only concern, then yeah I guess that's where you're going. But there's so much more that's so much more critical to focus on. If you're not thinking about the labor dimension of this, about how these things are going to be used to completely further gut public education, if you're thinking about they make up shit, if you're thinking about all the myriad ways in which these are a negative boon for education and pedagogy as an enterprise, then yeah I mean then yeah then you're just thinking about this form of carceral frame. 

But to say that is it completely just you know just completely off off um off color, out of pocket.  

EMILY M. BENDER: Yeah. Woof. Okay. "Prepare students to ask better questions." This is a header I guess in response to what should teachers do. "A young university official in Cameroon recently told me that he and his colleagues were quote 'trying to see how our classes will prepare students for technology and AI' end quote. Going forward more teachers and education officials will have to think in this way. On the surface this requires reviewing curricula, syllabi and teacher professional development programs, and incorporating objectives and content on AI literacy, risks, ethics, and skills among other things." 

Um so like, yes on the one hand I agree that just like we needed to up our sort of digital literacy to understand how to navigate the web and how to make sense of information that we find on the web, we need literacy around what does synthetic text mean, and how do we deal with  the fact that it looks plausible and is ungrounded. Um but I don't think that's what they're talking  about here. 

ALEX HANNA: Yeah, and furthermore it's it's you're you're adding a bunch of work to already  stressed out and overburdened teachers, right? Okay now you have to suss out this other thing and deal with all this other kind of content when you know we've already been massively behind and we're thinking about this in the in the kind of global majority context again, you've already had like all these kinds of things. 

Like the introduction of the web, the introduction of mobile phones, these other things you have to negotiate in the classroom, and now there's this other kind of thing coming from the West that folks have to deal with and negotiate as part of the classroom environment.

HALEY LEPP: Good thing that they're going to have automated tutors to help ease the workloads. 

ALEX HANNA: It's going to be amazing. It's gonna solve--AI literacy performed by AIs. 

EMILY M. BENDER: Yes. So. "At a deeper level," continuing to read here, "as machines become better at answering questions, educators should guide students to ask better questions. This will go beyond writing good prompts for conversational AI. Today's schools should inspire students to be curious, as this is an essential ingredient to conducting primary research, including in frontier areas where humans have an edge over AI." It's just like um so yes, you know working on critical thinking is really important, and framing questions and figuring out how you would go about finding an answer--these are really important skills. But framing it in terms of a competition between humans and AI is just really off-putting, at the very least. The--it's yes.

And Abstract Tesseract in the in the chat, "'As' doing a lot of heavy lifting, 'and as machines become better.'"  

ALEX HANNA: Yeah. Let's get down to the this this the--the kind of final what is this I think three of three of six on this. This is now getting to the kind of social political the the, 'help avoid echo chambers' part of this. Um--yeah yeah so this reads, "AI is almost certain to worsen the problem of misinformation." Yes. "Very soon anyone with an internet connection will be able to produce solid arguments--" I don't know about that. "--on any subject simply by inputting a prompt into an AI platform.

Echo chambers could grow exponentially if we don't train today's young people to find common ground and hold peaceful conversations with people they don't get along with." Interesting um I mean yes. And I want to read the rest of this because it's just it's a it's a doozy. "Short of action, AI may feed the flames of extremism and polarization. 

Tackling the most pressing challenges of our time -- climate change, pandemics, migration -- will require unprecedented levels of collaboration at the global, regional and national levels. While AI will unlock new possibilities to analyze, organize and process information necessary to fix these issues, this potential will be useless if we can't talk to each other. That is why teachers--teaching learners the ability to find common ground is so important." 

This is just an amazing just this this uh these this is just bouncing around everywhere, right. I mean so so if I was going to reduce this, 'AI could beget misinformation but young people should talk to each other and we need to collaborate. AI will be a good thing and yet it'll be useless if they don't talk to each other. And we need to find common ground.' And I'm guessing AI is also going to be the solution here for people to talk to each other.  

I'm it's--this is such a bizarre claim that this kind of you know like, AI is going to lead to more conflict and it's also going to solve the conflict. And the kind of claim just a completely unsubstantiated claim, right. I mean we're already seeing applications of AI in climate in climate change, in pandemics, in migration. All negative, right?  

Um basically the way of of exacerbating climate change, uh fueling COVID misinformation and other pandemic misinformation, and especially in migration. I mean we've already seen there's some great work by um--the organization uh uh escapes me right now but I think it's the um Crisis Translation Project--I'm gonna I'm gonna drop the name uh in a second once I once I look it up. But they've shown basically how use of AI tools in translation have been used to prevent asylum claims, so basically you have people in countries of refuge um that are denying claims because they're basically using these low re--they're using AI tools and machine translation tools to do really shoddy translations for asylum applications.  

You're also seeing these situations in which digital identity is being verified by these different tools, like facial recognition tools, and denying people passage into into countries um that that um um or or--or um humanitarian kind of camps and so and and this has been uh really well documented by uh by Amnesty and and in other organizations. And so these kinds of claims that these things are going to foster and encourage cross-cultural collaboration just boggle the mind, completely unsubstantiated claims. 

EMILY M. BENDER: Oof. Yes. All right let's see if there's anything else we want to hit in this one before we get on to the next one, which is also pretty wild. Um just want to say I do appreciate that this was written I think from a Global South perspective, and so the issues are climate change, pandemics, and *migration*--which is a much more neutral frame than you know if you talk about it from the countries who are receiving migration, it tends to be talking about *immigration* is the problem. 

Or migration is a much better frame. But none of that has to do with AI in education I don't think um. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: "Use AI as a teaching assistant." Yeah? 

ALEX HANNA: I will just I just want to note that organization's called Respond Crisis Translation and I'll drop the the article um about this uh reliance on machine translation in the in the chat and in the show notes. 

EMILY M. BENDER: Yeah. And Arcane Sciences is saying, "It's amazing what powers get ascribed to...a synthetic text generator."  

Yep. Um all right Haley was there anything else in particular in this article that you wanted to pull  up? Um. 

HALEY LEPP: Um I think again that just 'democratized' and 'for the common good' is something we see a lot in AI for education. 'This AI is for social good,' you know education as seen as something that could be sort of like monolithically a good thing. Um and so anything to be done um to engage in you know the education--the field of education which isn't really a well-defined field already, um is is seen as something that that is good, um and and so I think that you know Alex mentioned earlier this responsible AI movement, also the social--AI for social good movements, um sort of ill-defined um but there's a lot of sort of good intentions, I think, techno-solutionism-style good intentions um that you know that are worth you know not discounting. 

People people are trying to to do good, um the good is just maybe sometimes misinformed. 

EMILY M. BENDER: Yeah and I and I think that there's a in this space it feels like there's a lot of the good getting sort of pointed in a certain direction by the hype and by FOMO um. And you know just one more piece of hype from this article. The heading: "The future." And then, "Innovation works in mysterious ways, as we are barely witnessing the first moments of AI's Cambrian explosion." So this is like a real whopper of a biological biological metaphor that that AI is somehow evolving into all these different things um like in the you know what we see in the fossil record and the Cambrian Cambrian explosion.  

Um and it's like I really wish that educators, like journalists like others, could sort of take a step back and really like hold strong and say you know, 'We have expertise here and it's up to us, those with expertise in education, to evaluate if the system is actually going to be good for our students.' Rather than saying, 'oh no it's moving so fast we've got to keep up.' 

HALEY LEPP: Yeah I think that that's that sort of deficit framing you see frequently from the big tech companies of the public, the teachers, whoever needs to--or at a deficit need to learn about this won't ever be able to, there's going to be a perpetual lag, and so they're gonna have to rely on the technological experts.  

Um and I think that that's that's part of the sort of messaging around all this. 

EMILY M. BENDER: Yeah and I'm once again back to Timnit's um Timnit Gebru's hierarchies of knowledge. So the the knowledge of how to be an effective educator is extremely an important--extremely important and valuable as a skill.  

Like I mean all the stuff about how important education is, that should tell us how important the sort of knowledge of how to be an effective educator is. And yet that knowledge is consistently  devalued compared to technological expertise, um probably not least because it's feminized.

HALEY LEPP: Yup. That's the gender role again yeah.

EMILY M. BENDER: Yup. Okay. From the World Bank to California, here is an article from California Educator, which is a publication of the California Teachers Association.  

Um from June of this year, June 20th, 2023. The title is, "The Future of Education?" Question mark. Um interesting like I guess that becomes a question because of the question mark, it's  just a noun phrase otherwise. Subhead, "ChatGPT in the classroom." Um and okay we have a lot of educators quoted here and it's all distressing. Um-- 

HALEY LEPP: Worth noting, for those who aren't from California, the California Teachers Association is the one of the largest teachers unions in the state of California.  

EMILY M. BENDER: Yeah. So this is coming from some authority. 

ALEX HANNA: Yeah, right. 

EMILY M. BENDER: Um and from a union, who we expect to be attuned to labor issues. Um. 

ALEX HANNA: Right. 

EMILY M. BENDER: That's that's a little bit of foreshadowing. Um so. "When some school districts across the state banned the use of ChatGPT shortly after its release in late 2022, there were widespread fears of the impact on education. Only eight months later, many educators are lauding its applications in the classroom and encouraging colleagues to accept and embrace generative artificial intelligence (AI) as a teaching and learning tool. ChatGPT, which stands for Chat Generative Pre-trained Transformer--" Again I always enjoy when we get its full name. I'm  being very polite there. 

ALEX HANNA: Full Christian name. 

EMILY M. BENDER: Yeah. "--is an artificial intelligence chat bot that is not only able to have a (semi) in parentheses human conversation with users, it can write and debug computer programs, compose music, write essays, answer test questions and translate and summarize text among other applications." Since when is answering test questions an application?  

Like why is that something we need? 

ALEX HANNA: Right I mean I guess for teachers if  they're using it to write a test key, which my gosh that's a terrible-- 

EMILY M. BENDER: Yeah.

Um, "Calling it the quote 'Industrial Revolution of Education,' instructional coach Brenda Richards says the technology will redefine how education looks and feels." Uh I'm gonna go a little bit further and then we'll get some reactions. "'There's no going back to a pre-ChatGPT time,' says Richards, a member of El Centro Secondary Teachers Association. 'I think we truly need to embrace, 'How are we going to prepare students for the future?''" 

So again we're at this like, it's inevitable we have to embrace it, um we have to live with it. "As a 31-year veteran educator, Richards supports fellow educators in her current role and is excited about generative AI's potential as a tool to help teachers. While she understands the valid concerns about ChatGPT being used by students to cut corners or cheat, Richards says many educators are using the emerging technology to redefine teacher productivity, utilizing generative AI to perform tasks that don't fill their cups, so there's more time for those that do. AI for self-care, she says." 

ALEX HANNA: Oh I when I read this I just oh, the biggest face palm wasn't enough for this, Emily. I just I mean I--the kind of idea that this is this magnificent time saving thing when at a time when teachers are on such a back foot, just everywhere in this country um, and school ratios are in--teacher to student ratios are just completely out of whack. Uh it's just it just really is it's it's it's it's it's flooring. AI for self-care is a thing I never--I can't unsee that now. 

HALEY LEPP: I just want to acknowledge some of the comments in the chat about an Industrial Revolution metaphor coming from a union.

ALEX HANNA: [ Laughter ] Yeah exactly. 

EMILY M. BENDER: Yeah yeah. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: And then also the valid concerns about chat--ChatGPT being used by students to cut corners or cheat. Again, those are the only concerns being raised here. 

ALEX HANNA: Right.  

EMILY M. BENDER: Um which just in the context of you know a union which might have concerns about you know AI being used to discipline labor for example or educators who might have concerns about it just being wrong, like you know. 

ALEX HANNA: Right. 

EMILY M. BENDER: Nope, we're just worried about students cheating apparently. Okay uh, "AI saves teachers' time." That's the heading. "So far Richards has enlisted the help of ChatGPT to write learning targets in student-friendly language, create assessment questions, provide  examples and non-examples of concepts, and other applications that allow educators to focus on the relational aspects of teaching. She's eager to see educators build professional learning groups and networks about generative AI to share, inform and support each other in learning how to use this technology to support students." And it's like you know what, maybe we don't actually have to spend time on this. Right. Maybe we don't have to be reactive to what the tech companies are pushing, um and we can just focus on education? 

HALEY LEPP: I love how--I love the following sentence. "Some of our best learning occurs when we teach each other, too." Which sort of implies that good learning happens with you know humans working together and and collaborating, um which is I think a funny sort of like-- 

ALEX HANNA: Yeah. 

HALEY LEPP: --the example of what they're talking about.  

ALEX HANNA: Teaching each other about about generative AI though, I mean that's the important that's an important use of time. 

EMILY M. BENDER: And then oh this next one. "San Jose High School government teacher  Jason Chang is also focused on ChatGPT's use in professional development and teacher  retention. He says the chatbot is a powerful tool for getting ideas about topics or lessons--"  

Oy. "--and the potential applications that can help educators with tasks and responsibilities could mean the difference between a happy and burned out new teacher." So the idea is that this is just too hard and certainly you know the the teaching profession is one where we see high turnover and high burnout. Like that again once again you have identified a real problem, but why is this AI a solution, rather than like providing more support? Right, instead you can you can reduce people's workload by letting them or having them use ChatGPT to generate lesson plans, because that'll make their lives easier enough and then they won't get burnt out. 

I kinda doubt that that's the source of burnout for teachers.  

ALEX HANNA: Yeah. It's really phenomenal to see that I mean I--phenomenal in a negative register in the thought that that's going to be I mean or has been, because I'm actually kind of curious if CTA or any teachers union of that in that factor has kind of demanded kind of increased access to certain kinds of kind of technology as a means of facilitating their work, rather than what it tends to be, which is more teacher support, higher pay, higher retention. I mean if it's kind of basics of of things, like computers in the classroom, uh printers, things of getting instructional material in the hands of students, yeah. But as if some major--as if this is going to be such a major saving device that demands need to be made on that, I mean it's it's  um--it seems um just way too optimistic. 

HALEY LEPP: Time-saving seems to really come up a lot in the hype around AI for education, like specifically time saving for teachers. Um and I think it's interesting to kind of look at it as you say, take a step back on resource allocation and, okay, teachers may need their time saved. I don't know if that's true, I don't know if efficiency is necessarily sort of the the number one challenge facing people. But um this this idea of teacher burnout, of, this is something that you know has been going on for a long time and um you know innovators, big tech have this have this drive to we're going to go help out with with our tools and it's it's this sort of AI alignment style thinking of--we weren't there before, we have a lot of cash, we could be you know  improving teacher pay or helping you know helping provide more sort of alternative resources, but but our interest in the social good here specifically and something that will align with with profit. 

EMILY M. BENDER: Yeah absolutely so so we've got the setup where we're going to make things better for teachers by sending a whole bunch of money to the tech companies--  

ALEX HANNA: Yeah. 

EMILY M. BENDER: --rather than using that money to you know make sure that teachers aren't spending money out of pocket on classroom supplies for example. 

ALEX HANNA: Right. 

EMILY M. BENDER: All right I think I've got a couple more here before we get to Fresh AI Hell. So yeah-- 

ALEX HANNA: Yeah well yeah I wanna I wanna read this one. 

EMILY M. BENDER: Okay.  

ALEX HANNA: "Coronado High School AP Physics teacher Bill Lemai is embracing generative AI to make his job easier, even if he has to do it from home since the school district administrators banned its use on their network." Oh dear okay uh. 

"Lemai uses ChatGPT as a research assistant, asking the chatbot to find information, clone problems and identify primary sources." And we've already talked about identifying of primary sources, huge red flag. "ChatGPT also provides feedback on student assignments based on a rubric he provides the chat bot, and Lemai even asked it to use 'Elements of Style' to suggest improvements to his words. Uh parenthetical, ('ChatGPT is improving my writing'). "'It's a huge time saver for me,' says Lemai, a member of the Association of Coronado Teachers. 'I can't  believe this is a free tool.'" 

Oh man there's so much there's so much here. I mean first, it's not a free tool I mean insofar as OpenAI and other organizations are are collecting your data um and and monetizing it in many different ways. 

And the kinds of things of evaluation that it's doing, and it's it's and I love how they got a physics teacher here to kind of give the veneer of sort of scientificity, or or STEM-ness to say see! It can even be used in physics courses.  

EMILY M. BENDER: And 'Elements of Style' by the way is trash. So speaking as a linguist. 'Elements of Style' querying ChatGPT is just like trash upon trash. Just have to say. Okay I think I want to get this over to the Fresh AI Hell, um but just to say that this oh--this ends I'm not going to read the synthetic text but uh the last bit of this webpage says "ChatGPT: Your AI conversation partner." And then subhead, "This sidebar and headline were written entirely by ChatGPT." 

And it's like yet again we have a journalistic output, in this case it's the newsletter of the CTA, I guess um, beclowning itself by saying we're going to print synthetic text as if this were information. Like so so frustrating. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: All right, any last words on this before I take us--before I give Alex her prompt for the transition to Fresh AI Hell? Okay. 

ALEX HANNA: Let's take it in. 

EMILY M. BENDER: Alex you are a kindergarten teacher um teaching your class a new song about AI Hell. 

ALEX HANNA: Oh my gosh, all right. Um give me one second. All right all right class, are you ready? Let's do it.

[Singing] H-E-L-L-A-I-Hell, tell me what this is going to spell. Repeat after me, as you see we are going to go and-- [Stops singing]  

Uh that's all I got that's all I got, sorry sorry. I don't plan this in advance. 

EMILY M. BENDER: No, that's yeah for the audience, Alex never gets that in advance. I usually make it up while we're doing the sessions.  

Perfect and awesome okay Fresh AI Hell. We have 10 minutes and eight or so things here.  

Um so this is from August 18th, Jeff Doctor on Twitter um was quote tweeting the CBC News piece. "Microsoft has removed an article that advised tourists in the nation's capital to visit the quote 'beautiful Ottawa Food Bank' after facing ridicule about the company's reliance on artificial intelligence for news." Um so here's the article um and I oh sorry the CBC article about it. 

Um and I I'm not sure I can find the thing um let's see. "The Ottawa Food Bank entry earned the most mockery in technology publications and on social media. The article called the food bank one of Ottawa's 'beautiful attractions' before putting it third on the list."  

Um and then most of the entry simply describes what the food bank does, but it closes with a  bizarre recommendation. "Life is already difficult enough. Consider going in on an empty stomach."  

Like all right. 

ALEX HANNA: This is incredible. The link--I love this link, there's a link in the middle it says, "Opening of new Ottawa Food Bank location nothing to celebrate, CEO says." And I mean I mean I as someone who was a Torontonian for for two years uh I I do love the idea of dunking on Ottawa, uh but this is uh but also this is an incredible incredible uh error. 

EMILY M. BENDER: So so Jeff Doctor here um in his tweet says, "Hey let's make our entire workflows dependent on this tech. What could possibly go wrong?" And so apparently um this was a Microsoft site where they were using this synthetic text to generate news articles and they would generate things like, "Top 10 places to visit in Ottawa," and somehow this ends up in there. Um but then uh Jeff Doctor continues um with a thing about the Microsoft Copilot. So he's posting a link to a Microsoft blog. 

Uh quote, "Copilot gives you a first draft to edit and iterate on, saving hours in writing, sourcing, and editing time. Sometimes Copilot will be right, other times usefully wrong, but it will always put you further ahead."  

It's like-- 

ALEX HANNA: I love that I love that this is actual Microsoft copy. Like someone got paid real currency to write these words, and multiple people got paid to review them and copyedit it and yet this is the best thing that ends up on this Microsoft blog. 

Fantastic fantastic work, amazing use of money.  

EMILY M. BENDER: Lovely um. I mean we assume it was people who got paid currency and not just Copilot itself writing that, or being prompted to write that. Um okay uh Alex, you want to take this one? 

ALEX HANNA: Yeah so this is a LinkedIn post by Maribel Lopez, and um below there is I--is this an email? I think it's an email. Uh this is and and it says, "Hi Maribel, please take a look at the info below on--" And and this is underlined, "--DreamGF dot A I, a groundbreaking platform at the forefront of ethics and safety in the rapidly evolving space of AI companions and pornography." And this is a press press release um looking kind of thing, and the headline says, "DreamGF.ai Ensures Responsible use of AI girlfriends." And in italics, "Prioritizing Ethics and Safety in the New Space of AI Pornography." 

Oh dear, okay, so yeah this is trash, um obviously, and and again there's this um word 'responsible,' which just really means nothing at this point, um and just incredible and if you didn't see there was also a recent report about how um that I it was in the Upturn newsletter. Let me find it.  

But it was about how GenAI is--actually I think it's facial recognition tools are being used to  um dox uh um adult film stars, um by actually just finding their finding their visages um and and doxing them so again the kind of responsible use of AI in--or rather the use of AI and and porn and intimacy um meanwhile ignores the actual needs of actual sex workers.  

EMILY M. BENDER: Yeah and just like the thought that there is--I mean there is something is that um there is such a thing as ethical porn that's a real thing, right. Ethically produced pornography, real thing. I I don't think DreamGF.ai is it. 

ALEX HANNA: No no, certainly not. No.  

EMILY M. BENDER: Um oh we've got too many ads coming up here. 

HALEY LEPP: I think also the GF, I mean again this indication back to gender and and the automation of of certain feminized roles keeps popping up.

EMILY M. BENDER: Yeah. All right so um I think people hopefully know about the story, about how San Francisco is being used as the testing ground for various um driverless vehicles. Um headline from September 1st, "Person died after Cruise cars blocked ambulance, SFFD says." So story here is that um there were emergency response vehicles coming to help out a person um who was severely injured, um so, "On August 14th two stalled Cruise vehicles delayed an ambulance from leaving the scene of a crash in which a driver had hit a pedestrian with their car. Um the pedestrian later died of their injuries, which first responders linked to the delay in getting them to the hospital."  

Um so this is yeah the reporting is fine the underlying facts are terrible, and you know I would like to live in a world where we don't have to put up with our cities being used as the testing  grounds for this kind of unproven technology. Like if you want automated vehicles, put them on tracks, like I think that's probably the way to go. 

ALEX HANNA: You mean you mean you mean you mean trains? 

EMILY M. BENDER: Yeah. Public transit.  

ALEX HANNA: What a concept. Yeah I mean this is also after the California Public Utilities Commission approved robo-taxis for use in San Francisco for public use. And so this is already going to do the thing where it's going to one uh make human labor in this space even more contingent, and two disempower I mean probably less of the Uber and Lyft drivers but more of licensed taxis, um which has already been gutted by Uber and Lyft in San Francisco. 

EMILY M. BENDER: I'm switching the screen just because there was too many ads on that one, but I want to give a shout out to the Tech Won't Save Us podcast. Um their episode from two weeks ago, episode 184, "How Tech Wields Its Power in San Francisco," actually has a lot of really interesting background on the story. 

Um not necessarily on the particular um casualty there but sort of like how it came to be that San Francisco uh you know against the will of the people and the electeds in San Francisco. Like this was a decision in Sacramento, the state capitol.  

Okay this next one's a bit more fun, although sad. 

ALEX HANNA: Yeah it is--it is good and there is a new word we learned. 

EMILY M. BENDER: Yes. 

ALEX HANNA: This one reads, "Withdrawn AI-written pre-print on millipedes resurfaces, causing alarm."  

Um, "A preprint about millipedes--" And this is um this is from the RetractionWatch blog. "A preprint about millipedes that was written using OpenAI's chatbot ChatGPT is back online after being withdrawn for including made up references, Retraction Watch has learned. The paper, fake references and all, is under review by a journal specializing in tropical insects." And I want to get to the um the beautiful word here, uh okay. "The mirror-dipal--" Oh God. "The myriad-apological-podological? The myriad-podilogical--" 

EMILY M. BENDER: There's one more one more D that we want that isn't there. "The myriapodological community," I think. 

ALEX HANNA: Yes, "'The myriapodological--" fantastic word, "uh community is alert and closely following this matter,' says Carlos Martinez, a centipede taxonomist and research associate at the Senckenberg Society for Nature Research in Frankfurt Germany. 'I have also taken a personal interest in stopping the spurious manuscript from polluting the scientific record.'"  

EMILY M. BENDER: Yeah. 

ALEX HANNA: Yeah, some more more made up stuff happening on the--about your favorite bugs.

EMILY M. BENDER: All right. We are going to just quickly go through this for headlines. "How to tell if your AI is conscious." Which newspaper of record might have that headline?  

Why, the New York Times of course. It's your one-stop shop for terrible AI coverage. Um, "In the new report, scientists offer a list of measurable qualities that might indicate the presence of some presence in a machine." And I'm pretty sure it's an arXiv pre-print that they are pointing to here. 

Um. 

ALEX HANNA: Oh it is. 

EMILY M. BENDER: Yeah. So that one we might need to do as a main course sometime, but pretty awful.  

Um in response to that, "Today's AI is alchemy not science -- what that means and why that matters." Um in uh VentureBeat, so we'll post a link to that in the show notes.  

Um the European Commission tweets-- 

ALEX HANNA: And this yeah this has been kind of a a theme and so this is a tweet from the European Commission in which they say, "Mitigating the risk of extinction from AI should be a global priority and Europe should lead the way, building a new global framework--AI framework built on three pillars: guardrails, governance, and guiding innovation." So this is really disturbing that they're treating this especially in light of um the the--excuse me--in the final stages of the AI Act. And we're also seeing this kind of closure happening or rather capture happening in the UK apart from the EU. I don't know if you saw this news but uh Timnit posted it in our in our um in our chat, but it's basically about um how the British government has gotten rid of their independent AI advisors and has basically onboarded a bunch of um AI safety people, people who are really focused on existential risk.  

Um so really alarming stuff happening across the pond. 

EMILY M. BENDER: Absolutely and here's The Guardian.  

Um okay different kind of alarming across the pond. "Court of appeal judge praises quote  'jolly loose--useful' ChatGPT after asking it for legal summary." Don't don't do this.  

Don't do this. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: And we already had a whole episode on um with uh Kendra Albert on  how not to use so-called AI in legal contexts. That brings us to the end of AI Hell um. 

ALEX HANNA: We made it! 

EMILY M. BENDER: We made it. Um Haley any final thoughts for us um as we as we go into the school year, um how to think about what educators do, if we are educators, if we you know are learning from educators, if we have kids learning from educators, and so on? 

HALEY LEPP: I would just trust your gut. I think that there's there's a lot of promotion that's happening and so if something doesn't sit right or or seems concerning, you know don't don't ignore that your gut's telling you something, if if you're hearing something else from leadership or big tech or whoever. 

EMILY M. BENDER: Those are those are wise words, thank you so much for joining us today and for sharing your insights about AI and education. Um I feel more enlightened I feel like this has been a good educational moment, I hope it's been so for our audience too. 

ALEX HANNA: Yeah this has been great, thank you so much Haley. Haley Lepp is a PhD student at Stanford's University--Stanford University's Graduate School of Education looking at the rise of language technologies and their use in educational settings. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor and thanks as always to the Distributed AI Research Institute. If you like this show you can support us by rating and reviewing us on Apple Podcasts and Spotify and by donating to DAIR at DAIR-institute.org. That's d-a-i-r hyphen institute.org. 

EMILY M. BENDER: Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream--and we love the comments in the stream. That's twitch.tv slash DAIR underscore institute. Again that's D-A-I-R underscore institute. 

I'm Emily M. Bender. 

ALEX HANNA: And I'm Alex Hanna, stay out of [singing] A-B-C-D A-I Hell, y'all. 
