Mystery AI Hype Theater 3000
Episode 26: Universities Anxiously Buy in to the Hype (feat. Chris Gilliard), February 5, 2024
Just Tech Fellow Dr. Chris Gilliard aka "Hypervisible" joins Emily and Alex to talk about the wave of universities adopting AI-driven educational technologies, and the lack of protections they offer students in terms of data privacy or even emotional safety.
References:
Inside Higher Ed: Arizona State Joins ChatGPT in First Higher Ed Partnership
ASU press release version: New Collaboration with OpenAI Charts the Future of AI in Higher Education
MLive: Your Classmate Could Be an AI Student at this Michigan University
Chris Gilliard: How Ed Tech Is Exploiting Students
Fresh AI Hell:
Various: “AI learns just like a kid”
Infants' gaze teaches AI the nuances of language acquisition
Similar from NeuroscienceNews
Politico: Psychologist apparently happy with fake version of himself
WSJ: Employers Are Offering a New Worker Benefit: Wellness Chatbots
NPR: Artificial intelligence can find your location in photos, worrying privacy expert
Palate cleanser: Goodbye to NYC's useless robocop.
You can check out future livestreams at https://twitch.tv/DAIR_Institute.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Twitter: https://twitter.com/EmilyMBender
- Mastodon: https://dair-community.social/@EmilyMBender
- Bluesky: https://bsky.app/profile/emilymbender.bsky.social
Alex
- Twitter: https://twitter.com/alexhanna
- Mastodon: https://dair-community.social/@alex
- Bluesky: https://bsky.app/profile/alexhanna.bsky.social
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.
Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come.
I'm Emily M. Bender, professor of linguistics at the University of Washington.
Alex Hanna: And I'm Alex Hanna, director of research at the Distributed AI Research Institute. This is episode 26, which we're recording on February 3rd of 2024. And as the second half of the school year unfolds, we're coming back to talk about the ways LLMs and other AI are being deployed in education.
Emily M. Bender: The road to AI Hell this year has been almost literally paved with headlines about universities adopting AI tools in the name of so-called better learning. But we need to really talk about how-- not just why ChatGPT is inadequate to the task of helping students learn, but also about the harms of built-in biases, the potential for AI to act as surveillance, and the other major pitfalls of jumping on the so-called 'AI' bandwagon just because OpenAI and Google and others are pushing these products.
Alex Hanna: With us today is Dr. Chris Gilliard. You might know him best as 'Hypervisible' from various corners of social media, where he has tirelessly critiqued overhyped and dangerous uses of AI technology. He's also a Just Tech Fellow at the Social Science Research Council.
Emily M. Bender: His scholarship focuses on privacy, institutional tech policy, digital redlining, and the reinventions of discriminatory practices through data mining and algorithmic decision making, especially as these apply to college students.
Welcome Chris. We are so excited to have you on.
Chris Gilliard: Oh, thank you so much for having me. I'm a, I'm a big fan of both of you as, as well as the podcast.
Emily M. Bender: We're a big fan of yours and, and our listeners should know that, um, so many of the things that end up on the show are there because you've surfaced them on social media and that's just really, really wonderful and I appreciate your commentary.
So now we get to hear some of it, um, live and then record it on the pod. So let's get started. I'm going to share our artifact here, um, first main course artifact is a publication from Inside Higher Ed. Author is Lauren Coffey and the title is Arizona, "Arizona State joins ChatGPT in first higher ed partnership."
Wait a minute. I think we have to talk about the headline a bit. ChatGPT--
Alex Hanna: Yeah.
Emily M. Bender: --can't join anything in a partnership. They mean OpenAI, don't they?
Alex Hanna: I believe they mean OpenAI. And this is an artifact that looks straight like marketing copy. Inside Higher Ed is usually okay when it comes to doing some reporting, but this is more or less taken from a press release, um, from Arizona State.
Uh, they say in the first paragraph, "Arizona State is slated to become the first higher educational institute to partner with the artificial intelligence company OpenAI, which will give ASU students and faculty access to its most advanced iteration of ChatGPT. OpenAI announced the partnership Thursday. The deal for an undisclosed sum aims to bolster research and coursework at ASU through the course of ChatGPT Enterprise, which focuses on larger entities instead of individual use."
Oof. Okay. There's a bit in there already. Thoughts on that, Chris?
Chris Gilliard: Yeah, I, I mean, oh gosh, where to begin? I mean, I, I think, uh, what stood out to me is, uh, where they talk about the, uh, um, a, a potential initiative is AI avatars as study buddies and personalized AI STEM tutors, right? I think, um, you know, I mean, to me, this, what really resonates is this idea that's been, um, perpetuated by a lot of AI boosters that, um, these technologies are going to be tutors or they're going to be therapists or they're going to be diagnosticians.
Um, and I, I think um, far from kind of leveling the playing field as, as they call it, um, what, what really is at work is kind of an idea of what, what's good enough for certain students or, or certain folks and, and what, you know, I don't think Sam Altman is going to have a, an AI diagnostician or AI, uh, you know, therapist.
I mean, I could be wrong, but I somehow doubt it.
Emily M. Bender: Yeah. Yeah, so there's, there's like all this overselling, um, and that's kind of just the beginning of it, right? And the people who are doing the overselling aren't, aren't gonna, you know, use these products themselves. Um, I have to say from the, the chat, we have Medusa Skirt saying, "[joking] Arizona State joins Microsoft Word's Clippy in first higher ed partnership."
Alex Hanna: (laughter) I would have loved-- (crosstalk) I would have loved to have seen a formal partnership with Clippy. I mean, could you have imagined when, when Clippy was debuted? Incredible.
Emily M. Bender: But it's, it's a great strategy, right? Anytime someone is saying, 'doing this with ChatGPT,' like, just substitute Clippy, because it makes as much sense, right?
Um, and because Clippy didn't have this like veneer of apparent plausibility, nobody fell for it. Um, but it's, it's the same, you know, and, um, also just have to say, uh, CogDogBlog in the chat: "Long time podcast listener. First time Twitcher. Hello, Chris. I miss ya."
Um, so you've got, you've got a buddy in the chat or a fan.
Chris Gilliard: Yeah. Yeah. I know him well. I mean, the other thing, I mean, it mentions here, someone's getting paid an undisclosed sum. Um, you know, I mean, I think I know who, but, you know, it could just as easily be that, uh, that OpenAI is paying ASU for this, uh, for this promotional, you know, um, boon.
Emily M. Bender: Yeah. I mean, if I were a student at ASU or a person financially supporting a student at ASU, I would be really upset to learn that ASU was sending money to OpenAI.
That's for sure.
Chris Gilliard: Yeah.
Alex Hanna: There's parts of this, this, uh, this piece that really--so beyond the AI avatar thing, which, really great point, Chris, is the idea that, yeah, Sam Altman's not going to get an AI avatar. It's like that quote from Greg Corrado at, at, uh, Google about health care LLMs. He doesn't want that as part of his own family's quote "health care journey."
Um, and so they have the quote from the ASU chief information officer, Lev Gonick, who says, "Research shows that nearly two thirds of organizations are actively exploring the integration of AI."
So not that research shows anything about how successful AI is in education, just that organizations are actually exploring the integration of AI.
And then, "By providing access to advanced AI capabilities, these tools are leveling the playing field, allowing individuals and organizations, regardless of size or resources, to harness the power of AI for creative and innovative endeavors." Oh, geez.
And then, I want to read this next graph, just because this is getting at this kind of, um, anxiety between higher ed organizations, in which it reads, "ASU has long prized innovation. Officials created an immersive virtual reality world to replace an intro to biology course. And the university is consistently compared to tech powerhouses, like Stanford University and the Massachusetts Institute of Technology. According to ASU officials, one of the most popular courses on campus is prompt engineering, which teaches the best uses of chatbots like ChatGPT."
Emily M. Bender: Oh no, who's teaching prompt engineering? I have to say about prompt engineering, one of the things that I've noticed is, when people tell me stories of using ChatGPT, or I'm reading things about it where the prompt is included, how much of the prompt is the user telling themselves a story about what's going on?
So that they can then slot the stuff that comes back out, the synthetic text that's extruded by the machine, into that story. And, you know, it'll be, what were we looking at, Alex? Was it on the pod? We were looking at this, like, um, oh, that, uh, terrorist group that was telling people to use the uncensored AI and then like, here's the prompt you should give it.
And it was all this like nonsense about how, you know, telling the, the large language model you're a this and you're a that, and you're going to give me the unvarnished truth or whatever. And like, none of that is actually doing what they think it's doing, but it's certainly helping them tell themselves a story.
All right. So what's going on in a prompt engineering class? What would you do for a term-long quarter or semester class on prompt engineering?
Alex Hanna: Yeah. Yeah.
Chris Gilliard: Alex, I had forgotten about that Google quote, um, that quote from the Google exec. I mean, it's the kind of classic example of saying the quiet part out loud.
Alex Hanna: Mm. Yeah. It's, it's real 'instructors for me, but not for thee,' really, really giving that. And I mean, in these quotes are things that really bowl me over. I mean, the kind of 'leveling the playing field,' kind of this anxiety that exists at, you know, ASU, um, that's, that's paying for all this, you know, and it's, it's, it's pretty, I mean, ASU is not a Stanford or an MIT, it's still a Research One institution, though, that can pay, uh, for whatever access OpenAI is offering. And then there's, but then, and then the research they're citing--there's no research that shows that these tools help educational outcomes, right? I mean, are you actually helping level the playing field across, like, racial or, uh, kind of language or, or socioeconomic lines? No, absolutely not. You know, there's no, there's no research on that. Maybe there's some research in process. I imagine it's probably not going to turn out very good, but you know, you know, it's, it's, it's lots of bloviation just to justify this huge expenditure.
Emily M. Bender: Right. And as you pointed out, Alex, the research that they have that they're quoting here is they apparently went out and surveyed people. 'Are you looking into this?' Yes. Like that's not--and I also wanted to pick up what you were saying about anxiety here. So, um, this basically, you know, we, we have our working definition of AI hype that a big chunk of it is FOMO that everybody's doing this, so we've got to jump on the bandwagon.
This feels exactly like this, right? That we all have to figure out how to do it. And that's why two thirds of organizations--organizations, organizations like of what? Within ASU? Like, this is weird. Um, like, everybody's trying this, so we have to try it too, so we don't get left behind.
We better offer that course on prompt engineering so our students have this new job skill. And it's like, take a step back and have some pride. You know? That what you can, what you're doing can be done better without this nonsense that's being provided by the tech companies.
Chris Gilliard: I also think it's interesting who we hear from in this article. Um, you know, uh, we hear from the CIO at ASU, and I think later we hear from the president, but not so much from individual instructors, you know, um, and I think that that, you know, part of another thing they said is that you know, as part of the initiative, faculty are going to sort of, um, brainstorm different uses for these technologies.
But, uh, I mean, I think ultimately, a lot of them are going to brainstorm their way into their own firing. Um, but we don't. So I think that's part of why we don't hear from them in this.
Alex Hanna: Well, Chris, also, we don't hear from, we don't hear from students, right? And I mean, something I know you've written a lot about is just about student consent and student, you know, student rights, right, when having to deal with all this ed tech and how it fuels so much, you know, surveillance, you know, the student data being used to basically tell on themselves.
Chris Gilliard: Right. Absolutely. Like, I think, um, you know, as Emily said, I mean, um, one, if you were, it's, it's weird to think of your, your tuition kind of participating in this.
Um, but I, I don't think, you know, and I don't think they've gotten this far in terms of integrating these solutions. But let's say I'm, I'm very suspect that students will be kind of fully informed, um, fully consenting participants in these experiments. You know, I think, back to some earlier, um, examples where, uh, oh, oh gosh, I can't remember the, the university, but there was a, a computer science professor, who um, made a chatbot to respond to student questions throughout the class, but he didn't tell them, it was based on Watson, I think, he didn't tell the students that it was a, that it was a chatbot, or that it was a, you know, um--
I don't, I don't know if you all recall that example.
Emily M. Bender: No, it's a great one. I'll have to look into it.
Chris Gilliard: But, yeah, I actually wrote a piece, uh, that appeared in the Chronicle, um, where I, um, suggested that this, uh, professor was exploiting his students, um, let's say he did not respond well to that suggestion.
Emily M. Bender: You know, there's this thing where computer scientists think that the IRB doesn't apply to them.
Um, you know, because they're just playing around with the computers. So where's the human subjects? Um, or they're like not even like not even aware of its existence, which is a real failure of like the bureaucratic structures of the university that this stuff can go through.
Alex Hanna: Yeah, no. And Nate, Nate, uh, Nate Angell in chat says, "Let's see the IRB on this experiment."
Yeah, no. I mean, the IRB is pretty much absent. I think I was, I was, I was remarking on this on, on Twitter where I was looking through old ImageNet, um, images, you know, ImageNet being this, um, you know, this huge data set that, you know, has preceded, uh, you know, LAION, and the, yeah, huge bunch of crap that's in, in that data set, including, you know, CSAM.
Um, and, um, I was just casually browsing it and I, you know, was seeing actually really awful images, and I remarked, "Ooh, I am taking psychological damage from looking at ImageNet." And someone was like, oh, you know, I bet their IRB, you know, didn't, didn't, uh, account for that. I'm like, you're making the mistake of thinking ImageNet went through IRB to begin with, and I know for a fact it didn't.
Um, so anyways, but getting a little off course there. Right.
Emily M. Bender: But I want to, I want to pop up a bit and talk about the intersection of consent and AI hype. So let's say even if you have the students actually doing some sort of opt in procedure, which as you say, Chris, is wildly optimistic in this case, um, what are they being told that they are opting into, and to what extent is that really going to be informed?
So, so best case, they're being asked to opt in, they're given materials that are short enough that they can read them, and they're actually expected to read them, but what do you think is going to be in there, and does that really count as consent?
Chris Gilliard: Yeah, I, I think, I mean, as we've seen with a lot of these technologies, often the people who are, are, are kind of deploying them on students, often they don't know, you know, the full extent of how these things were going to be used.
And so there's no way to fully, um you know, there's no way to fully inform, you know, or get kind of full consent from, from students because the people using them don't even know what, uh, you know, sort of ramifications are going to come down the line. Um, I think that's really important, but I, I think that is sort of a hallmark of, of a lot of educational technologies um, that there's just this mass, uh, you know, consumption of data that is going to, or may be used in some way later on. Um, and students are just kind of left holding the bag. Um, I mean, I think of a lot of the, uh, um, so called plagiarism detectors. I think of, yeah, a lot of systems like that.
Emily M. Bender: Apropos of that, there's a, there's a comment in the chat right now, which is spot on from, uh, Nate Angle, Angell?
"Given all the concern about detecting when students use AI, where's the concern about detecting when educational institutions are using AI?"
Alex Hanna: Ooh, yeah. I love that as a bit of a resistance tactic, right? I mean, it's watching the watchers in this way. How can we track that? How can we see and follow these contracts?
If you have a dissertation topic that you're looking for--I'm talking to people in Ruth Starkman's class right now--maybe tracking down who has contracts with what AI companies and whatnot. There's another thing that I also want to mention, and this is not really necessarily about higher ed, but it's about, um, some of the uses of data technologies in K through 12 education. And one thing that one of the fellows at DAIR, also a Just Tech Fellow, uh, Adrienne Williams, talks about a lot is, when she was a charter school teacher, um, at some charter schools, they, uh, you know, the students are asked, or rather not asked, but forced to put a lot of things into different things like Google Classroom or different specialized tools.
And then that data goes off somewhere, right? And she's remarked that, you know, students are really being asked to put stuff in the chat, like, um, kind of mental health crises they're having, sort of stuff around family conditions, really kind of personal stuff.
And, you know, they don't really have any kind of way of opting out or, you know, or, or having that or--and the teachers are also forced to do that. So, you know, this stuff is being turned around and god knows what is being done with it. Is it training new AI systems? Is it being mined? Uh, are people using it for kind of criminal justice entrapment and surveillance, you know, just some awful stuff.
Chris Gilliard: Right. I mean, we've seen, you know--I mean, the Center for Democracy and Technology has done great work on this, as have some other organizations--how these surveillance systems are used, um, kind of under the guise of helping students who, um, maybe are in crisis in some form or another. But the, the, the practical way this plays out sometimes is that, you know, it punishes or, you know, um, disproportionately harms trans students, um, um, marginalized students, uh, you know, um, students who are seeking information about sexual and reproductive health.
And so, uh, I think that these kinds of AI initiatives have a chance to supercharge some of these things, um, which again is not something that's typically mentioned in the hype.
Emily M. Bender: Right. And it's, and it's all sort of math washed too, right? Like, oh, this is the AI you get to play with. We wouldn't want you to miss out on this.
And then, you know, let's, let's look over here at this shiny thing that everyone else is using and we got to use it too. And, and never mind that this is massive surveillance, et cetera. Right?
Chris Gilliard: Yeah. And I wonder too, I mean, there've been, um, some, there's been some pushback, uh, of students, or, or actually in K through 12, there's been some pushback about the surveillance, um, most notably with kind of like high-powered or wealthier parents, and I wonder if we'll see a sort of similar response from, uh, from people, as, as you mentioned, Alex, from parents who want some kind of guarantee or mandate that their, uh, their students are only interacting with humans and not, and not AI.
Alex Hanna: Right. And I mean, it's, it's a, it's a real telltale sign when, at least at the K through 12 level, you know, these things are being rolled out in charter schools, you know, and especially charter schools targeting, in, in Adrienne's case, a mostly Black and brown school, you know, being used to supplant, uh, uh, the existing school district and the public school. But now we're kind of seeing this in, you know, K through 12, or rather higher ed, you know, and we're having this increasing rate, um, of, uh, you know, adopting AI technology.
And I mean, ASU is, is curious. I mean, they have all this copy around it and, and someone in the chat had mentioned, I think it's, um, CogDogBlog talking about this, um, conference, the ASU, um, Global Silicon Valley Summit, uh, tech bro venture and yeah, and you know, the kind of copy on it is--it's really absurd. It's, you know, about ASU consistently being ranked as the quote, "most innovative university," um, as being a prototype for the quote, "new American university."
Um, and so we're just seeing this way in which this university is, is tending more and more towards a sort of technological casualization, right? The removal of, uh, faculty who could teach this in person, um, and really hands-on, to, you know, 'let's just toss a bunch of tech at the wall and see what sticks.'
And, oh, by the way, we're going to funnel a bunch of money to, you know, all these tech people.
Emily M. Bender: Yeah. And here they're either sending money to OpenAI or they're providing advertising to OpenAI. So they're providing OpenAI value one way or another. Right?
Alex Hanna: Yeah.
Chris Gilliard: Yeah. And I posted this, uh, on Bluesky and, and several, um, people pointed out that, um, one of the probably immediate effects of this is going to be to, uh, um, get rid of, uh, of, um, of non-tenured faculty.
Alex Hanna: Yeah.
Chris Gilliard: That, um, the, the, the short version is like that a lot of, um, so you talked about the casualization Alex, but yeah, I think the short term is that a lot--so for instance, like, um, in composition, right, which is a class often taught by, um, adjuncts or precarious workers, right? I think that one of the real, um, possible, uh, options is that this is gonna, it's gonna be used to eliminate those jobs.
Emily M. Bender: Yeah. And I mean, those jobs need to be made more secure and less precarious, not eliminated. Right?
Chris Gilliard: Exactly.
Emily M. Bender: Yeah. Totally. All right. So should we, should we bump over to this? This was Inside Higher Ed, basically, uh, making it look like this wasn't just a press release. Should we go look at the press release itself?
Alex Hanna: (laughter) Yeah, let's go look at the press release, which is nearly identical.
Emily M. Bender: Yeah. Okay. Okay. So Arizona State University, ASU News, um, tag is University News and the title is, "A new collaboration with OpenAI charts the future of AI in higher education." Um, so just, you know, headline to headline, um, at least they've got the collaboration with a corporation and not with a product.
So score one there, I suppose. Um, but also, you know, this is, this is so much like, they are, they have been subjected to the hype, they've bought the hype, and then they have to put more hype out into the world, because the whole purpose of a university newsroom is to say how great the university is. So here in this case, the university is so great because they are charting the future of, you know, the mathy math in higher education, um, because they were the suckers who went in with OpenAI.
Alex Hanna: And so the, this for the folks who are listening on the pod, the text is nearly identical to the Inside Higher Ed, uh, copy. I think the only difference down here is, um, some of the emphasis on privacy.
So they say uh, here, "The platform prioritizes user privacy, employing enterprise grade security measures to safeguard user data. These measures are meticulously designed to protect against digital threats, providing a secure environment to utilize the platform's functionalities."
Emily M. Bender: You know what's a great way to provide security for student data? Don't send it to a company that's not on campus.
Alex Hanna: Good idea. Yeah. Pretty much. Incredible. Uh, yeah. And then there's some more, uh, advertising from the university by, um, down here. "The collaboration builds on ASU's commitment to exploring AI in all forms," uh, and links to kind of an AI top level, AI.ASU.edu. And then, um, down--
Emily M. Bender: I'm missing this. Okay. This one here. Okay. Yeah.
Alex Hanna: Yeah. Yeah. Yeah. Yeah. And then, "Last year, the university announced the launch of AI Acceleration, a new team of technologists dedicated to creating the next generation of AI tools. The collaboration with OpenAI will empower new solutions being developed as part of this team's efforts."
Yeah. And so, I mean, and then that has its own links to another press release, yada, yada, yada. Um, you know, and, and--
Emily M. Bender: So MLVX in the chat says, "Enterprise grade, like gets hacked every other week." (laughter)
Alex Hanna: Totally. Absolutely. That's, that's what it is, right? Uh, I mean, yeah, these words are, you know, I mean, it's marketing copy. Yes. But like, yeah, there's nothing behind these things. Meticulous--just some of it, "meticulously designed." Incredible. Uh, just, uh, I can't, I'm going to stop talking.
Chris Gilliard: Well, yeah. What, what struck me is how that's so short on details. Um, I mean, um, I mean, there's obviously already countless examples of people putting in private information, um, into chatbots that kind of wind up being reproduced somewhere else. Um, so it would, it would be helpful, I mean, it is a press release, but it would be helpful if there was like some nod to that, um, because I mean, you know, in, in classrooms, people are, you know, often kind of exploring some of the most kind of intimate, um, details of their lives, uh, or their beliefs, you know, or what have you. And so, yeah, uh, there needs to be a little bit more about what that, what privacy means in this respect.
Alex Hanna: Yeah, I would love this. I mean, as something that's a press release, and I know press releases are not aimed towards students. Um, uh, that's not the, that's not the audience. It tends to be donors. It tends to be, um, yeah, mostly donors, if we're going to be honest here. Um, but, if I was an ASU student, I'd be pretty concerned about, you know, what happens to my data here, um, what it's being used for, what this third party is planning to do with it, is it going to be used to train new ChatGPT versions, and do I have any kind of mode of opt-out on it, right? I mean, ostensibly ChatGPT through the OpenAI interface has some kind of opt-out mechanism, which, you know, who knows how true to form that actually is. But then with this Enterprise version, what are you getting? Are you going to be able to opt out? Are you going to have any kind of control over your data from the jump?
Emily M. Bender: Yeah, yeah. And, you know, thinking about this from the instructor perspective, there's probably a bunch of instructors at ASU who are being told, you have to incorporate this into your class and, you know, if I were in that position, I would, I would try to figure out ways to resist and like, you know, basically just A, not do it or B, at most craft an assignment that would help the students figure out that it's useless as a source of information.
Um, but I'm really concerned that not everyone is, you know, savvy enough in the face of all this AI hype to do that. And if you think about the kinds of quote unquote information that's going to come back, right, it's going to be incorrect, but not just randomly incorrect, because you're tapping into all the biases that are in the training data.
So what does this look like in, you know, an English comp class or something where the students are maybe not--I mean, English comp, you can do lots of cool critical thinking about lots of things, but, but maybe they're not specifically focused on the systems of oppression in the country, and writing on a topic that's relevant to that, and then out comes ChatGPT nonsense about, you know, that--reflecting racism, reflecting transphobia, reflecting ableism, and so on.
Um, and who's, like, who's looking out for that at ASU?
Alex Hanna: Yeah. Yeah.
Emily M. Bender: Just, yeah.
Chris Gilliard: Yeah. That's such an important point to me and, you know, I think, um, the, the other thing I think about is, like, the, the parallels between this and these, uh, partnerships in journalism, right. Um, what we see, um, you know, like the New York Times and, uh, I think, you know, a bunch of other publishers signing deals, which I think, you know, reminds me too of 'pivot to video,' right?
That the, the, um--it's a, it's a very short step to, um, to, uh, OpenAI kind of, um, bankrupting journalism, right? And I, I think, um, when universe--colleges and universities make these, um, partnerships, I feel like they're not looking kind of like two or three steps down the road, um, where the idea is likely going to be, um, that you don't need college because you've got ChatGPT.
(laughter)
Emily M. Bender: Yeah.
Chris Gilliard: I should say, like, I don't believe this right now.
Emily M. Bender: No, very clear, very clear. No, just a shudder for, like, what that means for access to education beyond the elites, right? Because we're always going to have college for the elites, because the function of college there is to meet people and form the networks to perpetuate the elite structure of society, right?
So that's not going away. But everybody else, if we don't end up down that path, are going to lose access to really critical spaces for learning how to think and become informed citizens.
Alex Hanna: Yeah. And this comment from, from Severisto, uh, says, even, I mean, "Even assuming this data does not get rolled into a general purpose model, they will first weasel it into training a model specifically for ASU, and then they get to monetize that as a university. And then probably at some point they will argue that, quote, you cannot reproduce individual data well enough, unquote, and then they'll roll it into the general data set."
So kind of, yeah, it's, I mean, like, I think that's kind of spot on. I mean, thinking about the partnerships, we've seen this partnership that OpenAI did with Axel Springer, um, and, and the kind of, you know, move there.
Um, and then, you know, we just saw journalism get decimated in the past, you know, past month. Um. You know, and I mean, this is part of a long term kind of decimation of journalism, kind of beginning with, you know, Google and Facebook and Craigslist kind of siphoning up advertising dollars, but now, you know, now we're rushing into even more ways that we see journalists, you know, newsrooms getting, you know, um, shut down and, you know, because right now you're just trying to make them money and they're owned by these hedge fund managers that, really don't care about journalism per se, but care about maximizing their, their return on investment.
And our producer uh, Christie, in our group chat, just, um, put the sobbing emoji, uh, as a, as a journalist herself.
Emily M. Bender: I want to put out, I want to put in a shout out to the most recent episode of Tech Won't Save Us, um, where Paris Marx is interviewing Victor Pickard. Um, and the title is "What's Really Killing the News Media?" And the argument there is that yes, the siphoning off of advertising dollars is a big problem, but actually the dependence of the news media on an advertising model is the predecessor problem. Um, and we need to be thinking about other models of funding journalism.
And I think other models of funding journalism starts with recognizing the value of journalism and not falling for the idea that, you know, synthetic text extruded from ChatGPT is a decent replacement.
Alex Hanna: Yeah. Yeah.
Emily M. Bender: All right. So should we leave Arizona and travel up to Michigan?
Alex Hanna: Let's get a bit cooler. Much closer to you, Chris.
Chris Gilliard: Yeah, yeah. I know this, uh, this college well.
Emily M. Bender: Yeah, so, um, uh, this is in a publication called MLive Michigan, um, location-- dateline Grand Rapids. And the headline is, "Your classmate could be an AI student at this Michigan University." This is published on January 8th, um, by Melissa Frick.
Um, so I'll read the first paragraph here, uh, "Big Rapids, Michigan -- A Michigan university is believed to be the first in the country to use artificial intelligence, AI, to create virtual students that will enroll in classes and participate in lessons and assignments." Uh. (laughter)
Alex Hanna: Just, just, I mean, just, just why, why, why, why are we doing this? What, what, what are we accomplishing here?
Emily M. Bender: Yeah, oh, um, so, "Ferris State University, which has one of just three undergraduate AI programs in the U.S., has developed two AI students who are enrolling at Ferris State as freshmen this semester and taking classes alongside human classmates." I mean, so should we talk about IRB and consent again here?
Alex Hanna: Right. Yeah, I mean, it's, it's, I mean, and the more of this is, I mean, okay, so, "The two students, who are named Ann and Fry--" First off, um, am I missing, like, a, a joke here? Why are they called Ann and Fry anyways? Okay. "--will participate in classes just like any other student would, listening to lectures, turning in assignments and participating in classroom discussions, said Ferris State associate professor Kasey Thompson, who is helping lead the AI experiment."
Oh my gosh. Yeah. I mean, so yeah, people who are unwittingly involved in having these kinds of agents in their class to do what? Okay.
Chris Gilliard: Yeah, if I, if I could have one wish about the reporting on these stories, it's that people wouldn't just be stenographers in terms of reproducing some of this language.
I mean, because like these technologies can do no such thing, right? Like they did not enroll in classes, right? They're not choosing their classes. Like it's not choosing its classes, you know, things like that, but it just reproduces this, uh, in a way that I think if you're not versed in these things, you actually would think, like, that it's an independent thing that actually sort of sat down and combed through the course catalog to decide, you know, what courses it wanted to take, um, which of course is not what happened.
Emily M. Bender: And then this next quote from Thompson, uh, oh, professor Kasey Thompson helping lead the experiment. So. "'Like any student, our hope is that they continue their educational experience all the way up, as far as they can go, through their PhD,' Thompson told MLive / The Grand Rapids Press. 'But we are literally learning as we go, and we're allowing the two AI students to pick the courses that they're going to take. We're in general courses at this point, but hopefully they will complete their undergraduate degree and even graduate degrees, and even further than that.'"
Like, what kind of a misconception of what education is, and what research is, you have to be holding on to, to, to say this stuff. Right? Like, it's, it's glaringly obvious with a PhD, right? A PhD is about becoming trained as a researcher and producing some original research on some topic. Large language models can't do that. But the, the falsehood is already there in, as you're saying, Chris, in even 'choosing' any classes or 'participating' in the classes, right?
Yeah, you could get a large language model to output some tokens that are on topic and seem coherent. Um, I have no idea who thinks that's going to improve the educational experience of the actual students in the class, but you could, you could get it to, like, produce the form of participating in a class, but that's still not what education is, right?
It's not the form of saying things in a class, no matter how many times we had to sit through classmates who, like, were just showing off for the teacher, right? The point of the class is the learning that you're doing there. Yeah?
Chris Gilliard: Yeah. I think it, yeah, these, these kinds of things really kind of, I mean, to my mind, misunderstand what college is supposed to be like, what classroom experience is supposed to be like, you know, there's the obvious consent issues.
Um, but you know, even further down, what they say is that the purpose of this is to glean insights about the college experience. You know, and it's like, I mean, I can think of other ways. I mean, there's some well developed, you know, disciplines where you can, um, you know, talk to people and, and get insights about what they're thinking and experiencing, you know, and those don't include, um, chatbots.
Emily M. Bender: That includes actually talking to actual students and like designing a survey with rigorous qualitative methodology.
Chris Gilliard: Yeah.
Alex Hanna: I'm really just puzzled about, I mean, the reporting on this, um, you know, so, gosh, the reporter says, "So what exactly is an AI student? AI technology uses computer systems to simulate human intelligence, including the ability to perform tasks, make decisions, recognize speech, and analyze images, according to Ferris State."
So back up here, that's not what human intelligence is. It's not just tasks, it's not just making decisions, recognizing speech or whatever, or even saying that it's recognizing speech. Um, and so I'm really, first off, puzzled what the hell this software even is. My guess is it's probably just a multimodal, you know, large language model or something of that nature.
And you know, they're saying that, you know, it's getting, it's, it's participating in a way that is taking in the data in, say, lectures or quote unquote 'reading,' or, I mean, it's, it's, it's just really, um, it's really disingenuous to even report this as kind of an agent that is, that is getting new data in the same way a human student would be, and, and yet this reporter just kind of takes it on, on faith from this person and from Ferris State.
And it's, uh, I'm just, I would, I would love, I would love some pushback here.
Emily M. Bender: And then of course, we're back to the surveillance, right? "Researchers will set up computer systems and microphones in Ann and Fry's classrooms so they can listen to the professor's lectures and any classroom discussions, Thompson said."
And it's like, this isn't about setting up a hybrid teaching environment so students who need to stay home for whatever reason can still access their class. No. It's about basically surveilling the whole class so that this computer science prof can do his experiment with it.
Chris Gilliard: Right. And I can't overstate how this is not safe for students, right?
I mean, again, later down in the article it talks about the partners of this initiative.
Alex Hanna: Yes.
Chris Gilliard: Who include the Department of Defense, NSA, and Homeland Security. Um.
Emily M. Bender: And Amazon Web Services too.
Alex Hanna: Yeah.
Chris Gilliard: Right. Um. Yeah.
Alex Hanna: Wow. And I, there's also this thing down here. I mean, first off, yeah incredible fresh hell. Also, "Ferris State hosted its first AI day, where high school students were able to participate in a series of interactive exhibits and workshops, including a deep fake lab and AI social engineering lab."
Like those are not, I'm sorry. I'm sorry. What in the frick? And I think, and I use that kind of funnily because the author is also named Frick, ha ha. Uh, so what-- (laughter) uh, but what, these are, these are considered bad things in the vernacular, right? Deep--so you're teaching, so again, you're teaching high school students how to make deep fakes? How to social engineer? Oof.
Emily M. Bender: Yeah, I mean, is this, is this, like, the most charitable reading, which I agree is probably not merited here, is, uh, here's what you should know about things that exist so that you can become a more informed consumer of information. I doubt it, right? That wouldn't be called AI Day if that's what they were doing.
Alex Hanna: Also, autonomous vehicle racing? Are we going to get LLMs Tokyo drifting? Is that what, is that Fast and Furious? Uh, oh, I'm trying to think of a Fast and Furious title.
Emily M. Bender: Also, if the vehicles are actually autonomous and the students are just observing, in what sense are they participating in this exhibit?
Alex Hanna: I think it's, is it like Pinewood Derby, like in Boy Scouts, but instead, uh, using a, I guess you could just I mean, we don't need AI for that.
Although I, I'm kind of interested in autonomous Tokyo drifting. I'm sort of convincing myself the more I say that.
Emily M. Bender: Oh man, this is, this is all such a mess. And, and it's so frustrating that the universities behind this are just like proud of it. And, and that the reporters didn't go find anybody. There's no critical voices in this at all.
Like.
Alex Hanna: Yeah. It's, it's, it's, it's, it's, it's a, it's a bad time y'all. It's not, it's not, it's really, it's really awful. Oh, there's more to this article down here about the AI day.
Yeah, I thought this, I thought this article was almost over.
Emily M. Bender: That's the problem with the ads, yeah.
Alex Hanna: Um, so they're talking--Thompson, who's the, um, professor says, "'We're hoping that these learnings will impact every aspect of the university, from admissions to registration to faculty and the way they deliver their curriculum, the way they deliver their lessons to students, and also impact the way that we're learning how students learn in 2024, which is very different post COVID,' she said."
Um, and I want to actually come back here to another point that Severisto said, um, which is really helpful here. "Thinking about large--about language models' tendency towards the statistical average, there's a pretty good, pretty great sabotage to good individual reasoning and idea generation. Most people get fed average crap that does not inspire novel thinking. And big corporations get to capture the supposed, quote, value of this. And then later they can downvalue you because you don't have enough, have, in parentheses, 'exploitable,' kind of, enough competence."
And it makes me think about the kind of insights this person is hoping to glean from kind of, I guess a student, sort of these agents' audit of classes.
Why are we, you know, first your, your, your, your, um, your administrators would be very mistaken to draw any real evidence from this experiment. It would be just a horror show, um, to do this. And you're not going to reveal anything novel. It's going to tend towards kind of an average or a, and a kind of thing. Why aren't you trying to actually use some human involved kind of evaluation methods for trying to see what's happening at the university?
Um, but we know why, because we're moving towards this casualized, um, kind of regurgitated hell, uh, in which certain things are valued and certain things aren't.
Emily M. Bender: Oh, I'm realizing that, speaking of moving towards hell, I need to come up with your improv prompt. And also Euler's joining me here. Um, I love, though, the, um, in the chat, Jeremy says, quote, "AI can impact education," followed by, in quotes, "asteroids can impact dinosaur evolution."
(laughter) Um, all right, so I think it is time to take us over to Fresh AI Hell. Um, and, uh, Alex, this time you are--all right--one of the few students in these classes who's really aware of what Ann and Fry are doing, and you are trying to subvert the experiment through your comments in class. Um, and if you want, it can be a music class. So you're singing.
Alex Hanna: Got it. Um, uh, I will say that I will go, Hey, Ann, Fry, check this out. And I will just show them a CAPTCHA. (laughter) Checkmate.
Emily M. Bender: Checkmate. Okay.
Alex Hanna: That's all I got. I wish that was more musical.
Emily M. Bender: I got to give you a musical style if I want musical, I think. Okay, I have shared with you--except that now I can't see it.
Hold on. I got to get to the right window for me. Um, okay. Welcome to Fresh AI Hell. Um, many of which were things that we found because Chris shared them. Um, so here we have reporting in something called, uh, InnovationOrigins.com, um, and the headline is, "Infants' gaze teaches AI the nuances of language acquisition."
Um, and then the, the lead up here or the subhead, it's long for a subhead, "Digital - The child's ability to learn language with limited data outshines current large language models, revealing a potential pathway to more efficient and human like AI learning." And then we have this horrendous image. Did you want to describe it, Chris?
Chris Gilliard: Oh gosh, yeah, it is a toddler on a wooden floor, um, with a, what looks like a, a look of shock, or perhaps dismay. And then the, the child, the toddler is wearing an oversized, what looks to be bicycle helmet, um, with a gigantic, um, cyclops like, uh, camera loaded onto the front.
Alex Hanna: So I went into chat, yeah, Ragin' Reptar says, "That baby looks like it's about to fly an X Wing."
(laughter)
Chris Gilliard: Oh my gosh, that's, that's true.
Alex Hanna: Not wrong.
Emily M. Bender: Yeah. Jeremy says "R2D2 cam." I have to say, um, that, uh, at least this publication owned the fact that it was an AI generated image. The caption is, "A toddler with a helmet camera, AI generated image." Um, better than putting one of these things up and not saying that it was a generated image, but like even better still would be not doing this.
Oh no, Severisto: "Use the farce, Luke."
Alex Hanna: Oh, geez.
Chris Gilliard: It's a very, it's a very haunting image. Uh, yeah.
Emily M. Bender: Yeah. Oh, okay. Um, so this is basically the study that came out, um, in Science today where they used 61 hours of a child's life. So someone has done some seriously invasive recording. Um. And, uh, this is, so in quotes, "An AI model trained on a mere 61 hours of a child's life has astonishingly grasped a fundamental language element, connecting words to their corresponding objects."
So, you know, I just want to point out that when they say "a potential pathway to more efficient and human like AI learning," it is true that a large language model that only has text has only one modality, and if you give a system both text and image, then it's got the possibility of mapping between them.
That is still not what infants are doing when they are learning language. What infants are doing when they're learning language is that they are doing joint attention with their caregivers and aware of another thinking being that they are communicating with and using every single cue they can to what that being might be trying to communicate to them so that they can bootstrap and work out what's going on with the language. And I am continually frustrated by the nonsense that comes up in the AI space, about 'just like babies.'
And this is like the next level of that. Um, okay. We gotta keep going because this is Fresh AI Hell. We could spend, I think, a whole episode on this one. Um.
Alex Hanna: Yeah.
Emily M. Bender: Yeah.
Alex Hanna: So this next one, Wall Street Journal. Yeah.
Emily M. Bender: Yeah. Go for it Alex.
Alex Hanna: Yeah. So Wall Street Journal, famously known for great AI opinions, says "Employers are offering a new worker benefit: wellness chatbots."
And then the, uh, subtitle, "The apps use artificial intelligence to hold therapist-like--" 'Therapist-like' is a word I didn't need to know. "--therapist-like conversations or make diagnoses." And by Stephanie Armour and Ryan Tracy. Uh, and so yeah, it's about--and so the first--I haven't read, I haven't reviewed this one.
So, um, really terrible. Um.
"More workers feeling anxious, stress or blue have a new place to go for mental health Uh, mental health help: a digital app. Chatbots that hold therapist-like conversations and wellness apps that deliver depression and other diagnoses or identify people at risk of self harm are snowballing across employers healthcare benefits."
Uh, and then the quote, "The demand for counselors is huge but the supply of mental health professionals is shrinking," said J. Marshall Dye, chief executive officer of PayrollPlans, a Dallas-based provider of benefits software used by small and medium sized businesses, which began providing access to a chatbot called Woebot--" Uh, W O E, so like your woes. "--in November. PayrollPlans expects about 9,400 employers will use Woebot in 2024."
Oh my gosh.
Yes so go ahead Emily.
Emily M. Bender: So first of all, you know, a chatbot is not a therapist and we already have a story of somebody dying by suicide after using a chatbot effectively as a therapist. Um, and we have the story of the, um, the NEDA hotline, um, being, uh, canned in favor of a chatbot that started dispensing dieting advice to people calling in about eating disorders.
So like, no, this is dangerous. And then on top of that, um, do you really want to be giving this data to a system that is probably not under HIPAA because they probably are not medical providers, um, but is, I don't, uh, what's the word I'm looking for, hired or provided by your employer? I don't think so.
Chris Gilliard: Yeah. And what, what stands out to me always is that there's a kernel of truth, right? That, uh, therapy is often expensive and hard to access. Like, this is true. Um, but then it takes that and says, well, and so here's a chatbot, right? And, and that leap, I think, I mean, we, it's the same thing--we saw a very similar thing in the discussions about ed tech, right?
That there are certain systems and technologies that, you know, they're trying to tell us are good enough for people who can't afford, uh, you know, uh, better or the real thing.
Emily M. Bender: Yeah. Just because you've identified a problem doesn't mean the chatbot's a solution for it.
Chris Gilliard: Yeah.
Emily M. Bender: That's a tap the sign kind of (crosstalk) (laughter)
Alex Hanna: And now that, well, hey, there, there, there's another merch idea. Just because you've identified a problem, doesn't need to put a, doesn't mean you need to put a chatbot on it.
Emily M. Bender: Yeah. Oh, good one. Good one. We're collecting merch ideas.
Um, okay, so here's an NPR piece, uh, tag is technology, headline is "Artificial intelligence can find your location in photos, worrying privacy experts." Um, heard on All Things Considered. And the, uh, journalist is Geoff Brumfiel, um, and basically as I understand it, what's going on here is that, um, image processing systems, um, have gotten good at a certain kind of pattern matching, which is matching images to the locations that they were taken in.
Um, this is worrying. It doesn't help to call it artificial intelligence, right? Um, but, uh, you know, just sort of, when we think about it, um, 'back in the day, you uploaded photos so that anybody could see them' is a very different proposition than uploading photos so that anybody can process them. Um, and I mean, already, if you are at all worried about this and you post photos to social media, you are well advised to go in and take the location stuff off of your phone so that it's not embedding the location directly into the photo.
Um, but, you know, I bet you actually that that's the training data set for a lot of this stuff is, is just--
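[Editor's note: for listeners who want to act on that advice, here is a minimal sketch of stripping the embedded metadata, GPS coordinates included, from a photo before posting it. It assumes Python with the Pillow library; the filenames are placeholders. Note that this removes only the embedded EXIF data, not the visual location cues in the image itself, which are what the pattern-matching systems discussed here pick up on.]

from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    # Copy only the pixel data into a fresh image, which drops
    # every metadata block (EXIF, including any GPS tags).
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("photo.jpg", "photo_no_exif.jpg")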
Alex Hanna: I imagine there--or they might be taking a lot of these things from Google Street View too, um, and, and whatnot. And so, you know, that also, yeah, I mean, so, I'm worried very much. And I think they kind of mention this a little, uh, I mean, the kind of, I mean, issues about stalking, a little, a little higher in the, in the text, um, you know, and the kind of ways that this can be abused for that. And, uh, down, down one more.
Emily M. Bender: Down one more below the ad?
Alex Hanna: Yes. Yeah. And so they have a quote from some, uh, Jay Stanley, a senior policy analyst at the ACLU. Um, it says, "He worries that similar technology will be widely available."
It'll probably be widely available, "and used for government surveillance, corporate tracking, and even stalking." So yeah.
Emily M. Bender: Yeah. Yikes. Okay. Fresh AI Hell keeps moving. Um, "A new kind of AI copy can fully replicate famous people. The law is powerless." This is, um, from Politico. The tag is "department of the future." No, thank you.
Um, and the, this is written by Mohar Chatterjee. And the lead paragraph here is "Martin Seligman, the influential American psychologist, found himself pondering his legacy at a dinner party in San Francisco one late February evening." Can I just pause and say, every time I hear stories about dinner parties in San Francisco, aside from the people I know personally, they all sound horrendous.
Alex Hanna: I know, I, I really rarely, ever--I live in close proximity to San Francisco and avoid any dinner party invites.
Emily M. Bender: "The guest list was shorter than it used to be. Seligman is 81 and six of his colleagues had died in the early COVID years. His thinking had already left a profound mark on the field of positive psychology, but the closer he came to his own death, the more compelled he felt to help his work survive.
The next morning he received an unexpected email from an old graduate student, Yukun Zhao. His message was as simple as it was astonishing. Zhao's team had created a virtual Seligman."
Um, and so basically this is a chatbot trained on Seligman's own words. And so, um, it sounds like him, um, which like the, the sort of most AI Hell thing about this is Seligman's like, yeah, all right. I like it. It's okay. And it's just like, Oh, have some pride.
Chris Gilliard: Yeah. I'm consistently amazed by people's beliefs that people only consist of their public utterances. You know, like, even the headline buys into that hype, right, which is like, 'it reproduces individuals,' like, no it doesn't.
Emily M. Bender: All right. I think I'm going to, um, save these last couple of ones for a later one and take us to the, uh, good news here. Um, cause we really don't need to dog on this.
Alex Hanna: This is, this is great. We're all, we're all loving this one. This one is from the New York Times. "Goodbye for now to the robot that, sort of, in parentheses, patrolled New York's subway."
"The New York police department's Knightscope K5 debuted in Times Square amid fanfare from mayor from Mayor Eric Adams. It ended its brief tour exiled to a vacant storefront all alone." This is a kind of robot, this is, this is a kind of robot loneliness I can get behind. And it shows the image here is, um, you know, a, the, um, Uh, New York, uh, Police Department, if you've seen this robot, it kind of looks like a, um, kind of a, uh, what is the, the, um, white robot from, from Wally that's kind of like, it's kind of, um, but it's like that robot, but kind of, uh, opposite shape.
So it's, it's more wide at the bottom and at the top kind of, um--uh, yeah, yeah, EVE, EVE. Thank you, Christie, our producer, helping me. Um, and then it's alone in this kind of library space with wooden floors and wooden shelves, and the, uh, the sub--uh, and yeah, so, and it says the police department assigned--and then the caption on this is great, "The police department assigned officers to chaperone the K5, which could not navigate stairs and spent most of its time plugged into a charger."
And I was in New York in December, and just every time, it was in the Times Square station, and there were always two cops, um, uh, literally just babysitting it at all times. Um, so incredible. Uh, I think I just laughed at it and pointed, um, very loudly, like, look at this, what a waste of money.
Um, and if you're a New Yorker, you know that, uh, Eric Adams, a former cop himself, has, you know, invested so much more in policing technology, and, uh, you know, you can't go to the library now on, on Sundays. Is it Saturdays and Sundays? Maybe, maybe Christie will back me up now. Um, but it's, yeah, and they've been shutting down library hours.
Just Sundays. Okay. Thank you. Uh, yeah.
Emily M. Bender: Meanwhile, there's a 24-7 open Apple store near Times Square in New York, I discovered when I was there recently. Um, because when you have that many people living together, you know, you can support all kinds of things. If you actually spend the money wisely, not that the Apple store being open 24-7 is wise, but like, you know, libraries instead of helpless robo cops.
Chris Gilliard: Yeah.
Alex Hanna: Good Lord. All right.
Emily M. Bender: So, I mean, it's fun, it's fun to end on this picture of completely incompetent surveillance, but, um, we should keep in mind that it's, it's all, it's not all bumbling Daleks.
Alex Hanna: It's not all bumbling Daleks.
Chris Gilliard: Bumbling Dalek. Oh, yeah. Love it.
Emily M. Bender: All right. I think we should end. That's it for this week.
Dr. Chris Gilliard is a Just Tech Fellow at the Social Science Research Council. Thanks so much for joining us today.
Chris Gilliard: Oh, it's absolutely my pleasure. Thank you so much for inviting me.
Alex Hanna: It was so wonderful to have you on. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park. Production by Christie Taylor, and thanks as always to the Distributed AI Research Institute.
If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify, and by donating to DAIR at DAIR-Institute.org. That's D A I R hyphen institute dot org.
Emily M. Bender: Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream.
That's twitch.tv/dair_institute. Again, that's D A I R underscore institute. I'm Emily M. Bender.
Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.