Mystery AI Hype Theater 3000

Episode 12: It's All Hell, May 5, 2023

Emily M. Bender and Alex Hanna

Take a deep breath and join Alex and Emily in AI Hell itself, as they take down a month's worth of hype in a mere 60 minutes.

This episode aired on Friday, May 5, 2023.

Watch the video of this episode on PeerTube.

References:

Terrifying NEJM article on GPT-4 in medicine

“Healthcare professionals preferred ChatGPT 79% of the time”

Good thoughts from various experts in response

ChatGPT supposedly reading dental x-rays

Chatbots “need” therapists

CEO proposes AI therapist, removes proposal upon realizing there’s regulation:
https://twitter.com/BEASTMODE/status/1650013819693944833 (deleted)

ChatGPT is more carbon efficient than human writers

Asking disinformation machine for confirmation bias

GPT-4 glasses to tell you what to say on dates, "Charisma as a Service"

Context-aware fill for missing data

“Overemployed” with help from ChatGPT

Pakistani court uses GPT-4 in bail decision

ChatGPT in Peruvian and Mexican courts

Elon Musk’s deepfake defense

Elon Musk's TruthGPT

Fake interview in German publication revealed as “AI” at the end of the article


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

ALEX HANNA: Hello hello there. Welcome everybody to Mystery AI Hype Theater 3000, where we seek catharsis in the age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.  

EMILY M. BENDER: Along the way we've learned to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington. 

ALEX HANNA: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. Today is episode 12. The theme is "Welcome to Hell."  

Usually we have a segment called Fresh AI Hell, where we bring you some of the worst things we've seen this month. 

EMILY M. BENDER: But guess what, folks: we looked around, and it turns out it's all hell. 

ALEX HANNA: It's hell all the way down. 

EMILY M. BENDER: All right, we're gonna get there. I have a browser window full of tabs, about 19 different topics, we'll see if we can do it. 

ALEX HANNA: It's so many. 

EMILY M. BENDER: But before we get there we want to do something serious, um, which is that um I owe an apology--we owe an apology, I think. It's mostly me though, so I want to be the one to say it. In the last episode um we decided to go after somebody's self-presentation. Um, this is an apology to Sébastien Bubeck in particular.  

There's no call for going after someone's self presentation just like there's no call for going after someone's sort of inherent appearance um and I really want to thank the folks in our community who were keeping us honest about that through the stream chat. That's super valuable.  

ALEX HANNA: Yeah, thanks for keeping us on our p's and q's, so yeah, I appreciate it.

EMILY M. BENDER: There's a lot to get into. There's plenty of stuff to criticize, we don't need to go after anybody's appearance. Um so let me get this set up. 

ALEX HANNA: Should we transition to Hell? Hold on give me a second-- 

EMILY M. BENDER: Are you gonna take us into hell? 

ALEX HANNA: --taking us into Hell. Dun dun dun. We're now in hell. I'm very proud that I got this one to work, everyone. This is this is what you get, this is why they pay me the big bucks. All right let's do the thing. 

EMILY M. BENDER: Okay so here we go. We're gonna try to keep 'em quick. And number one, this is the article that appeared in the New England Journal of Medicine, um, which is a serious publication. The title is, "Benefits, Limits and Risks of GPT-4 as an AI Chatbot for Medicine." What do you think of that title Alex? 

ALEX HANNA: I mean it's pretty um it's pretty it's pretty incredible. I'm really curious what they're doing here. 

EMILY M. BENDER: Ah well--we're doing this quick because this is the Pure Hell episode, we've got to get to a lot of it. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: Um but they look at AI chatbots and medical applications and one of their applications is medical note taking.  

Um so here, the physician and the patient have had an appointment, and then the provider needs to actually produce a medical note. So the idea is um you use a speaker, like the smart speakers that can work with ambient sound. That thing is recording everything and then proposes a note for the physician to edit afterwards. So huge privacy implications already for patients, yeah. 

ALEX HANNA: But you're already using a smart speaker, I mean for one, and there's been a bit of reporting on having contractors, having different people at Amazon, Google, um, listen to those things directly. So-- 

EMILY M. BENDER: Yeah. 

ALEX HANNA: --those are certainly not uh you know useful for or allowable under um health privacy legislation. 

EMILY M. BENDER: Right exactly and then on top of that um they're making excuses for the mistakes that it makes. So this part just really like I felt my health was impacted by reading this article when I got to this part. 

"Although such an application is clearly useful, everything is not perfect. GPT-4 is an intelligence system that, similar to human reason, is fallible."  

ALEX HANNA: Yes I mean. Yeah right okay. 

EMILY M. BENDER: "For example, the medical note produced by GPT-4 that is shown in figure 2A states that the patient's BMI is 14.8. However the transcript contains no information that indicates how this BMI was calculated. Another example of a--" And here's this problematic word, "--hallucination." 

ALEX HANNA: I feel like this is such a such a term we just need to have a drinking game whenever it comes up. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: I mean because it is everything that's packed into the idea of hallucination, I mean especially if it's hallucinating just random numbers which happens more and more. 

EMILY M. BENDER: So their proposed solution here is to have GPT-4 go back and read it and correct it. Which like what? All right. So that was one we're doing this fast though, I've just got to show you one other thing. 

ALEX HANNA: Yeah we got to do this. 

EMILY M. BENDER: Yeah so curbside consult-- hold on I have to search for uh empathetic, no ah-- 

ALEX HANNA: Yeah AbstractTesseract--and our chat isn't appearing in the stream, I gotta debug-- 

EMILY M. BENDER: You debug that while I find this. 

ALEX HANNA: --but the Abstract Tesseract says, 'I don't believe any of the A's in HIPAA stand for AI but it could be wrong.'

EMILY M. BENDER: Nice. He's like rah-rah it's going to be so great, "GPT-4 is a work in progress and this article just barely scratches the surface of its capabilities."  

We need to add capabilities to the drinking game. 

ALEX HANNA: I think capabilities, yeah, I mean especially there's a few of these words that come up, everything that's always on the horizon, yeah.  

EMILY M. BENDER: These things are tools, we should talk about them in terms of functionalities, which I think foregrounds that you would like evaluate a functionality and see how well it works for the function it's intended for. But here's their examples of things that could be done: "It can for example write computer programs for processing and visualizing data, translate foreign languages, decipher explanation of benefits notices and laboratory tests for readers unfamiliar with the language used in each, and, perhaps controversially, write emotionally supportive notes to patients."  

ALEX HANNA: This last bit. Yeah, controversial for sure. I mean the thing about these things performing empathy is really something else, and it appears a little more in some of the later hell. But it's so fascinating that that's what we're doing--outsourcing empathy as a task for these tools. 

EMILY M. BENDER: Oh and and the whole thing about empathy is that it is you know sharing of feelings, not saying something that sounds like you're sharing the feeling. 

ALEX HANNA: Right, exactly. 

EMILY M. BENDER: So okay next. We're gonna go fast. 

ALEX HANNA: All right let me do this one. A new study. "We compare ChatGPT responses--"  and this is a tweet from Mark Dredze. "We compare ChatGPT responses to people's medical  questions with those of doctors. Health care professionals preferred ChatGPT 79 percent of the time as more empathetic and higher quality. I'm excited to find out how to use LLMs to help doctors."  

So yeah, this is also--is this published in the Journal of the American Medical Association?  

The link goes to-- 

EMILY M. BENDER: Looks like it, yeah.

ALEX HANNA: JAMA Network. Yeah. 

EMILY M. BENDER: Let's see, um I might not be able to access it but we can at least see where it is. 

ALEX HANNA: Just to see the abstract. 

EMILY M. BENDER: JAMA Internal Medicine, yeah. I'm mad, thank you. Um yeah.  So-- 

ALEX HANNA: This is this is in JAMA, yeah. 

EMILY M. BENDER: Yeah and this is another one of the ones where like the headline or in this case the tweet is pretty distant from what's actually going on. So if you look at the title of the article, it's, "Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum." So this is a text-based interaction um not necessarily patient-doctor in the first place, because it was I think it was Reddit.  

Um and um let's see if the LLM can answer well--you don't have access to like good ground truth as to what's actually going on with the patient. It's not a conversation, um it's not a very realistic thing and I'm I'm forming these ideas based on this really nice um set of responses that were collected in this thing called Science Media Centre.  

And you know we've--we've got links in the show notes when we're done. So all these things, the good, the bad and the ugly are gonna be um accessible.  

But they asked a bunch of folks, um, including I think physicians and also medical ethicists and similar, and some AI people, um, what their response was to this, and uh so here is Professor Martin Thomas: "As the authors explicitly recognized they looked at a very small sample of medical questions submitted to a public online forum and compared replies from doctors with what ChatGPT responded. Neither the doctor nor GPT had access to the patient's medical history or further context, and it wasn't the kind of interaction that doctors usually do." So um, anything else on this? 

ALEX HANNA: Yeah just some of the other statements kind of replicate things that we said. GPT has no medical uh quality control or accountability. LLMs are known to invent convincing answers that are untrue. Doctors are trained to spot rare conditions that might need urgent medical attention. And that's the kind of thing that we were talking about when we looked at Glass.ai. Basically if you try to talk about things that present, you're going to have this kind of, um, you know, imbalanced class problem, basically, in kind of supervised learning. It's going to give you the thing that is most prevalent in the data, right, it's not going to tell you if there's something that's a little more rare to spot. And so interventions are going to differ drastically.  

EMILY M. BENDER: Yeah there's a comment here from Trochee that I want to bring up, which is, "MDs should not be answering Reddit questions either." Indeed.

ALEX HANNA: I mean truly, yeah. 

EMILY M. BENDER: So the the question that I have is why were they--sorry that's the next thing. Why were the study authors asking this question in particular? Like what did they hope to learn or prove by setting up this particular experiment? Um it seems like not not a direction towards LLMs helping doctors. Maybe it's better framed in the JAMA Network thing but it's certainly not well framed in this tweet. 

ALEX HANNA: Mmm-hmm.

EMILY M. BENDER: Okay, next. So this is a um peer-reviewed article, it's got this peer-reviewed badge, in a journal called Cureus, c-u-r-e-u-s, which is billed as part of Springer Nature, um, and the title is "ChatGPT in Dentistry: A Comprehensive Review." And I clicked through because I'm like, how are the dentists imagining that they're using ChatGPT? 

And I get as far as the abstract. "Chat generative pre-trained transformer (ChatGPT)--" I like how they gave it his full name. "--is an artificial intelligence chatbot that uses natural language processing that can respond to human input in a conversational manner. There's numerous applications."  

Um and then one of these is, "In the dental field it has provided many benefits such as detecting  dental and maxillofacial abnormalities on panoramic radiographs, and identifying different dental restorations." In other words they're claiming that ChatGPT can read x-rays.  

ALEX HANNA: This is just really incredible, I mean I think--and this is getting at the way that different fields are really running to identify how this can be used, and they're just putting anything on it. Just slap a GPT on it, and that's kind of what's happening. That sounds like great merch--let's get a shirt that says "slap a GPT on it."  

But you know, and so you said, Emily, down in the text what they actually are thinking is that they're just using ChatGPT as kind of a thing that means any kind of AI or any kind of computer vision, because yeah, computer vision could possibly be used to classify certain kinds of things, like that seems like a doable thing, um, to identify like cavities, yeah. And so they say, and you've selected, "Additionally AI can automatically classify dental restorations on panoramic radiographs and detect dental and maxillofacial abnormalities, such as periodontal diseases." Yeah. 

EMILY M. BENDER: Right so I think what happened here--and this 32, by the way, is another review article, it's not actually the experiment that was done--but I think someone decided that ChatGPT and AI are synonyms, and so they wrote that ridiculous thing in the abstract and it wasn't caught by anyone. And it was um Vukosi Marivate who clicked through on this part and discovered um that this thing--peer review began on April 26 2023 and was concluded on April 27 2023. Which means this thing was not peer-reviewed. Even though it has the peer review badge. And it left me wondering, like, did someone just run it into ChatGPT to get a review out? Like. 

ALEX HANNA: Hey, it's possible. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: Some folks in the comments are saying, "*slaps top of car* this bad boy can fit so much salami in it." And Christie our producer said something similar. So it can hold so many GPTs. Oh dear.  

EMILY M. BENDER: That's that's the merch that I want. 

ALEX HANNA: I know. 

EMILY M. BENDER: Okay still in the health area-- 

ALEX HANNA: Oh my gosh. 

EMILY M. BENDER: --you want to read this one to us? 

ALEX HANNA: Yeah, this is an arXiv paper so again not peer-reviewed um-- 

EMILY M. BENDER: And not even nominally peer-reviewed. 

ALEX HANNA: No and this is um, the paper's title is, "Towards Healthy AI: Large Language Models Need Therapists Too."  

Um yeah so what it's proposing is, uh, they say, "We present SafeguardGPT framework that uses psychotherapy to correct for these harmful behaviors in AI chatbots, which includes four types of AI agents: a chatbot, a user, a therapist and a critic."  

Um so this is incredible. This is like someone that watched--you know, like saw a TV show about therapy and thought, yeah, chatbots could stop quote hallucinating if we just gave them what they needed. Hurt models...hurt models. [Laughter]

It's just I'm curious on the um the professions of the people. Can you click the PDF? 

EMILY M. BENDER: Yeah, let's click the PDF and see where these folks are. 

ALEX HANNA: Because I'm interested if they are computer scientists or if they are um in the medical profession. 

EMILY M. BENDER: arXiv's being slow, yeah. And just while we're waiting for that to come up um I want to raise something up from um the chat .

So Ethnomusicologist says, "Isn't reading and interpreting medical imagery a deeply complex  mix of experience training and art?" Yes um I think that there may be some applications of computer vision in assisting radiologists um but there's also all these examples of like the computer vision system detected whether or not the patient was lying down or sitting up and decided that all the ones lying down actually have whatever it was that they were looking for, because yeah so there's a lot of software that picks up on spurious cues. 

ALEX HANNA: I would say computer vision is probably well suited for some of these things, but it's also curious to see like where--I don't know any of the uses of that systematically in dentistry. 

Okay so we're looking at the title of this so one asking who are these people, what do they study. 

ALEX HANNA: One author from Columbia, three from IBM Research. Um what's the footnote say here on on this first author, Lin? Uh oh it's not-- 

EMILY M. BENDER: It's not there? 

ALEX HANNA: There's a star next to their name but it doesn't say. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: Is it at the bottom? 

EMILY M. BENDER: At the very bottom of the paper? 

ALEX HANNA: How are we gonna do yeah--we don't have time to go to the bottom of the paper. 

EMILY M. BENDER: No, no. It's just a mystery. 

ALEX HANNA: It's just a mystery star. 

EMILY M. BENDER: They heard that we're reading the footnotes and they prevented us from doing it. 

ALEX HANNA: How dare. Okay all right let's let's let's move on. 

EMILY M. BENDER: Yeah okay, so backing up just to say one or two more things about this, um. Again, why ask this question and then why frame it this way? So I guess the problem they're trying to solve is they're trying to make the chatbots--have, uh, these chats, right, you said, "it can be potentially harmful, exhibiting manipulative, gaslighting, and narcissistic behaviors." No, none of that. What the chatbots do is they synthesize text. But people might interpret it as those things, and they would like less of that, and so they somehow think that using ideas from therapy that might work with humans is going to help these. 

It's ridiculous. And then the the title is terrible. Large language models do not need therapists.  

Okay. 

ALEX HANNA: And the behaviors are [unintelligible] "manipulative, gaslighting, and narcissistic behaviors" which again is making this equivalence between human agents and a sort of human psychology, and these machine pattern matchers. Yeah. 

EMILY M. BENDER: Yeah. Okay. 

ALEX HANNA: All right. 

EMILY M. BENDER: So still in the therapy space. I need to tell this story so there's a--the top of this thread here has, "This tweet was deleted by the tweet author."   

And then a Twitter user with the handle NecroKuma3 tagged me and Timnit in, saying "This is  completely unethical and even dangerous. They are playing with people's mental health." What was it?  

Um so NecroKuma also grabbed a screen cap. Someone with the Twitter handle BEASTMODE, all caps, says, "Introducing AI therapy on @ForeverVoices, with AI therapist Sasha. Using realistic two-way audio, Sasha specializes in CBT, DBT, and mindfulness meditation. Let me know what you think." 

And then previously it was a video. The screencap just shows a picture of a you know conventionally attractive probably white woman with long hair that's not styled professionally,  let's say, it's much more for going out. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: And then the script was something like she was--is back and forth with this supposed user of the system and she's explaining what CBT is and DBT is. And like he had literally set this up as a come do um uh you know come do therapy with the chatbot. And  Forever Voices is not a therapy outfit. They create these characters to do lots of different things.  

So. I thought hmm, um and this is me on Twitter on April 24th. "Indeed psychotherapists tend to require certification at the state level. I can't even find a web page that says what locality that claims to be operating in, but @BEASTMODE is in Texas." And then um I said, "So this seems quite relevant."  

And posted the link for the relevant certification board in Texas's fraud reporting line. And um right after that the initial tweet went down. So that felt like a win. 

ALEX HANNA: You might have--and I do want to say a few things about this. First off, how dare this man take the handle "Beast Mode?" Disrespectful to Marshawn Lynch, first off.  

Um secondly, um, I mean, again I think there's something happening here, as with the prior one, where there's a certain kind of weird imaginary of what therapy is. And there's writing a little bit on this growth of remote therapy and the problems with that, um. This reminds me a little bit of um Hannah Zeavin's book, um, I think the title is--the remote--"The Remote Cure"?  

Um but it's about this, uh, or it's on, it's sort of the rise of telemedicine. And so it seems to be sort of the natural direction of these things, moving to become more automatable or app-able or whatever. But knowing that these are things that require licensure and have licensing regimes, with whatever problems licensing regimes have, it's also the case that you can't jump in and say anyone's a therapist, right? 

EMILY M. BENDER: Yeah. And any machine's a therapist. Okay one last thing that we've got to keep moving. I'm noticing that Twitter here is displaying um the bio for Forever Voices, because they're tagged in this conversation, um and you know, "[microphone emoji] Experience the magic of engaging in two-way voice conversations with iconic legends such as Steve Jobs, Taylor Swift, and more, right through your phone."  

That's super sketchy. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: I'm sure the family of Steve Jobs has not authorized that. Really doubt Taylor Swift has. 

ALEX HANNA: I would love to do that sort of thing and get my friend Tamara Kneese on the show, who's written a lot about these kinds of post-death, like, um, memorializations and AI. Because she has a whole book on it that's coming out, that's about sort of, um, death and maintaining this. Um, uh. 

EMILY M. BENDER: We got to move on. 

ALEX HANNA: A few things--yeah, let's move on. In the chat RuthStarkman: "As an engineer who serves medical researchers I do see a lot of administrative pressure to use these technologies, a heavy push uh to get grants even though there's no clear use for the tech." And yeah, helpful to basically point that out. 

EMILY M. BENDER: Yeah that's a problem that needs solving at more of a systemic level. And I hope that by being loud with the, you know, anti-AI-hype we can maybe um claw back a bit of the enthusiasm, and the government-funding-following-VC-funding pattern that we're in.

ALEX HANNA: Yeah, yeah. All right, we're almost there, I can see the finish line. 

ALEX HANNA: Oh yeah, go to the top of the article.

EMILY M. BENDER: You get to lead on this one. 

ALEX HANNA: Oh gosh all right, so this article is called, "The Carbon Emissions of Writing and Illustrating are Lower for AI than for Humans."  

Um and it's written by a few people, I don't know their disciplines. Is there an abstract on this one?

EMILY M. BENDER: Yes. We have an abstract. And we have their we have their affiliations. 

ALEX HANNA: Oh they're all computer scientists and informatics or management scholars. "As AI assistants proliferate, their greenhouse gas emissions are increasingly important for human societies. In this article we present a comparative analysis of carbon emissions associated with AI systems." I mean cool, I like that. But then here's the twist: they're not comparing the AI systems to each other, they're comparing them to human individuals performing equivalent writing and illustrating tasks. 

"Our findings reveal that AI systems emit between 130 and 1500 times less carbon per page of text generated to human writers, while AI illustration systems emit between 310 and 2900 times less CO2 per image than their human counterparts."  

Um okay what they're basically saying is that the cost of AI systems generating this is much less than the cost of humans living and working. So there's a great chart down here. 

EMILY M. BENDER: Yeah let's get the chart, sorry the chart that was real too early. 

ALEX HANNA: Yeah, this is the chart. Figure 1 shows that, and this is on a logarithmic scale, BLOOM--which is the, um, BigScience model--writing one page takes, you know, this amount. ChatGPT--which I don't even know how they came up with these ChatGPT estimates, since we haven't seen um these estimates anywhere--takes a little more. 

Huge jump for a laptop computer, then a desktop, uh, computer, then a human from India writing one page, and a human from the US writing one page.  

So if you are a human and you are living you are taking up a lot of carbon and while that is true uh it it--just like this boggles the mind.

Like there's not humans actually like typing 1800 prompts for ChatGPT, this like okay this is a ridiculous comparison. 

EMILY M. BENDER: Yeah like basically you know humans are not optional, we are here. You know we can look to reducing our individual carbon footprints for sure. Um ChatGPT is fully optional. And so I have to I have to do one last thing from this which is in the end, they have their acknowledgment section, or was it methods um let's see, uh--

ALEX HANNA: Yeah, "As part of this research, we utilize ChatGPT--" 

EMILY M. BENDER: Yeah so, "--to support the drafting and editing of sections within this article. Nevertheless the core scientific work, including data analysis, calculations, and conclusions was carried out by the authors. The authors carefully edited all AI-generated text to ensure the quality of writing remained high.  

Our usage of AI aligns with the findings of the study, since incorporating AI into the writing  process can be an environmentally sound decision when managed responsibly." 

ALEX HANNA: Wait, so like, I'm so confused. So did they just not eat or live or drive? Like, make it make sense, Emily, did they just hold their breath and fast while they were writing this article so all the carbon--did they shiver in the cold? 

Make it make sense. 

EMILY M. BENDER: My guess is that they probably refrained from running the gas stove in their own home and had restaurant food delivered. 

ALEX HANNA: Got it.  

EMILY M. BENDER: That would be very consistent with their thought processes. 

ALEX HANNA: Much cheaper, great great I love it.  

EMILY M. BENDER: All right next. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: So this is a tweet from @DrMisterCody who's screen capped a bunch of tweets that we wanted to share--um oops come back--uh so this is in the context of that tragic death of um that uh tech person in San Francisco whose name I don't remember off the top of my head. But that's like the sort of the discourse that this lands in. Um do you want to read these or you want me to do it? 

ALEX HANNA: No you--I'll I'll comment on them okay yeah because living in the Bay Area I feel probably able to talk about this stuff. 

EMILY M. BENDER: Okay so this is um @WizLikeWizard says, "Seems like SF has highest violent crime of 10 major cities." Uh quote tweeting themself, um, with a table, um, where--so it's violent crime per 100,000 inhabitants, rank 1 city, San Francisco. And then a number. Two is New York, three is Los Angeles, four is Chicago, but that number is actually higher than the number for Los Angeles. Five is Houston, which is actually the third highest number. And six is Miami, which is actually the highest number, so the ranking makes no sense.  

And then it says, "Source: FBI Uniform Crime Reporting UCR Program 2019 via ChatGPT." And then there's a couple more of these. 

ALEX HANNA: "ChatGPT sorted that incorrectly. SF should be second and then there it's resorted, and the numbers are different. Murder rate for 100,000 inhabitants. Now these are much smaller numbers um and San Francisco is fifth.

EMILY M. BENDER: Um, uh, maybe that's actually a real table this time, I don't know? 

ALEX HANNA: That's that's not a real table. 

EMILY M. BENDER: That's not a real table either? 

ALEX HANNA: This is this is this is the ChatGPT made up table.  

EMILY M. BENDER: It's another one because the first one was also a ChatGPT made up too. 

ALEX HANNA: Yeah and then this one is the and this this one--and then the next one is the actual actual homicide per capita. 

EMILY M. BENDER: Okay.  

ALEX HANNA: Um so San Francisco actually ends up here--so I think the first one I don't see the top of this image but San Francisco is near the bottom of this, equivalent with Mesa Arizona, Seattle-- 

EMILY M. BENDER: Below Seattle.

ALEX HANNA: Yeah. So yeah. 

EMILY M. BENDER: But then um so someone says, "Is this the source data or the ChatGPT output?" And @WizLikeWizard says, "ChatGPT output." 

ALEX HANNA: Yeah, so ChatGPT is making up the data here, and this is incredible, because--and I want to say just what kind of thing this is doing--because this supports so many different narratives. Right, I mean there's this moral panic now in San Francisco, you know, which has justified incredible amounts of policing in San Francisco, by the mayor, um, by the Board of Supervisors, that had led to the recall of the progressive DA Chesa Boudin.  

And so it's this very San Francisco-speak. Also, when they re-sorted it, Chicago popped up to the top, which also is just supporting this Trumpy narrative of Chicago being this kind of murder capital of the world. And so, I mean, it's just incredible--the data is just completely made up, and it's supporting this very Silicon Valley narrative of San Francisco as this den of crime that needs to be--and I mean, this shit is consequential, y'all. Like, the kind of things that they're talking about in San Francisco is effectively returning to a War on Drugs kind of policing style because of fentanyl in the city.  

And if you know you're just posting these things--and I mean UCR reports are fucked up to start with there's a there's a lot of people that have written about the um how crime data is messed up. Tamara Nopper comes to mind as talking about how crime data is generated, especially UCR reports. But at least acknowledge that source of made-up-ness and not add another layer onto it. 

EMILY M. BENDER: Yeah and it's so like it's it's so sort of self-serving like I'm--please, feed my confirmation bias ChatGPT. Give me give me tables of numbers that you know say what I want them to say and so that I can drop this fake data into this completely messed-up discourse. All right um okay.

ALEX HANNA: It's all hell. All right, we're halfway through, we're halfway through the hell. I don't--if I could, I would put more flames on the screen. But I don't think it would help. 

EMILY M. BENDER: This next one is a flavor of hell, let's say. We don't need that ad. So. 

ALEX HANNA: Oh Lord. 

EMILY M. BENDER: I just said--this is from a publication called The Byte, which is poorly edited by the way,  

"New smart glasses tell you what to say on dates using GPT-4," and then, "Say goodbye to awkward dates and job interviews." This would make it definitely less awkward. 

ALEX HANNA: I know why don't you have a--I mean already if you're wearing smart glasses on a date we already know it's it's not going to go well.  

What is this? Oh, oh holy shit, I didn't read the copy. Um, a team--"Charisma as a Service," oh my lord. "A team of crafty student researchers at Stanford--" Naturally. Hashtag #AbolishStanford. "--have come up with a pair of smart glasses that can display the output of OpenAI's ChatGPT large language model, potentially giving you a leg up during your next job interview, or even coaching you during your next date. The device, dubbed rizzGPT--" 

Like what? "--offers that's where I quote "real-time Charisma as a Service (CaaS) and listens to your conversation and tells you exactly what to say next," as Stanford student Bryan Hau-Ping Chiang explained in a recent tweet." Oh yeah. Oh my gosh.  

EMILY M. BENDER: Here's Bryan Hau-Ping Chiang's tweet: "Say goodbye to awkward dates and job interviews. (Frowny face) We made rizzGPT, a real-time Charisma as a Service (CaaS). It listens to your conversation and tells you what to say next. (The shocked face emoji) Built using GPT-4, Whisper and the Monocle AR glasses." 

And then this is just too painful to watch that people can click through on the show notes and watch. It it's a video of um someone asking him interview questions and then he's like reading the responses that this thing is giving, but like getting stuck and coming out with completely unnatural intonation. 

And you know again like you're going out on a date with someone and the first thing they do is show you that they are going to like radically violate your privacy by you know sending what you're saying out to some cloud service so that they can then tell you fake things that aren't what like--why?  

ALEX HANNA: Cody in the comments says, "I'm almost angry with y'all for exposing me to this." Yeah, sorry.  

EMILY M. BENDER: Yeah I mean I think that the the show, the pure hell was was the trigger warning right? Okay. 

ALEX HANNA: Yeah okay. 

EMILY M. BENDER: You get to do this one oh, you're so mad at it. 

ALEX HANNA: Yeah, this made me furious. So James Meickle, who's a researcher, uh, tweeting a screenshot saying, "I fucking hate this, man. I don't want my data pipeline vendor doing this ever. Really regretting bringing this tool into my job." 

It is this um tool called Ask Marvin's AI, um and the tweet says, "Photoshop has context-aware fill to seamlessly replace unwanted parts of your image. Shouldn't data have the same? Introducing context--" 

EMILY M. BENDER: No. 

ALEX HANNA: Absolutely, it shouldn't. "--introducing context-aware fillna." And fillna, for those listening, is all one word, and it is this pandas, um, uh, function--pandas being the Python data analysis package. "--to seamlessly replace missing values in your data. Part of @AskMarvinAI's new AI functions for data." And if you scroll down, um, I think there's a gif on what it is, but basically what it's doing is, they have Nones in the data, so the data frame says, "The Terminator, 1984," and the director field is missing.   

Okay, so maybe for something like this where you know like you could get something from Wikipedia, is uh is is is easy enough. But you are basically doing imputation using something, we don't know what it is, and there are good imputation methods and some stuff can be imputed. Like that is a field of research, you are making up imputations for your data. Sometimes there's a reason that there's missing data. Sometimes it means there's a fucking problem in your data, man, don't use this.

EMILY M. BENDER: All right next. This one's short. 

ALEX HANNA: There's lots of people in chat going Sad Panda. Yes.  

EMILY M. BENDER: All right um some nice reporting from Vice, um I want to see who the author is here. Maxwell Strachan. I was looking because um Chloe Xiang does excellent reporting um on Vice too, I really like her work. 

Um okay, headline here: "Overemployed hustlers exploit ChatGPT to take on even more full-time jobs." "'ChatGPT does like 80 percent of my job,' said one worker. Another is holding the line at four robot-performed jobs. 'Five would be overkill.'" 

ALEX HANNA: Four is okay. Five? Too much. I--just this image. Incredible stock image, it's just, uh--it's like someone typed in 'hacker man,' and they've got six screens and they've all got like--three have code on them, the fourth has just a hex dump, and the guy is literally wearing, um, uh, what's that called? 

EMILY M. BENDER: Black ski mask.   

ALEX HANNA: Black ski mask, like, typing, and just big "I'm in!" vibes. Um, but actually he's just overemployed, just doing a bunch of jobs. 

EMILY M. BENDER: What I like about this is the six screens there, but the one that he's actually paying attention to is a small little laptop in front of all of it. So I guess it gives this like, 'ChatGPT's handling all the rest of it and then I'm just, you know, sort of doing little bits here and there.' 

ALEX HANNA: It looks like he's just watching the progress bar actually. 

EMILY M. BENDER: Yeah yeah come to think of it although he's got his hands on the keyboard. 

Um yeah okay so people doing multiple jobs at the same time because working remotely you can maybe pull that off um and now it's even easier because you get ChatGPT to do your work for you. That's got to fall apart at some point um. 

ALEX HANNA: Yeah I mean it's just it's part of this whole-- 

EMILY M. BENDER: Sorry. 

ALEX HANNA: --there was a gross image that kind of kind of came over there. But yeah I also want to say I mean it's just part of this like hustle I mean this kind of hustle economy kind of thing right? I mean you're going to--

I mean, it's that sort of thing, and I don't know if we have this in hell this week, but I'd love to talk about it next time we do this: the WGA strike, the Writers Guild of America strike, and a lot of the discussion that's been going around. 

EMILY M. BENDER: Yes. If we go fast enough we'll get to it. 

ALEX HANNA: We'll get to it, okay. Great. But it's definitely thinking about the hustle economy, and like how these things are not going to make your jobs easier, they're going to depress wages for everybody and then make you take four jobs using ChatGPT. 

EMILY M. BENDER: Yeah. All right, so we've got a couple in here about um courts in various countries using ChatGPT-4 to get information, make decisions? I don't know, like, why you would want to go to the text synthesis machine in legal matters is beyond me, so you know we had a whole episode with Kendra that was amazing, um, and of course that's not enough to stop this in the world. So the headline on this one is, "Pakistani court utilizes ChatGPT-4 to grant bail in a juvenile kidnapping case."  

"The court claimed that after posing a number of inquiries to the AI chatbot it was pleased with the responses received." All right this harkens back to the like, tell me what I want to hear.  

Right there's a it's a, um you know what you put into one of these things shapes what comes back out. And it's not fully deterministic, obviously, because you put the same question in multiple times you get different answers, but it absolutely points it in a direction. Um and we don't need that interacting with automation bias in our legal systems anywhere in the world. That was Pakistan. We also have--you wanna take this one? 

ALEX HANNA: Yeah this one's from Tech Policy Press, um, and the title is, "Judges and magistrates in Peru and Mexico have ChatGPT fever." Uh this is written by a legal, public policy and AI scholar, uh, Juan David Gutierrez, assistant professor at Universidad del Rosario in Colombia, um, and he describes this as, "A judge in Peru and a magistrate in Mexico claim they used ChatGPT to motivate a second instance ruling and to illustrate arguments in a court hearing, respectively." Uh, okay. 

EMILY M. BENDER: Yeah. And then the news reporters in Peru and Mexico aren't better than ours here, so they frame the use of ChatGPT in judicial proceedings as a positive innovation.  

ALEX HANNA: Yeah, and--I know nothing about the court systems in Peru and Mexico, um, but yeah, there was also this reporting that he had about the judge and magistrate in Colombia that had used ChatGPT to draft judicial decisions. And we've talked about this before too. Uh, just incredible. And I mean, drafting decisions already seems beyond the pale, and then you're going into different places where these decisions become more and more consequential.  

Um yeah yeah. 

EMILY M. BENDER: So you know there's desperate need here for broad public education so that people in these consequential decision making um positions know better than to look to this and you know you can I think to a certain extent you can forgive someone whose expertise is in--you know they're a judge or a magistrate they've got deep expertise in a certain area. They're not computer scientists, they--you know all of a sudden are getting this tsunami of hype coming over them, and um I think these are bad decisions they were making, I think it's bad that the press was framing it as a positive innovation, but also a whole lot of the blame here sits with  the people who are doing the AI hype. Because they make this sound plausible. 

ALEX HANNA: Yeah. Lord. All right, what's next?

EMILY M. BENDER: You wanna check the chat?

ALEX HANNA: No, just like--I'm getting the vapors here. Oh, this one. So Dr. Damien P. Williams, uh, whose current title is great and evergreen: "Dr. Damien P. Williams warned you about exactly this."  

Um but he says, "Well, here we go." And the quote is from The Verge. "Tesla lawyers claim Elon Musk's past statements about self-driving safety could just be deepfakes."  

[Laughter] Uh, and Damien's comment says, "Also? It was entirely predictable that the first high-profile use of this defense would come from this man or another man just like him." So yeah, this is--this is more accountability hell than anything. But sort of, yeah--I didn't actually say that, that was a fake, dude. 

EMILY M. BENDER: And it goes right to the thing about, once you have non-information polluting the information ecosystem, then it becomes harder to trust the things that are real when they're real. And of course Elon Musk would try to take advantage of that. Um, so Dr. Damien P. Williams had a wonderful follow-up that you read better than I do, so go for it Alex. 

ALEX HANNA: Oh yeah. 

EMILY M. BENDER: He quote tweets himself. 

ALEX HANNA: Yeah he says, it says, "Epistemic crisis go brrrrrrr." 

EMILY M. BENDER: Which is brilliant.

Um and like you know great commentary on something tragic. Okay we're flying through, this we're gonna get--ugh, more Musk. 

ALEX HANNA: More Musk. I got to read the last one, you do this one. 

EMILY M. BENDER: Okay so this is a Twitter user, Sawyer Merritt. Um, "NEWS: @ElonMusk says he's creating a maximum truth-seeking AI chatbot called TruthGPT that tries to understand the nature of the universe." And then in quotes, "An AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe." 

ALEX HANNA: And the video itself--have you watched the video? And it's just you know like I  won't insult his appearance but I will insult his like his like white South African you know extractive mining ass accent. I feel like Elon Musk is open season um. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: And so, you know, the way he sort of said it was, like, so reverent, and it was less like, "We're an interesting part of the universe," and more like, "I'm an interesting part of the universe and an AI should care about me."  

Um and you know yeah. 

EMILY M. BENDER: This is, like I said, you know, more of the longtermist existential risk bullshit, right? Which is slightly different from what I thought TruthGPT was going to be. I thought TruthGPT, when I saw that handle, was going to be, you know, like Trump's Truth Social, it was going to be like the non-politically correct version of it. Like for the people who think that ChatGPT is, um--I'm not going to say it, because I dislike the way that word gets used that way, um, but you know where I was going. Um but no, it's not that, it's this idea that somehow um a language model, that is a system that is trained to predict the next word in text, can somehow be something that can discover truth? 

ALEX HANNA: Truth yeah. 

EMILY M. BENDER: Maximum truth--like that that's a thing that can be discovered, and it can be--so that's the first problem. Second problem is it can be discovered by predicting the next word in text with more and more text, and then the third thing is that we're worried about AI annihilating humans so we're going to build an AI, and then the fourth giant leap here is that, but humans are interesting, so a truth-seeking one surely will like--it just-- 

ALEX HANNA: It will do that, yeah. And I mean that's--like one, very I mean a lot of the people  who are--it's kind of interesting because a lot of the people who criticize say, us, they'll say well you're being very anthropocentric. And it's not it's like well no not actually like like-- 

EMILY M. BENDER: Yeah. 

ALEX HANNA: --it's actually like, it'd be better if these things didn't exist and we would probably have a more vibrant ecosystem in the world in which humans were one part, than to say we need to build these superintelligent things and we will be eliminated, because, um, we're not. And I'm just--I'm so confused by this type of argument, um, yeah. But this kind of idea of "truth" is just so bizarre, this kind of notion of truth. And Viidalepp in the chat says, "The concept of TruthGPT is a total semiotic failure. There's no truth, every current truth is the result of social negotiations and can and will be renegotiated sooner rather than later." 

And I don't want to get too deep into the epistemic rabbit hole here, but what that's saying, at least, is that yeah, trying to find this thing 'truth,' or calling something truth, is a very right-wing type of move. You know, the Truth Social sort of idea, without acknowledging the contextual and negotiated nature of these interactions. 

EMILY M. BENDER: My favorite Truth Social factoid is that if you um boost someone's post you have "re-truthed" it. It just really speaks to how, you know, deciding what is truth is a social negotiation and so on. Okay, also very ironic that this technology, which is being used to pollute the information ecosystem and make it harder to find trustworthy sources, is somehow a thing that's going to be finding truth. Here is a terrible example of that. 

And this is some reporting in NPR about an article that appeared in a German newspaper--German magazine. And the title is, "A magazine touted Michael Schumacher's first interview in years. It was actually AI." And the story here, um, "A German tabloid magazine raised hopes and eyebrows earlier this month when it published what it called the quote 'first interview' with Michael Schumacher, the race car legend who hasn't spoken publicly since suffering a near-fatal brain injury in December 2013.  

The April 15th 'Die Aktuelle' article featured quotes purportedly from the German athlete  discussing his medical condition and life after his skiing accident, the kind of information that  his family has fought to keep private for nearly a decade. The big reveal came at the very end: 'Did Michael Schumacher actually say everything himself?' the article concludes, according to 'The Independent.'  

'The interview was online on a page that has to do with artificial intelligence, or AI for short.'  

And I'm just--this is heartbreaking. 

ALEX HANNA: Yeah that's really awful, and I mean, just a complete abrogation of journalistic ethics here. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: And just like making up a whole thing um against the wishes of of this guy and his family. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: All right. Move on. 

EMILY M. BENDER: Okay next. 

ALEX HANNA: We've got 10 minutes. 

EMILY M. BENDER: Yeah.  

Um, so. You can have this one. For 10 minutes! We can do it, get through all these. 

ALEX HANNA: Yeah so, uh, so this is a tweet from the Washington Post. "As an award-winning vocal artist, Remie Michelle Clarke's smooth Irish accent backs ads for Mazda and MasterCard, and is the sound of Microsoft search engine Bing in Ireland.  

But in January her sound engineer told Michelle Clarke he'd found a voice that sounded uncannily like hers someplace unexpected: on a website called Revoicer, credited to a woman named Olivia."  

And so um yeah, so it seems to be the case here that they basically trained a voice synthesis model um to generate her voice--um, someone who makes her living off this, and has basically provided sufficient amounts of training data to replace herself on the market. So already you're getting into a place where it's a violation of artists, you know, where they're making their money, without their consent, um, and without remuneration, um, which is just happening all over the place here. I mean, we talked about this a bit in the AI art episode; this is becoming the case more and more. 

EMILY M. BENDER: Which brings us to the writer strike. 

ALEX HANNA: Yes.

EMILY M. BENDER: So from Hari Kunzru, "Writers in other fields should be paying attention to the WGA negotiations. Here's the current position on the use of AI--" And because that says current, I just want to flag that this is a tweet from May 2nd, um, so we're getting close to the most current hell here.  

So this is a little breakout of one of their documents, I think reporting on negotiations, um subheading 'Artificial Intelligence.' On the left hand side um is the um the WGA proposal. "Regulate use of artificial intelligence on MBA-covered projects. AI can't write or rewrite literary  material, can't be used as source material, and MBA-covered material can't be used to train AI."  

These sound like wonderfully sensible positions, I'm so excited um that the folks at the WGA are on top of this and have articulated these positions. Unfortunately the right hand side:  

"Rejected our proposal. Countered by offering annual meetings to discuss advancements in technology."  

ALEX HANNA: Yeah, and so this is interesting. I was talking to someone the other day, and I think it was spurred by the discussion--and someone in the chat, Abstract Tesseract, says, "No ScabGPT." And I think it was when there was a strike of food workers at the San Francisco Airport, and there's this very annoying thing at the San Francisco Airport, it's called Cafe X, and it's this like stupid robot arm that dances, and it's really a glorified vending machine that makes coffee.  

Like one of those vending machines like in Better Call Saul, you know like it's really a glori--and  they still have to set the sugar off to the side, and someone still has to tend to it like a little  fucking baby and like and and and it's sort of--and I was like, can robots be scabs? And I think it was um I want to say it was Erica Robles-Anderson said something like, well the threat of AI is always the threat of scabbing and the threat of job replacement. 

The threat of automation is always doing that. And so big kudos to the Writers Guild for really seeing that and making it part of their negotiations. And I'm going to drop in the chat, uh, this picket sign by um Ify Nwadiwe, who is a comedian I enjoy, and he's got a picket sign that says, uh, "Allen Iverson is the only AI I mess with." So um, I just love that picket sign. 

EMILY M. BENDER: Nice. Okay next. "Bill Gates: ChatGPT will soon be able to replace teachers." 

ALEX HANNA: Oh my gosh. 

EMILY M. BENDER: And and the phrasing of this was so infuriating. So, "Artificial intelligence parentheses (AI) has been an interesting topic for many years." What a stupid intro. "Bill Gates, the co-founder of Microsoft, is a strong advocate for its benefits. He believes that generative AI models like ChatGPT will be able to serve as teachers and perform as well as any human in the near future." Um and then down here um, he says, repeats this, "he believes that AI powered chatbots will become as good as any human tutor--"  

Um no where's the thing? Oh: "This would be a more economical solution for many disadvantaged parents who cannot afford a human teacher." Like wait a minute public education, right? It's not on parents to afford teachers, it's on society to provide education. 

ALEX HANNA: But this is also very much within the Gates vision too. And I want to shout out some reporting on the, um, what is it, the Bridge schools that were funded by Gates? Um, that were prevalent in many parts of sub-Saharan Africa, you know, basically. And it was established by, I want to say, two TFA--Teach for America--alums that were basically saying, we can educate people at such an economical cost, um, but these are schools that are incredibly formulaic and rife with sexual abuse.  

There's an article in, what is it, The Intercept, a really long-form piece on this that's indicative, that's really helpful. So I mean, it gets back to that Sam Altman tweet that people were ragging on, that, like, AI is going to replace doctors and teachers. And what that's saying is that, you know, the poors are going to get AI and the rich people are actually going to get empathetic, you know, human contact.  

Um so real amazing dystopia that we're building here. 

EMILY M. BENDER: Yeah, ugh. Okay.

Got a couple of tabs here on Geoffrey Hinton's x-risk media tour, um. And uh so Hadjar Homaei:  

"Asked Jake Tepper to ask Hinton on CNN about his lack of support for Timnit Gebru and Meg Mitchell and Meredith Whittaker and other whistleblowers. And his response--" Hinton's response. "--is that their concerns were less existential than his and also that it's easier to voice concerns if you leave first." 

Which, just, I'm doing this quickly.  

ALEX HANNA: There's a lot to this. He also missed--was like misgendering people like like-- 

EMILY M. BENDER: No that was that was the CNN guy was misgendering. 

ALEX HANNA: Oh Jake Tapper? Oh great. 

EMILY M. BENDER: Yeah yeah um uh but this is the thing I wanted to pull up of this one. Carbon. Um so uh this is a Guardian article. So the Guardian article says, "But when it comes to  offering concrete advice, he is lost for words. 'I'm not a policy guy,' he says. 'I'm just someone  who's suddenly become aware that there's a danger of something really bad happening.'" Which is x-risk. 

"'I wish I had a nice solution, like just stop burning carbon and you'll be okay, but I can't see a simple solution like that.'" So he's trying to get us to believe that the existential risk to the  human race from rogue AI is um sort of a bigger deal in this case, harder to solve than climate  change. And also if climate change were simple, like the policies required to move off of car--like it's not, right. 

ALEX HANNA: Yeah like like this is not a thing and it should be said that, like Hinton is a multi-millionaire. He sold his company to Google with Krizhevsky and Sutskever for 44 million.  

Hinton has been a huge investor in different large language model technology companies. He's been part of uh deals that have amounted to 150 million dollars. I mean the man has money in it. This talk is so cheap. I just and it's in that and then he gets to do a press tour on this. It just makes--it just pisses me the hell off. 

EMILY M. BENDER: It's infuriating. Yeah. All right, um, we've got like two minutes to do this one last one, which is silly. So: "Mind-reading machines are here: is it time to worry?" And so this is an experiment. I didn't show this one to you beforehand, but just very quickly, a bunch of neuroscientists had some people go into an MRI machine and listen to 16 hours of podcasts, and then also do some other things where they were, like, uh, describing what was on a video, I think, or otherwise where they knew what they were saying and they could trace brain activity. And then they had them do it without actually talking, and tried to use machine learning to go from the brain signals to what that person was saying. And it didn't work very well until they threw a language model on top of it to smooth it out, and then it was somewhat better.  

Um and the really interesting application here is for people who have lost their motor control and can't speak--like, this is a tech that would mean a whole lot to some people in a very specific situation. Um but the headline here is just ridiculous, um, because one of the things that does get pointed out in the article is you can't do this without the cooperation of the person, right. Like, an MRI is not just like, I'm going to aim the MRI at you from across the street.  

Um and so anyway it's a this is this is um you know--if it actually worked, this is an application  of language models that makes some sense. It's sort of like the speech recognition application--sorry automatic transcription application or machine translation application--where the biases in the language model are going to come out and they're going to cause problems and it's not perfect, but at least the language model isn't just making shit up. And I'm not enough of a neuroscientist to know if this is like a reasonable--like if there's something in the motor control area or whatever it is that would like be the signal that could get translated.  

ALEX HANNA: Yeah. 

EMILY M. BENDER: But like let's not talk about it as mind reading machines because that's not what this is.  

ALEX HANNA: Yeah, I do see this as, like, you know--and I think that's an important point, Emily--like, there is a way in which you could, if this worked enough for communication, you know, as a continuous kind of thing. But you know, fMRIs are not--you know, you have to consent to quite a bit, I mean it's a pretty engaged process, right. And so-- 

EMILY M. BENDER: Yeah. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: Um all right. So, we did it Alex. This other tab was just another view on the same article. We actually made it through. 

ALEX HANNA: That's incredible, just in time, that's amazing. Oh wow. All right thank you for navigating, thank you Emily for queuing all that up, and getting us on time--  

EMILY M. BENDER: Maybe next time we can dig into one thing and like take our time with it but there was just so much hell we had to--we had to go through it. 

ALEX HANNA: Totally. Uh, we're gonna read our outro because, um, yeah. But that's it for this week. Um, our theme song is by Toby Menon, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. If you like this show you can support us by donating to DAIR at dair-institute.org. That is D-A-I-R hyphen institute dot org. 

EMILY M. BENDER: You can find us and all our past episodes on PeerTube and coming soon, when we get the podcast out, wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream um that's twitch.tv/DAIR_Institute. Again  that's DAIR_Institute. I'm Emily Bender-- 

ALEX HANNA: --and I'm Alex Hanna. Stay out of AI hell y'all.



