Mystery AI Hype Theater 3000

Episode 37: Chatbots Aren't Nurses (feat. Michelle Mahon), July 22 2024

Emily M. Bender and Alex Hanna

We regret to report that companies are still trying to make generative AI that can 'transform' healthcare -- but without investing in the wellbeing of healthcare workers or other aspects of actual patient care. Registered nurse and nursing care advocate Michelle Mahon joins Emily and Alex to explain why generative AI falls far, far short of the work nurses do.

Michelle Mahon is the Director of Nursing Practice with National Nurses United, the largest union of registered nurses in the country. Michelle has over 25 years of experience as a registered nurse in various settings. In her role with NNU, Michelle works with nurses across the United States to protect the vital role that RNs play in health care as direct caregivers and patient advocates.

References:

NVIDIA's AI Bot Outperforms Nurses: Here's What It Means

Hippocratic AI's roster of 'genAI healthcare agents'

Related: Nuance's DAX Copilot

Fresh AI Hell:

"AI-powered health coach" will urge you to drink water with lemon

50% of 2024 Q2 VC investments went to "AI"

Thanks to AI, Google no longer claiming to be carbon-neutral

Click work "jobs" soliciting photos of babies through teens

Screening of film "written by AI" canceled after backlash

Putting the AI in IPA


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Alex Hanna: Welcome, everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.  

Emily M. Bender: Along the way, we learn to always read the footnotes and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. 

I'm Emily M. Bender, Professor of Linguistics at the University of Washington.  

Alex Hanna: And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. And this is episode 37, which we're recording on July 22nd of 2024.  

It's been a while since we checked in on the excessive hype over AI allegedly transforming medicine. But somehow, despite all evidence to the contrary, the AI boosters continue to push this as a technology that can somehow transform a system that would benefit from--uh, benefit more from greater investment in skilled humans and the actual nuts and bolts of patient care, or, you know, universal healthcare. 

Emily M. Bender: Something we find particularly insidious are the efforts to make LLMs part of your relationship with your healthcare team, so called AI agents that might replace the work of nurses. From variations on ChatGPT to Google's LaMDA, to a model from a company called Hippocratic AI. I wish I was making that name up. 

And while Hippocratic's agent is allegedly testing better than human nurses on tasks related to medications, right, maybe it's time for a trained professional to remind us why there's a lot more to nursing than just rote memorization of drug interactions or dosing.  

Alex Hanna: That trained professional with us today is Michelle Mahon. 

She is Director of Nursing Practice with National Nurses United, the largest union of registered nurses in the country. Michelle has over 25 years of experience as a registered nurse in various settings. In her role with NNU, Michelle works with nurses across the United States to protect the vital role that RNs play in healthcare as direct caregivers and patient advocates. 

Welcome, Michelle.  

Michelle Mahon: Hi, I'm so glad to be here.  

Alex Hanna: We're so excited having you here today.  

Emily M. Bender: So excited. We're both going to say it at the same time. All right, let's get into this. I have our first artifact, which is an article in Forbes. Um, the, uh, author of this article, not a journalist, is someone named Robert Pearl MD. Um, listed as "a contributor covering the tech business and culture of U.S. healthcare," so he says, and the title is "NVIDIA's AI bot outperforms nurses, study finds. Here's what it means." Anything before we dive into the text?  

Michelle Mahon: Just oh my gosh, what kind of nonsense is this? I can't wait to see, um, the claims, uh, that he makes about, um, how replaceable nurses are. 

Something they've been trying to do for 30 years.  

Alex Hanna: Yeah, and I mean this in, in, just for folks who are listening on the podcast, uh, this is a very terrible image of AI. It's got the CEO of NVIDIA standing in front of, uh, kind of a slew of humanoid robots, um, at a keynote address at GTC, which is NVIDIA's big, uh, big developer conference. 

Emily M. Bender: So this article starts with some sort of boring stuff about the author's, um, father's experience with the original iPhone. Um, so we can just summarize that where basically the dad takes it and puts it in his trunk for emergencies and like basically didn't realize what that was the harbinger of. Um, so then it reads, "Generative AI is on a similar trajectory. In my new book, 'ChatGPT MD,' I envisioned AI tools becoming hubs of medical expertise for doctors and patients within the next five years."  

Um, so I guess we need a date on that. Um, I forgot that that book came up so early in the article. Um, do we have a date on this book?  

Alex Hanna: Yeah, April 9th, 2024. So only, how many months ago? 

Three months ago?  

Emily M. Bender: Yeah. Um, and Alex, you looked at like who would be publishing this kind of garbage, and you discovered what about the publisher of this book?  

Alex Hanna: Well, not surprisingly, like many things on Amazon, there's, it's a self published book. The publisher is, surprise, Robert Pearl, MD. Um, and in the, uh, very small text at the top of the book, it doesn't say this in the byline on the Amazon metadata, but it says "Robert Pearl, MD, and ChatGPT." 

Uh, so, you know, doesn't really quite admit it until you actually dig into it. Um, and even the copy on the Amazon site is trash. So, you know, look it up if you want to, if you want to do a hate read of it, let us know.  

Emily M. Bender: You can save us having to do that hate read. All right. So the reason I wanted the date there is the next--so he says, "This is going to become hubs of medical expertise for doctors and patients within the next five years." All right. Before I get to the thing that, that contextualizes that five years, uh, there's some important people missing from that statement, isn't there? Doctors and patients.  

Michelle Mahon: That's right. 

Wonder where they are.  

Emily M. Bender: Yeah. (laughter) Anybody who's ever experienced healthcare knows that there's a lot of really important care work being done by nurses and others, right? It's not, not everyone who's in there doing that care work is a nurse, um, but nurses are super important and not mentioned. So he says, "I feared my optimism about the technology was too ambitious, but by the time my book published last week--" By the time I self published my book last week. "--I realized my estimated timeline might have been too conservative." 

 (laughter) It's coming faster and faster.  

Alex Hanna: Yeah, and so he goes into this, and I mean, this is where it really gets nasty. Um, "In March, NVIDIA stunned the tech and healthcare industries with a flurry of a headline grabbing announcements at its 2024 GTC AI conference, the most striking being a collaboration between NVIDIA and Hippocratic AI to launch generative AI 'agents.'" 

Um, and then here's really where the hype meets the--the road? I don't know. I was trying to think of something else that started with H. "According to company-released data--" So according to the company's hype-ness themselves. "--the AI bots are 16 percent better than nurses at identifying a medication's impact on lab values, 24 percent more accurate at detecting toxic dosages of over the counter drugs, and 43 percent better at identifying condition-specific negative interactions from OTC meds. All of that at $9 an hour compared to the 39, uh, $39.05 median hourly pay for U.S. nurses. These AI nursebots are designed to make new diagnoses, manage chronic disease, and give patients a detailed but clear explanation of clinicians' advice."

Emily M. Bender: Oh no, no, there's so much there.  

Michelle Mahon: Just in one paragraph. (laughter)  

Emily M. Bender: Yeah. Yeah.  

Alex Hanna: Thoughts on that, Michelle?  

Michelle Mahon: Oh, my gosh. So many. Where do I begin? Um, I mean, the first thing that's coming to mind on this, uh, study, right, is, uh, how many of us have seen a, a drug commercial on TV where at the very end they speed it up and the special announcer needs to talk at, you know, a thousand miles per second, um, just to list all of the drug interactions.

They do make software for this too, right? I mean, it's just like, just because you can look something up faster, doesn't mean you understand it or know what it means or know which one to pull out of that litany as more likely than something else. Um, that comes from understanding the person that you're looking at and your, your experience as a, as a clinician, right? 

You know, we're, we can look at all of these things. And so it's just really, um, basically, uh, you know, a drug handbook, um, with a, an avatar face on it. Something like that is what came to my mind.  

Emily M. Bender: And it's a drug handbook that is done through this stochastic technology, right, where it's basically you put in a question, you, but the input is something like, here are the drugs this patient is taking, are there any interactions I should be afraid of? 

And that's framed so that you think you're asking a system a question, but in fact, you are just putting in text for it to continue. And so what comes out the other end isn't necessarily accurate. Right. So even, even if it were accurate, there's all the other stuff you were talking about, Michelle, about how this stuff really works in skilled nursing care. 

But what might be useful is a queryable drug handbook that's not going to randomly sometimes say the wrong thing. Right? That a person could use, that a nurse could use as part of their care.  

Michelle Mahon: Absolutely. And the other thing that, that is really, um, shocking to me is, is they, they directly make this comparison that the, you know, look, you can get this, um, chatbot that does these three things. 

And it can do them better than nurses, supposedly, um, but, and that's all you get for $9 an hour instead of $39 an hour. So it completely erases everything else that nurses do. Um, and, and that I think is, you know, really what we have seen is that all of this hype is so contingent upon, um, the, you know, the predication is that the work that nurses do is not valuable. And I think we see that right here in this direct comparison, um, from, from this. It's just like, hey, all nurses do is look up drug interactions, you should get this, you know, look, it's the same. And I see that here.  

Emily M. Bender: Yeah. And the, the, the fact that they are that this $9 an hour compared to the $39.05 median hourly pay thing is infuriating on so many levels, right? 

So the first line is, this is specifically about undercutting what it costs to pay someone to do this highly skilled care. I mean, $39.05 is low, right? Should be higher, I think, um, for the, the, the work that's being done, the training required, the, anyway.

Um, but so there's that level, but also the fact that they are giving this an hourly fee is weirdly anthropomorphizing, right? You don't pay, if you're using, um, a subscription service for looking up drug interactions. Let's, let's say there's some kind of software like that, that the, that the nurses and other healthcare providers can use. You never pay for that on an hourly basis. 

That makes no sense at all.  

It's not, it's not a person who is working hours.  

Alex Hanna: That's all. That's completely part of Hippocratic's, um, pitch though. I mean that I know we're going to look at their website in a second, but it's, that's the kind of idea, that they pitch these things as agents in particular to encapsulate everything that nurses are supposed to do, even though this seems to be like a glorified drug interaction search database. Even though it's not that, it has the potential to get things horribly wrong too. Um, and--  

Emily M. Bender: Also, oh, go ahead.  

Alex Hanna: No, no. And also, I mean, and also Wise Woman For Real points out in the chat, the, you know, the, the, where the data comes from is, it's a Fox Business article, but it's not even, they don't even link to Hippocratic's data. 

Um, and so it's all internal company data coming from someplace. I mean, we could probably find it if we dig into their decks a little more. Yeah. It's, it's a little weird, but it's, I don't think it's linked from the site. I mean, you can probably find it on their site if you dig around enough, but it's, we, you know, it's, it's still like their internal data that they're releasing about these narrow tasks.

And then, and so you're absolutely right, Emily, that this idea of doing like a, they're trying to do an apples to orange comparison of kind of a software service, but saying that it can supplant this kind of human hourly wage.  

Emily M. Bender: And then on top of that, so this is company released data, so no third party verification, um, and then it measures these three specific things, and then the last sentence of this paragraph, so the, the, the evaluation that is completely unreliable because we know, you know, nothing about how it was done, and sort of irrelevant because it's misframing what it is that nurses do, but it's about "medication's impact on lab values," uh, "toxic dosages of, of over the counter drugs," and "condition-specific negative interactions from OTC meds." 

So three specific tasks like that. But then the last sentence says, "These AI nurse bots--" And I hope to never hear that phrase again. Um, and I probably will not have that hope met. "These AI (mm) are designed to make new diagnoses, manage chronic disease, and give patients a detailed but clear explanation of clinicians' advice." 

So that evaluation actually does not substantiate their effectiveness at these other things at all.  

Michelle Mahon: What a major leap. I mean, come on, break it--now it's sudden, just can do these three things, maybe, right? Maybe. And maybe not, probably not because they didn't actually share their source. 

So, uh, then to say that now they can basically--this is also the AI nurse bot is replacing a doctor by 'making new diagnoses.' 'Managing chronic diseases' replaces so many different kinds of people in their highly skilled professions. Um, rehabilitation, um, experts, physical therapists, like that's a pretty broad claim.

And then also 'detailed, but clear explanation of advice.' Where is it getting this advice?  

Emily M. Bender: Yeah, and this sounds like, 'clear explanation of clinicians' advice' sounds exactly like, you input into ChatGPT or whatever LLM, 'rephrase the following in a way that's easier to understand,' and then out comes some text with, again, no guarantees that it means the same thing as the input.

Because that's-- (crosstalk)  

Michelle Mahon: --not even relevant to the person, you know, sometimes people don't know how to ask a question. And, uh, and that's really, I think, very important in healthcare. You know, I don't have the words because I'm not a skilled professional to, to use the correct word to ask for what I am. But when a, when a human is there, when a nurse is there, we, uh, understand the problem they're describing, even if they're not using the correct word or correct phrase and correct term. 

Um, because we are dynamic and flexible in our knowledge. So yeah, I think that's really, yeah.

Emily M. Bender: And you're relating to them person to person, right?  

Um, okay. So now we have some of this like rah, rah, the future is fast thing again. "These rapid developments suggest we are on the cusp of a technology revolution, one that could reach global ubiquity far faster than the iPhone. Here are three major implications for patients and medical practitioners." Okay. So implication one, "Gen AI and healthcare is coming faster than you can imagine." And this next bit is just some stupid mathiness. "The human brain can easily predict the rates of arithmetic and geometric growth. But even the most astute minds struggle to grasp the implications of continuous exponential growth. And that's what we're witnessing with generative AI."

No.  

Alex Hanna: Yeah, it's really, I mean, it's just, it's, there's nothing, I mean, there's very little of substance in this entire piece, but this is incredibly devoid of substance just because, you know, the claim he's basically making is, 'This is how exponents work. And wow, can't you believe it? It's going to exponent your mind.' Okay, okay, okay, my guy. Just, slow down here.  

Michelle Mahon: I love this Zen moment. "Imagine, for example, a lily pad."  

Emily M. Bender: (laughter) Yeah.  

And Wise Woman For Real is saying, "Duh, we learned about exponential during Corona." Like, yeah, we had a lot of experience with that, but there's some real hype buried in here. 

"So experts project that AI's computational progress will double roughly every year, if not faster." No links there, right? Who are these experts? Probably the Altmans of the world. "But even with conservative projections, ChatGPT and similar AI tools are poised to be 32 times more powerful in 5 years and over 1000 times more powerful in a decade. That's equivalent to your bicycle traveling as fast as a car and then shortly after, a rocket ship."  

Alex Hanna: (laughter) Well, he has a link here, and it's a link to himself writing an op ed for the Twin Cities Pioneer Press, so, you know, really fantastic linking practices by Robert Pearl, MD.  

Michelle Mahon: I wonder what would happen if he entered, um, like, uh, 'ego check' into ChatGPT. (laughter) It would come back with him as, uh, apparently the only source you need.

Emily M. Bender: Yeah. Okay. So his first implication is it's going super fast. Hype, hype, hype. Um, implication two, "Gen AI is vastly different than past AI models." Um, you want to have a go at this one, Alex?  

Alex Hanna: Yeah, yeah. He says, "When assessing the transformative potential of generative AI in healthcare, it's crucial not to let past failures, such as IBM Watson's, cloud our expectations. IBM set out ambitious goals for Watson, um, hoping it would revolutionize healthcare by assisting with diagnoses, treatment planning, and interpreting complex medical data for cancer patients. I was highly skeptical at the time, not because of the technology itself, but because Watson relied on data from electronic medical records, which lacked the accuracy needed for narrow AI to make reliable diagnoses and recommendations."

And I'm like, what is what is, uh, this and it looks like this is the link to the IBM Watson here is a piece from STAT News by Casey Ross and it says, "NVIDIA says generative AI will revolutionize health care. So did IBM with Dr. Watson." And it looks like it's, it's, it's behind a paywall but, uh, knowing Casey Ross is one of the better journalists at STAT and so I imagine this is kind of one of the cautionary tales in what Dr. Watson was. Um, if you're familiar with that thing, you know, hit us up.  

Emily M. Bender: Anyways, Watson, Watson was the, the, the demo was that they had it playing Jeopardy! and somehow that was going to qualify it to do, to be a useful tool in a healthcare context.  

Alex Hanna: Well, Watson is also, you know, the thing that famously beat, you know, Garry Kasparov in chess and that, you know, well then replace nursing. 

Right. Um.  

Emily M. Bender: I don't think that's, I don't think that was Watson. I think it was another IBM product, but I don't think it--  

Alex Hanna: Oh it was Deep Blue.  

Emily M. Bender: It was Deep Blue, yeah. Um, and the thing about Watson also is that it wasn't a large language model. It actually had, um, language processing, where it was parsing into semantic representations and then matching against the database. 

And you could imagine if you constrained the task enough, that could be very useful. That could make, for example, a more flexible front end to a system where you might query for drug interactions in the context of being a person providing medical care to somebody. Um, but--  

Alex Hanna: Yeah, well, let me, let's finish this and then I want to get Michelle's reaction because they say in--so they say, "In contrast, generative AI leverages a broader, more useful array of sources. It not only pulls from published, peer reviewed medical journals and textbooks, but also will be able to integrate real time information from global health databases, ongoing clinical trials, and medical conferences. It will soon incorporate continuous feedback loops from actual patient outcomes and clinician input. This extensive data integration will allow generative AI to continuously stay at the forefront of medical knowledge, making it fundamentally different from its predecessors." Oh, gosh. Yeah. What's your thoughts on that, Michelle? 

Michelle Mahon: I mean, this is just, I, I just can't, one thing that just drives me crazy, I think, is that there's so much effort. Like, why, why don't we just hire people? Why so much effort, um, and energy and money, um, put into all these things that are supposed to change healthcare forever, to change care forever.

Um, we know that nurses are the most trusted profession. Um, and I think people want them. And repeatedly we see like these failing investments of, um, you know, like this one, there's so many examples, even beyond AI as nurses. They're like, hey, this new alarm is going to make it so we don't have to have as many nurses. Hey, this new thing is going to make it so that everything is faster. 

I don't even want to begin with the catastrophe that the EH--electronic health records brought and how those continue to fail and dysfunction. And then we're building all this on. So the history of failure all really in the sake of just not having to, to have skilled humans there.  

Um, I just also keep thinking about, you know, last week, um, and, uh, for--in May for almost six weeks, the Ascension hospital system lost access to its electronic health record system, which integrates with all of this, where it's getting all the data, right for, um, a lot of different, um, patients, if they're using a chat um, type or answer--ambient type situation or other type of AI. Um, a lot of this data, we saw this downtime last week and then 140 hospitals down for six weeks. Um, that is really--to rely upon these kinds of things, um, exclusively. Uh, the people would have been in so much trouble if there weren't real skilled people there to figure things out. 

And so it's, I hear, you know, about, you know, this story is just harkens back to really decades, all my decades of nursing, uh, where they just come up with every tool they can to, uh, remove nurses from care, which people really need. We remember this is life and death stakes, you know, even if you're at home. 

Alex Hanna: Yeah. And I found that to be, I mean, the things here, I mean, it's, it's pure hype just because the kind of, I mean, I, I kind of love this point because he is really showing his hand here, you know, "The current generative AI--" He actually says in the next paragraph, "That said, generative AI will require a couple more generational upgrades before it can be widely used without direct clinician oversight." 

And it's this thing, you know, this thing, it's like, it's always gonna, it's, it's just around the corner. We're going to have it. Just wait. And he just kind of says, well, it's reading from conferences and, you know, that it has the latest knowledge as if that's the thing that's going to move the system from a glorified, you know, text predictor autocorrect to, you know, actually providing skilled nursing care. 

Um, it's a really wild claim to make.  

Emily M. Bender: It's really wild. And also this, this paragraph, um, "so it not only pulls from published," et cetera, really shows how little he understands this technology.  

Alex Hanna: Yeah.  

Emily M. Bender: So "pulls from published peer reviewed medical journals." Okay. Yes, there's probably published peer reviewed medical journals in the training data for this stuff. 

And maybe the NVIDIA thing sort of has a lot of that. Like maybe they've actually curated a particular training set, but the text synthesis machines just make paper mache out of their training data. So it doesn't matter. Like we saw this with Galactica back in the day, you put science, uh, scientific text in, you get scientific text looking artifacts out, but that's not science. 

Right. You put healthcare text in, you get text out that's that uses words from healthcare domain, but that doesn't mean it means anything. And then beyond that, "able to integrate real time information from global health databases, ongoing clinical trials, and medical conferences." How? Right? This is imagined technology. 

This is what he wants it to be. He's not basing this on anything real.  

Alex Hanna: Yeah.  

Michelle Mahon: Well, also, if you think about like medical conferences, like this is where ideas are discussed. They're not necessarily the standard of care. They're not necessarily what, you know, again, like, it's not really even necessarily relevant. 

And so you can think about all of the different ways that kind of information can be harmful. Right, I'm just thinking people talk about all kinds of things at conferences, um, and even if they call them medical, you know, there's lots of ideas and that they're not all great.  

Emily M. Bender: Right. And none of this is predicated on carefully curated data or constructed databases. 

It's just like we're going to throw in all the text. And so all that stuff is going to be there.  

Um, all right. And so here's something that probably is true. "Once an AI clinician chatbot is available--" No matter how poorly it works, that's me editorializing. "--multiple other companies will quickly follow." Undoubtedly, because that's where all the VC money is, unfortunately.

Alex Hanna: Yeah.  

Emily M. Bender: Okay, and that brings us to his third point. "Gen AI in healthcare will be ubiquitous, parentheses, hospital, office, home. Just as my father never imagined his iPhone stored in the trunk would evolve into an essential tool for navigating life, many Americans struggle to envision the transformative impact that generative AI will have on healthcare. The concept of accessing medical advice and expertise continuously, affordably, reliably, and conveniently around the clock represents such a departure from current healthcare models that it's easy for our minds to dismiss it as far fetched. Yet it's becoming increasingly clear that these capabilities are not just possible, but likely, and even imminent." 

 (laughter) No. 

Michelle Mahon: Not imminent. 

Emily M. Bender: Not possible, not likely, not imminent. And also, it's not that we have a hard time imagining what a better health care system would look like, right? And we had it in the intro, universal health care, paying nurses fairly, making sure things are sufficiently staffed, um, electronic health record systems that are reliable, right? 

These, it's easy to imagine these things. Um, he just wants to somehow say, my imagined future is being poo pooed because everybody else is too uncreative? I don't know. 

Michelle Mahon: Yeah, it's, well, maybe, uh, maybe he should be taking it as, again, I guess this is again by the person who keeps publishing, um, and relying upon himself as a source over and over again, um, is, uh, we, you know, should all bend his way, I guess. 

Alex Hanna: Yeah, I mean, that's the kind of thing and it's, I mean, the next, the next paragraph shows that he's in a bit of a filter bubble because he says, "Daily I receive feedback from both clinicians and patients who have interacted with generative AI tools." Who? Who? Who, motherfucker? (laughter) Sorry, I just got upset.  

"The majority report--" The majority, so there are some detractors there. "The majority report that the responses, particularly when prompted effectively--" Make sure you prompt effectively. "--align closely with clinician recommendations. This is a testament to the evolving accuracy and reliability of generative AI in healthcare settings. And it promises revolutionary medical care delivery in the near future." 

And then he starts with like, you know, my dad, what a dummy, underestimated the iPhone, but if you don't want to be a dummy, you gotta, you gotta prompt effectively.  

Emily M. Bender: That last paragraph is just, it's just hype, right? He's not, he's not even trying. So I think we need to look at this Hippocratic AI thing, um, which is, so this is the thing from NVIDIA. 

Um, and this is so appalling. So the, the company's name is Hippocratic AI, and then they've got this 'do no harm' thing.  

Michelle Mahon: Okay.  

Emily M. Bender: Uh, "safety focused generative AI for healthcare." Safety focused, really? And then they have these little cards for different in quotes, "nurses," each of which have an image that's probably synthetic. 

Um, and these are very racialized people. Um, most of them look femme, but not all, and they have personas. So the first one here, um, it's a, an image of a woman in scrubs. I'd say she looks Southeast Asian. Um, name given as Linda. Title, "CHF discharge," and then "rating by nurses: 83 percent." 83 percent of what? "Style: engaging. Estimated costs: less than $9 an hour, asterisk," and then there's a little byline here. "Linda is a gen AI healthcare agent who follows up with a discharge patient after being admitted for congestive heart failure."

Michelle Mahon: What could go wrong?  

Alex Hanna: Yeah.  

Michelle Mahon: (laughter) Also, who is the, who are the nurses rating these, the, the nurse, the other gen AI nurses? Or, um, is it, oh maybe it's Robert Pearl.

Emily M. Bender: Who claims to be a nurse too, right? 

And this is rating for what? And 80 percent or 83 percent of what? Like.  

Alex Hanna: Yeah, they've got, so if you dig around, I mean, so I've, I've spent a lot of time on this site, unfortunately, in the process of working on our book, Emily. And there's and I'm just trying to go into these and go into like what their data are, so they have kind of a nursing advisory board on here which you know I imagine they take a big chunk of that VC money and they find people who are willing to do some kind of evaluation on this. Um and then I think they've done, you know they, they're subjective evaluations, they've, they say in the paper, they've recruited "over 1100 licensed registered nurses and over 130 US licensed physicians. And after verifying their licenses, each participant had a series of conversations with our system."  

So first off, I want to say for y'all nurses and clinicians who are evaluating this stuff, please stop scabbing. Don't evaluate these things for a job that's, you know, kind of primed to replace, replace people or do a, do a shitty job of your profession. 

So I just want to say that first off, but that's kind of where they are, but they don't, then they, they kind of give a breakdown of who these people are, but this number, kind of the, the top line numbers are, you know, ridiculous. And if you scroll through all the personas on here, there's one thing that's the same across all of them. Which is the estimated cost of less than $9 an hour. So, if that doesn't give away the game, I don't know what else does.  

Emily M. Bender: And then the asterisk for that leads to "lowest price available after testing is complete." I don't, don't even know what that means.  

Alex Hanna: I don't know what that means, yeah.  

Michelle Mahon: What is it? What kind of testing is completed? 

It does, however, say, um, that it's not--that their gen AI is not safe enough for diagnosis. So, I mean, I feel like they were like nursing's fair game. Uh, we can, um, we, you know, that, that--because, I think it fundamentally, um, if you look at what they're claiming all these nurses can do, right. This one is, you know, chronic kidney disease. Um, a couple of them are. Um, pre op colonoscopies. So what are they just doing reading like pre, you know, pre surgery, uh, pre surgical prep, you know, how to drink your bowel prep, um, a number of things. Uh, it's just like, again, fundamentally misrepresenting. The thing, though, that really scares me about this is, um, for people who might be using them at home who really need the help. 

Like, I really need some help. And, you know, when you try to call a call center, or you use any other kind of chat bot, you end up in this wormhole where you're not getting an answer. You know, um, how many times have we all been in that position? Now imagine that you are really nervous about your procedure tomorrow or maybe you've started bleeding and you don't know if you should go to the emergency department and then maybe your vision's not great or you can't see this chat bot person, um, and this chat bot image. Sorry. Um, and they're, they're very realistic looking. And I think that is, is so dangerous in and of itself to, um, have somebody might believe that this is a real human. And I, that to me is just like, so there's so many layers of appalling, but that one to me is one of the most dangerous and appalling, um, elements of it, you know, and then people will go here to these, this chat nurse and, and think that they're getting real help. 

And they might die.  

Alex Hanna: There is a, they don't provide diagnosis, I know on Hippocratic, but there are tools that do such a thing. So for instance, uh, one we've talked about this-- 

Emily M. Bender: Yeah, what was that one? 

Alex Hanna: It's Glass Health yeah. So we've actually talked about them and they actually give differential diagnoses and clinical plans on, and that's what they advertise. 

And they, they haven't given us, they haven't given, gotten as much, uh, uh, venture capital as Hippocratic, but they still have gotten you know, a decent amount.  

Emily M. Bender: Yeah. And that one's all wrapped up, at least when we looked at it last time as like a, a training thing for, I think was it EMTs and maybe med students or something. 

And the problem is that if you put in a search query, like I need, I need help with this medical question or how do I get a diagnosis? You could very well land on one of these sites. Like maybe not this one. I don't think I can click on Jasmine here. Oh, "test our AI." Right. I guess I could end up clicking through and, um, you know, interacting with this thing. 

And as you say, somebody who is experiencing some kind of a medical crisis is at their least likely to be able to critically evaluate information and critically evaluate what they're interacting with. And so making these things accessible at all is really bad practice. I want to-- (crosstalk)  Oh, go ahead. 

Michelle Mahon: Especially if they'd be coming through a hospital website primarily. Right. You know, so the idea is that the, the healthcare facility can purchase this Hippocratic AI and put it out, um, on their platform. And so not only that, there's this extra layer of supposed legitimacy where you think that it will be reliable because of who's putting it forward. And fundamentally, this is really, you know, when we think about these various situations, we've already seen examples, we know AI isn't accurate. We know that if you don't ask the right question, you're not going to get the right answer. And even if you get a correct answer, it may not be the correct answer for you.

Emily M. Bender: Yeah. And the way these things work, there's no, like, you might go back the next day and think you're talking to the same Jasmine, say, right? But it's, there's no relationship there. There's no person on the other end who has an understanding of who you are, who's tuned into how you're phrasing these questions as a non medical expert and so on.  

Um, so in light of all that, it's particularly galling that they've got this thing in the upper right hand corner here, "help make AI safe, test our AI," right? So just. The mere fact of putting this out in the world is unsafe regardless, that's what we're coming to, but they want people to work for them for free to make it safer and like does this this like whole, you know, safety focus probably is a selling point for them.  

Alex Hanna: Yeah. I mean, I think the way, the way that they're doing it looks like they're partnering with organizations. 

So on the site they have a subhead and it says, um, "These partners are helping us to ensure AI is safe." And then they've got a bunch of different, um, uh, different organizations. So the ones I recognize are like Ohio Health, um, You know, Cincinnati Children's, um, a few other ones, WellSpan, um, and they have a few others that they had list, that they've been testing with. 

So, I mean, I think there's definitely a big de--and they have typically, you know, who the investors are. What their, their VC rap sheet is. Um, so it includes, you know, Andreessen Horowitz, um, you know, General Catalyst, some of these big VCs, um, and, you know, really, you know, wanting to get in.  

I mean and that's the sell, right? 

You help us test, you know, once we say this is safe enough, you know, you're going to get some, you know, some excellent deal on this in your, you know, in your hospital. And it also pays to say that Robert Pearl in his bio, uh, served as CEO of the Permanente Medical Group, um, and as president of the Mid-Atlantic Permanente Medical Group, so, you know, came up as a hospital administrator. 

So hospital administrators are really chomping at the bit for this stuff.  

Emily M. Bender: Yeah, and I'm looking at the end of this page here on HippocraticAI.com. It says, "At launch, as an added layer of safety, we've trained our model to engage a human nurse when appropriate." Which is like actually, it should be people doing the nursing care all the time, right? 

Michelle Mahon: "When appropriate," like you just said, all the time, right?  

Emily M. Bender: Yeah, exactly. But this, this looks like it's being set up to, um, you know, make the nurses' jobs shittier. Basically, you end up having to clean up after the, the messes that the automated system is making.  

Alex Hanna: Yeah. Um, yeah. And that's a thing that, uh, someone said in the chat. 

I think they said something of the, of the nature, they're like, well, what's eventually what they want to do is effectively turn nurses into annotators or people who just kind of label the data for these things rather than actually provide any frontline care. 

Emily M. Bender: Which is exactly why people go into nursing. 

Right? 

Alex Hanna: Yeah. Don't you want to be an annotator? 

Michelle Mahon: I have a lot of annotations for this bullshit. (laughter) They're just pouring into my mind right now. And yeah, it's disgusting, frankly. Um, you know, and then, then they will say like, well, we don't even actually need nurses for that. Um, and because that's the ultimate goal, right? We don't need nurses, is what they're saying here. And, um, you know, they will say, well, other people who are nurse adjacent, um, can do the annotation, um, ultimately. Because what, when, what they often do is whenever we are, um, saying that the computer system or the different AI or the forecast is wrong, they, uh, just find another pivot and saying, well, actually, we don't need nurses to do that, we'll find somebody else, uh, cause they don't like what we have to say about it.  

Emily M. Bender: Yeah. So before we started recording, Michelle, you were pointing out that "nurse" is actually a controlled title. That to be a nurse, you have to have certain certifications. And I'm looking at this website. So the Forbes article talked about nurse bots, which, uh, was alarming. Here they seem to be being careful about it in a very slimy way. So, "Choose a role to get started." Um, "genAI healthcare agent" is what they're calling them, but then, each of the, um, specialties. So we have, "CHF discharge," we have "CKD chronic care," "pre op colonoscopy." These sound like maybe roles within a hospital that specific nurses will take on. 

And so they're sort of like doing everything but saying this agent is a nurse.  

Michelle Mahon: It's it seems like they are. Um, by the way, every one of those roles that they're advertising one nurse can do, um, in a hospital if they have that, you know, the skills for it. Um, yeah, but they've seemed to be very clearly evading. 

Now the press releases that came out that where we saw, you know, you see this AI nurse for less than $9 an hour. Um, that's everywhere. So they don't even--also what kind of nurse? People don't, maybe don't realize there's registered nurses and licensed practical nurses, not that we want to get into that, but again, it just shows one more layer of complete disregard for the rules and regulations that are in place to protect people from appar-- you know, from people who practice without the skills, um, and, and authority. And, um, it looks like we might need them in relation to technology that attempts to mimic practice without skills or authority.  

Alex Hanna: Yeah, a hundred percent. Yeah. There's one, one thing I, and I want to talk, hit on this a little bit, cause I know that NNU is kind of, hit on this a little bit, but in one of their roles is VBC at risk and they have two of the nurses, um, the quote unquote nurse agents under remote patient monitoring. And I know that's already a thing that's, you know, a reality in many many clinician offices where, you know, you, you aren't, you don't have an AI agent out doing it, but you have a very low paid clinician, you know, somewhere else who's doing this remote patient monitoring and it's infringing on, you know, the actual monitoring that in person clinicians are providing in clinical settings.  

Michelle Mahon: Yeah, it's really, um, it really is a, you know, a disturbing trend and, and, and to, to really represent that this is what monitoring is. And we see that with these things like, you know, oh, we can do documentation. Remember the documentation and monitoring, they actually are representative of a thought process. 

They're, they're representative of something much larger than what you're putting on a piece of paper or what you're actually responding to. Um, we already see all kinds of problems with remote cardiac monitoring where they're like, well, we will, it's a fail safe, you know, um, the, uh, AI or the technology, um, um, that evaluates the data about people's heart rhythm, uh, whether it is stable or not, will alert. You know, so now they're just putting a face behind this? Or is this person just there, um, if the ambient monitoring technology detects a dangerous maneuver by a, by a human, um, will Mira, I think her name is, pop up and say, um, get back in bed, Mrs. Jones, you know, what does even the status remote patient monitoring mean? Uh, it's just, it can be, it's, it's, it's kind of nothing.  

Alex Hanna: Yeah.  

Emily M. Bender: Yeah. Yeah. So I think we're back to our standby questions of for all automation, you say, what's the input? What's the output? Right. And then could you possibly get to useful output from that input and what goes wrong when the output is, is incorrect or, um, otherwise sort of over like, you know, you've, you've got false negatives, you've got false positives. 

How does that impact the patient in this scenario? How does that impact the actual nurses in this scenario? And you know, when it's sort of wrapped up in this, um, friendly looking synthetic image, hopefully badged AI, right? So we know that's not a real person. Um, all of that is hidden and it's sort of suggesting way more functionality than could possibly exist. 

Michelle Mahon: Absolutely.  

Emily M. Bender: Yeah. And I just want to, um, agree with Abstract Tesseract here in the, in the chat. They say, "I really appreciate this point that documentation is communication, not just a statistically, statistically likely pile o' text." And the functions where the, um, it involves text going from one party to another and back, those are the ones that these synthetic text extruding machines are sort of most primed to, um, take over except do poorly. 

Um, and I think it's, it's really, really valuable to remind ourselves that when we are writing texts, we are not just emitting sequences of characters, but we're thinking about something we're deciding what needs to be communicated and so on. Super important. I have, um, again, someone in the chat, the link to the paper-- 

Alex Hanna: Yeah.  

Emily M. Bender: --that these guys point to.  

Alex Hanna: That's where I was reading some of the sample, the samples from where they, um, were uh, trying to evaluate some of these things. It's we're, we're coming up a little bit on time, but if you were to get it, it's, it's, of course, it's one of these long, um, arXiv-- I mean, it's not as long as as--  

Emily M. Bender: Not as long as some, yeah.  

Alex Hanna: Well, not as long as crap that the, you know, Deep Mind puts out or, or whatever, but it's, it's--  

Emily M. Bender: 46 pages is pretty long.  

Alex Hanna: It's still quite long. Um, but it's, you know, it's on arXiv, it's, it's not peer reviewed. Um, you know, it's, also this footnote is hilarious. Footnote two, "Safety is our north star. We name our system after Polaris, a star in the north, the northern circumpolar constellation of Ursa Minor, currently designated as the north star." Thanks. 

Michelle Mahon: Oh gosh.  

Alex Hanna: That's so hokey.  

Michelle Mahon: There's no end to, like, the so called creative bullshit that, that, that can, that gets ascribed to all of their, uh, schemes, uh, to replace people in healthcare. It's, it never ceases to amaze me.  

Emily M. Bender: Yeah.  

Alex Hanna: It's really something.  

Emily M. Bender: Yeah. No, and, and it's really great to talk with you, Michelle, and, and to hear, you know, that you are resisting this already. Because one of the, one of the dynamics that I sometimes see is that the tech people are in there saying, our tech is doing all this. It's it's fancy. It's shiny. It's new. It's sophisticated. You can't possibly understand. And it can make it hard for people who actually have the expertise to say, that's not what we're doing, to really stand solidly in that expertise. And so it's great to hear you doing that.  

Alex Hanna: Yeah, absolutely.  

Michelle Mahon: If you ever watch the Muppets, uh, and you remember Chef, and he's there, he's chopping and singing, and he's doing all the things that somebody perceives the chef might do, but he's not actually cooking, um, and he's just like, uh, doing, uh, some sort of artistic interpretation of it for fun. 

And sometimes that's what these technologies feel like. It's who brought this here, uh, it's just what they're imagining we do, and not really what we do. Where's the substance? So, I've been using the, um, Muppet Chef as an analogy for what this AI is relative to what we're actually doing as nurses taking care of patients. 

Emily M. Bender: That is so wonderful.  

Alex Hanna: I love that.  

Emily M. Bender: So one thing that we do on the, on each episode is when we transition over to Fresh AI Hell, which is where we're going, um, I give Alex an improv prompt. Um, and Alex, did you want musical or non musical this time?  

Alex Hanna: Well, let's do non musical. I'm not, I'm not having a musical day. 

Emily M. Bender: Okay. So, so you are the chef from the Muppet Show. Doing the chopping, and, uh, you cut your thumb, and you try accessing one of these nurses to figure out what to do about it.  

Alex Hanna: Hold on, I need to look up what the Muppet chef said. It's the Swedish chef.  

Emily M. Bender: 'The Swedish chef, yeah.  

Alex Hanna: (impersonating) Erka derka. Erka derka. Erka derka! Derka. 

Herpocratic! Erk a derk a! It says I have congenitive heart failure. What the what the salmon? What the doot do doo doo.  

 (normal voice) I actually just bought a huge meat cleaver today, so it's actually very appropriate. Hopefully I don't cut my finger off. Anyways, that's what I got.  

Emily M. Bender: All right. Thank you. And thank you for, uh, for giving us the prompt there, Michelle. 

Um, all right, that brings us over to Fresh AI Hell. Um, I have a few artifacts for us, starting with a healthcare relevant one. Um, this is an article in the Atlanta Journal-Constitution. Um, sticker is "pulse," headline is "Open AI teams with Arianna Huffington to create AI powered health coach." Subhead, "Thrive AI Health aims to revolutionize chronic disease management through hyper personalization." And this is by journalist Avery Newmark from July 16th of this year. Um, and as I understand it, this is basically, um, Sam Altman and, uh, Arianna Huffington teaming up to create something that is going to nag people, um, about various health behaviors to try to prevent and manage chronic disease. 

 (laughter)  

Alex Hanna: It sounds like they really--yeah, they, they published this really long op ed in Time. One, one of our friends in the pod, uh, Yacine, um, uh, I don't really know how to pronounce his surname. Jernite, Jernite, um, who's at Hugging Face, sent this to us with just this long, uh, sprawling, you know, hype piece in Time. 

Emily M. Bender: Yeah, and of course, you know, A, it's not going to work. B, it is, I think it's in the Time piece where it's all very like blame the patient, um, type stuff. And then on top of that, this is built on enormous amounts of surveillance, right? Hyper personalization. Um, so the AI health coach will be trained on users' biometric data, lab results, and personal preferences to offer tailored recommendations across these five key areas. For example, it might remind a user to go to bed early before a morning flight, or to swap their third afternoon soda with water and lemon, demonstrating its ability to integrate health and calendar data seamlessly.  

Alex Hanna: (laughter) I really, I really need that. 

 (laughter)  

Emily M. Bender: Right. If, if I, if I don't have this AI health coach watching out for me, I will never figure out that I should go to bed early before my early flight.  

Michelle Mahon: Also so tone deaf, like who the hell can like actually take time out of--you know, it makes so many presumptions that people have the privilege to do all these things. 

And it doesn't even start out, when I read this one before, like really rich people can get their own personal health coach and pay lots of money. Um, but now your life's nothing like that person, but here's this other artificial one we can give to you. 

Alex Hanna: Yeah. Yeah. Our producer, Christie Taylor, and in our chat is saying water and lemon is such a rich people tell. 

Yeah. It's giving, it's giving ' how much, how much money can a banana cost? $10?'  

Emily M. Bender: Yeah, absolutely. Okay. This is Fresh AI Hell. We move quickly. Um, this is from Sherwood, um, by the journalist, uh, Rani Molla. And the headline is, "Watch AI eat the VC world in one chart." A subhead, "Nearly half of US venture capital investment went to AI companies last quarter." 

And this is from July 17th. And. I love that this is documented. So we have a graph here, um, where the X, sorry, the Y axis actually goes from zero to 100 percent, appreciate that. Um, Uh, X axis is years from 2017 to 2024. And then we have for each quarter, um, basically it's breaking down how much of the VC investment went to AI slash ML, so ML is machine learning, in purple versus everything else in yellow, and alarmingly, we are just under the 50 percent mark for Q1 of-- or Q2 of 2024. I can't quite read that, probably Q2.  

Alex Hanna: I think we're in Q2. Yeah, this is just after Q2.  

Emily M. Bender: Yeah.  

Alex Hanna: I appreciate it. This is a graph that Sherwood, uh, created. I know, uh, Rani, um, if I'm, I don't think I'm pronouncing her name correctly, I think used to be with Vox. Uh, but as basically, if you scroll down, shows that VC investment as a whole has kind of gone down, I think in the next chart. 

Um, so it kind of peaked right before, uh, right in, looks like that's like Q3, 2021--yeah. Q4, 2021. And then it's gone down, but even then these, I mean, half of the, I think it's that, that one, yeah, so $28.5 billion, which looks like it's the highest of 2024 and half of that being, um, in AI nonsense.  

Emily M. Bender: Yeah, actually, no, that's the, it's, uh, like $59 billion. 

So $26.2 billion in AI, this is a very dynamic chart. And this like explains so much because, you know, it seems like everybody's excited about AI and it seems like there must've been some big breakthrough in the technology, but there's not. What there is, is a lot of money coming into it from VC and that's why we're seeing it everywhere. 

Alex Hanna: I'm glad that someone made this graph because this is, this comes from PitchBook data and I was trying to, you know, say as much at some point, but shout out to Sherwood for making the graph.  

Emily M. Bender: Yeah, this is a good one. We will link to it, of course, in the show notes. Um, all right. Uh, this is sad, but, um, props for honesty. 

So we know we are, this is an article in Bloomberg from July 8th of 2024, uh, journalist is Akshat Rathi, um, headline, "Google is no longer claiming to be carbon neutral." Um, so Emma Strubell and Sasha Luccioni and others were already on this beat in 2018, 2019, talking about the energy costs of doing this stuff. 

And at that point you have Google and Microsoft, you know, making these big pledges of not just carbon neutral, but carbon zero. They were going to have cleaned up all of the carbon they've put into the atmosphere by I think 2030. And, um, now they are, uh, backing off of those claims.  

Alex Hanna: Well, Microsoft said it was going to be carbon negative and then, you know, and so in both of their sustainability reports, they had to hang their heads in shame and say, well, we're not going to be carbon neutral. Actually, AI takes a lot of carbon and we're, you know, we're, uh, you know, we're eventually going to get there.  

Emily M. Bender: Yeah. Sasha Luccioni was on the Tech Won't Save Us podcast recently, and they were talking about, together with Paris Marx talking about, um, how. Microsoft advertised this moonshot of, of carbon negative. And apparently the, the CEO is now saying, yeah, well, the moon is five times further away than we thought it was.  

Alex Hanna: (laughter) Incredible.  

Emily M. Bender: Okay. One more drastic thing before we get to the fun stuff. Yeah. Alex, you want to narrate this one for us?  

Alex Hanna: So this is from Bluesky by Erin, quote, "Context," Fogg, "What a great new tech economy this is we got going here." It's an image of, um, two children, which--  

Emily M. Bender: Actually there's three uh-- 

Alex Hanna: There's there's three images. Yeah. The first two are babies and they, I don't know if it's AI generated or not. It says, um, "Show us your baby slash child. Help to teach AI by taking five photos of your baby slash child, $2 per job." 

Um, and the second one, "Record a video of your child crying. Your child has to be between seven to 12 or 19 to 24 months old. $1 per job." So this is some awful annotation, uh, exploitation. And then the last one, "Take nine pictures of your teenager's face. Thirteen--"  

Emily M. Bender: Not your teenager, a teenager's face.  

Alex Hanna: Oh! Okay. So not even, not even your teenager, just you could go to a high school and be a creep. Oh Lord.  

Emily M. Bender: And this one's $3 per job. And like, so this is, I don't, I don't know who Erin "Context" Fogg is and I don't recognize this platform. Um, but like, I don't have reason to believe that this is fake. Like it seems totally plausible that someone is out there putting these click work jobs up to get people to capture images of minors. 

Alex Hanna: Ooh, just real fresh hell straight out.  

Emily M. Bender: Yeah, right. And like, I don't even want to get into thinking about why they want videos of babies crying. Like. No.  

Alex Hanna: Yeah, probably, yeah, just generative AI because they can't, people are not putting a lot of this stuff on the web that they're just stealing. I don't know. 

Emily M. Bender: Yeah, also weirdly specific age ranges.  

Anyway, I have two palate cleansers for us, which is good after this one.  

Michelle Mahon: Need it.  

Emily M. Bender: Need it. Yeah. So BBC News from June 19th, 2024. James W. Kelly and PA Media report, headline, "London cinema drops AI written film after backlash." Um, so, uh, this was, there was going to be a private screening of a film which was entirely written using AI. 

And that got, the screening got canceled. 'cause people were like, why would we want that at all? So, um, let's see. Uh, I, I just appreciate this accountability and responsiveness to, you know, people being loud and actually having an effect.  

And then what's this last one? Oh yeah.  

Alex Hanna: Oh yeah. So this is another, this is a tweet by-- 

Emily M. Bender: Good luck.  

Alex Hanna: --somebody that has a absolutely wild username on Twitter. I'm not going to try to read it. "Drove past a billboard that said, 'putting the AI in IPA.' And if I didn't find this picture, this picture of it online, I would have thought I entered psychosis."  

And it's, um, this purple billboard where there's kind of like a desktop, uh, or a laptop computer and the other half is a cask that has a spigot on it, pouring. And it's an advertisement for Dell. 

Um, it reminds me of that one thing we addressed a few months ago on AI Hell, where it was the thing about the person who's an engineer that wanted them to put AI in the piston of an engine. Um, so it's got, got the same kind of energy. Let's just, you know, wave this AI magic dust over it. And if you, you know, if you drive down the 101 in San Francisco, you see so many of these ads. 

I think I counted them once. It was something like 15 of the 40 billboards were you know, kind of of this ilk, just absolutely maddening.  

Yeah, but Michelle, I thought you had some ideas about what the, what the AI might be doing in IPA, right?  

Michelle Mahon: Right. I think it's just increasing the bitterness scale on it. 

Um, making it just a little less palatable.  

Emily M. Bender: Yeah. And Abstract Tesseract in the chat says, "Hoppy with hints of silicone."  

Michelle Mahon: Yes.  

Alex Hanna: Oh Lord.  

Emily M. Bender: All right. Well, we are at time. That's it for this week. Michelle Mahon is a registered nurse and director of nursing practice with National Nurses United. Thank you so much for joining us today, Michelle. 

Michelle Mahon: Thank you.  

Alex Hanna: It was such a pleasure. Our theme song was by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify and by donating to DAIR at dair-institute.org. That's D A I R hyphen institute dot O R G.  

Emily M. Bender: Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again, that's D A I R underscore institute. 

I'm Emily M. Bender.  

Alex Hanna: And I'm Alex herk a derk Hanna. Stay out of AI hell, y'all.