Mystery AI Hype Theater 3000

Mystery AI Hype Theater 3000, Episode 40: Elders Need Care, Not 'AI' Surveillance (feat. Clara Berridge), August 19, 2024

Emily M. Bender and Alex Hanna Episode 40

Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship," to "coaching" like medication reminders and other encouragements toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as populations grow older on average globally, technology such as chatbots is often used to sidestep real solutions to providing meaningful care, while also playing on ageist and ableist tropes.

Dr. Clara Berridge is an associate professor at the University of Washington’s School of Social Work. Her research focuses explicitly on the policy and ethical implications of digital technology in elder care, and considers things like privacy and surveillance, power, and decision-making about technology use.

References:

Care.Coach's 'Avatar' chat program*

For Older People Who Are Lonely, Is the Solution a Robot Friend?

Care Providers’ Perspectives on the Design of Assistive Persuasive Behaviors for Socially Assistive Robots

Socio-Digital Vulnerability

*Care.Coach's 'Fara' and 'Auger' products, also discussed in this episode, are no longer listed on their site.

Fresh AI Hell:

Apple Intelligence hidden prompts include the command "don't hallucinate"

The US wants to use facial recognition to identify migrant children as they age

Family poisoned after following fake mushroom book

It is a beautiful evening in the neighborhood, and you are a horrible Waymo robotaxi

Dynamic pricing + surveillance hell at the grocery store

Chinese social media's newest trend: imitating AI-generated videos


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

 

Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.  

Emily M. Bender: Along the way, we learn to always read the footnotes. And each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. 

I'm Emily M. Bender, Professor of Linguistics at the University of Washington.  

Alex Hanna: And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is episode 40, which we're recording on August 19th of 2024.  

One of the biggest places where AI is hyped these days is elder care. How often have we heard the refrain that technology will ensure we can continue to care for the elderly in countries with aging populations, that mobility assisting robots can supplement the healthcare workforce, or that companion robots could help older people avoid loneliness and feelings of neglect? 

Emily M. Bender: Or how about this? What about using AI to monitor seniors in their homes, or nudge them into supposedly healthier behaviors? Can a mathy math replace the work of human care providers? Especially when lack of support for those human care providers means that such care can be out of reach, expensive, or outright dangerous. 

Unfortunately, hype won't take care of us or our family members as much as we want it to. And our guest today is someone who studies the actual needs of the elderly, as opposed to those imagined by Silicon Valley tech bros. Dr. Clara Berridge is an associate professor at the University of Washington School of Social Work. 

Her research focuses explicitly on the policy and ethical implications of digital technology in elder care, and considers things like privacy and surveillance, power, and decision making about technology use. Welcome, Clara.  

Clara Berridge: Thank you. I'm delighted to be here. And I am also intimidated by you two who do such an amazing job with this podcast and articulating issues and being funny to boot. 

Emily M. Bender: Well, thank you so much for being here. And you know, the success of the podcast really rests in large part on our guests who bring so much expertise. So we are grateful that you're here.  

Alex Hanna: Yes, really glad you're here. Thank you for bringing everything that you study and you do to the pod.  

Emily M. Bender: Yeah. And greetings from Japan, um, where I am in an absolutely beautiful location. 

And that explains the strange timing today. And also you said that we're recording on August 19th, which is true for you. It's August 20th for me. Hello from the future.  

Alex Hanna: Whoa. How is it over there?  

Emily M. Bender: Really hot and humid, but otherwise beautiful.  

Alex Hanna: I meant the, I meant the future, but the future would probably be hot and humid too. 

Emily M. Bender: Yeah.  
 

Alex Hanna: No matter where we are.  

Emily M. Bender: Hopefully beautiful. All right, so let's, uh, get started, um, with the first artifact, um.  

I have us here on the Care.Coach website, um, which is a company offering solutions, um, to, I guess their, their customers are what the, um, uh, sort of nursing home, uh, what's the right word for that, Clara? 

Clara Berridge: Yeah, it looks like they are marketing to a lot of different groups, potentially. So, they're marketing to providers. Um, people providing care through PACE programs, funded by Medicaid or Medicare. And then to individual caregivers.  

Emily M. Bender: Mm hmm, mm hmm. Um alright, and I guess maybe to a certain extent to families, but maybe not so much? 

Clara Berridge: Yep, families.  

Emily M. Bender: Can you, can you explain what PACE is? I know on their website, it says Program of All-Inclusive Care for the Elderly, but, um, I'm not really familiar with what that is, so I'd love to know more about what those programs do and what they are.  

Clara Berridge: Yeah, so it's a program that has been quite successful in terms of supporting people who need, um, a relatively high level of support and care during the day, but, but don't live in nursing homes, or they may live in assisted living, they may live in the community, they may live in their own homes.  

But it's a program that allows people to, you know, come--they arrive for the day. They get to spend the day at this place surrounded by health and social care workers. There's activities designed for them. And so it's really about sort of a wraparound service to support their health and their well being. And so it appears to me that this company has been selling this product to some PACE programs. 

Emily M. Bender: All right. Thank you. So they say on their, on their website here, "What we can do:" um, and they've got this brief paragraph. "Unlike other digital health solutions, the Care.Coach platform combines the high touch ideal experience of having a live-in family member with the clinical efficacy payer/provider goal alignment of a care manager, and the scalability of AI and digital technology." 

What does that mean, Clara?  

Clara Berridge: Yeah, well, to me, right away, I just see how they're tapping into, um, a discourse. And this is a discourse that Louis Neven and Alexander Peine have called, like, it's the triple win of the aging and innovation discourse. They're naming this triple win. And, um, this is something that you might not have heard called the aging and innovation discourse, but you have heard it, right? 

It's, it's this discourse that has a rhetorical power that legitimizes really large investments in technology for the care of older people because aging is positioned as this imminent crisis that's going to affect whole societies, um, you know, and investing in technological solutions is then positioned as a way to generate benefits on multiple levels on the individual level, the societal level, the economic level. 

So that's what these scholars are calling the triple win. And digital technologies and AI are, of course, sold under the umbrella of resource scarcity. And so these authors have pointed out that this discourse that with technological innovation, we are all winners, is really morally charged and it can become very difficult to argue with. 

And then that's built on top of decades of what's been called alarmist demography or apocalyptic demography, right? You've all heard the crisis weather metaphors. This is a silver tsunami, it's an age wave. Like it's implying that we're going to be drowned by old people. And these discourses, um, that position aging and older adults themselves as the problem really contribute to something that I think is very much under the radar: a pressure or, um, a sort of moral responsibility that an older adult should subject themselves to whatever form of monitoring or surveillance in the name of care. 

And it's becoming the right thing to do. Right. So I think it's important to understand from the get go, there's a lot of forces at play that help to normalize surveillance of everyday life, the biomedicalization of aging and older adults. And that that's very reductive, right? 

The alarmist demography and this aging and innovation discourse. 

Emily M. Bender: Those metaphors that you're talking about, so the silver tsunami and so on, like when you said we're, that they positioned it as if we will be drowned by old people, that is so dehumanizing because it excludes the elders from that 'we,' right? 

When, when in fact, if anything, we should be focusing on the needs of these people who need care and how do we meet those needs rather than how do we protect ourselves from their needs, I guess. 

Clara Berridge: Yeah. And it, you know, it's often used, it's deployed very intentionally to reduce public support for social insurance programs. 

They're about spreading risk across generations, right? So we, we tend to lack a multi gen, a multi generational perspective when we're using this kind of alarmist demography.  

Alex Hanna: Yeah. And I see these kinds of discourses. I mean, it's of course deployed in the US, often in Europe. Um, also I think Japan also really is, I mean, it seems to be, I hear a lot about Japan having this aging society and how it's going to be this kind of like a silver boom, and it does this particular sort of work where it's, it's like, well, in the great society, you know, we were able to care for elders in such a way, but now people are getting older. And now we kind of have this narrowing base of younger people to pay into social insurance programs. 

And so what do we do about this? Um, and so, you know, you have people across the kind of political spectrum, kind of utilizing those for kind of different gains.  

Emily M. Bender: Yeah. So shall we jump into their, um, solutions? There are two that we want to talk about. There's Avatar, um, "our 24/7 human AI avatar connected software," and then a follow up there, which is "AI-powered automated chat software." 

So let's start with Avatar. Um, and I have to say, I hate that web design. This looks like a dropdown menu, but in fact, it's just a button. Um, all right.  

So we have here Avatar, TM. Um, and it's, we're looking at a, what looks like a phone or a tablet and on top there is a cartoon dog. And on the bottom there's a graphic of a person doing some exercises. Um, looks like leg exercises. And the text here says, "Meet Buddy, our virtual companion software using new technology and AI for optimal relationships." We don't optimize relationships, but okay. I'll do a little bit more before we react more than that. "Our virtual companions extend the reach of both the provider and health plan into the home. By engaging with the member throughout the day, an average of 22 minutes each day, Care.Coach provides psychosocial support and gains rich data for the health plan that can be used to improve member experience and optimize capitated payments."  

So who, who are we serving here?  

Clara Berridge: Yeah, there's a lot, there's a lot going on here. 

Maybe it would help to just kind of explain what's actually going on in the product. Like, so a person, so this avatar is called Buddy. It's a dog. It can be a cat too, I think. And there is a human being, um, behind it, powering it, watching and listening through a tablet, and typing messages to the person that gets converted to audible speech. 

And as shift change happens, because, you know, it's 24/7, so it can't be the same person. The, um, the user actually has no idea that a new person is now driving the avatar and has the notes from the previous shift. So this is how this is working. Um, and so, you know, 'optimal relationships' absolutely hit me because since when does an optimal relationship involve multiple forms of deception, right? 

Emily M. Bender: Right. Right. Good relationship, even. Like I'm, I'm itchy at, at trying to optimize relationships, but, but a good relationship does not involve deception.  

Clara Berridge: Good point.  

Alex Hanna: Yeah. And I think it's not clear that there's this kind of human thing behind this that is changing. They, in the ad copy, they say "with the fusion of AI and human connection, we have created the optimal service of a caregiver and the speed of digital products." 

So, you know, what are they doing on the back end? I mean, are they trying to identify particular sorts of instances of what the person is saying and then presenting it to the person on the other end? I'm really, I'm really chuffed by the, um, the kind of features that they have down here that they have in little icons. 

So they say "Avatar features, uh, in home companionship, periodic check ins, daily conversation, socialization and entertainment, COVID support, um, caregiver support, health and wellness services, appointment reminders, medication management, and home services confirmation."  

Emily M. Bender: Yeah.  
 

Clara Berridge: That home services confirmation, sorry, Emily. 

Emily M. Bender: No, go ahead.  

Clara Berridge: That, that makes you wonder, are they surveilling the workers who are coming in?  

Alex Hanna: Yeah, that's a good point.  

Emily M. Bender: I was, I was struck by 'socialization' instead of socializing, like this suggests that the, the elder is somebody who needs to be socialized as opposed to someone who needs to socialize. 

Alex Hanna: Yeah.  
 

Emily M. Bender: And I'm also, so one of the things about this, there's the deception, there's, um, surveillance of home care providers probably, and there's just this like blatant, we're doing this to collect data for you, right? 

So this, uh, the thing about, uh, "gains rich data for the health plan that could be used to improve member experience--" Okay, fine. "--and optimize capitated payments." I went and looked up capitated payments and landed in like this really confusing part of the way we manage the, the finances of healthcare in this country. 

Um, but basically this sounds to me like, uh, grab this data, so you can, you know, squeeze the most maybe out of, out of Medicare for this patient or something?  

Clara Berridge: Yeah, that's, that's pretty much it. So like with, um, Medicare Advantage plans, for example, capitation is going to be like a set amount. It's a fixed amount based on some risk calculation for that particular patient group that you're going to get. And you can kind of spend it how you want, but you bear the risk if you go over.  

So it enables companies or providers to provide services that Medicare isn't explicitly allowing, like maybe this, right. So they can try to save money and keep you out of the ER through this.  

Emily M. Bender: Yeah. 
 

Alex Hanna: Got it, yeah, that's, um, yeah, good Lord. 

I mean, that's, every time I read about Medicare Advantage, I am horrified about what a, what a program it is and how it's just become this place--and, and, and Care.Coach has one of these products too, um, which is called Auger, which is above, and it's more of a traditional risk management prediction system, basically, but it's basically saying, you know, you know, they, their language is you know, "we use a machine learning approach to process member information, claims data, uh, actually predicting," in bold, "each member's likely healthcare costs are influenceable through improved ambulatory/primary care in the near future." 

So, I mean, "our deep learning neural networks process every bit of data--" (laughter) This is really, really ominous. "--and can, for example, predict a likely hospitalization based on the interaction between a member's chronic conditions, position, encounter history, and prescription refill pattern."  

So it's just, you know, I mean, I know these, I think these are sorted alphabetically in terms of product, but it's, you know, in terms of the most sort of, um, dystopian of them that, that I think takes the cake.  

Emily M. Bender: Yeah. And I have to say the, the name of this is very on the nose, "Auger," right? This is a prediction based on magic, but we're gonna do prediction based on uh, "deep learning neural networks."  

And I just want to point out to any CS people who say, 'oh, those are just technical terms, it doesn't matter.' This is how it matters, right? Those technical terms turn into ad copy, and it sounds very fancy. Um, it's, there's nothing neural about it. It's not really learning in the sense of a human thing. And the deep is, is, you know, completely misleading. It has to do with the size of the network and nothing else. 

So. Grr. 

Um, all right, so we have a couple of comments from the chat, um, about the, um, the Avatar. Medusa Skirt says, "A cartoon dog with worse design than common VR chat avatars, LOL." (laughter) It's like, clearly they did not put their, um, you know, hard fought VC dollars into graphic art in this case.  

And Abstract Tesseract says, "Wow, if I was going to make a list of how I want my community to support our older neighbors, it would be, uh, the opposite of all of this." 

Clara Berridge: Yeah. Yeah. VC dollars, but also National Institute on Aging dollars, um, and support with, from AARP.  

Um, so another thing I just wanted to highlight, even though, so it's not clear in this particular, the Avatar how AI is actually used because there is a human in the loop in this.  

But on another page, I can't remember which page it's on, but it says it's supported by a 24/7 global team of health advocates. 

They're calling the technic, like the people on the back end, the health advocates, which is interesting. But I think it's, I mean, more accurate would be like Global South team because they're dependent on outsourced care work in the Philippines and Latin America, right, who they can pay less than the people if they worked in the same country as a user. 

And then on the company page, I just, it's really good, at the bottom, at the bottom of the company page. So you can find it behind, behind the founder story. Yeah. Under that. There it is. Our software.  

So, "Our software is engineered with love in our hearts and HIPAA on our minds in San Francisco, Bay Area, California. Our devices are proudly designed, assembled and shipped from Lawrence, Kansas."  

So they're thinking about HIPAA in California, thinking about it, and proudly-- 

Alex Hanna: Only thinking about it. 

Clara Berridge: Yeah. (laughter) Proudly producing the device in Kansas. But what about the 24/7 workers that are, you know, manning the Avatars? Why aren't they proudly advertising what, where that work is happening? 

Um, so I think it's interesting. They're dehumanizing the workers in a couple of ways. One, they're recruiting them into cats or dogs. And two, they're not allowed to talk about themselves to the user. They, you know, um, so they're sort of filtering out parts of themselves. They're not allowed to answer questions when the user says, well, how are your children doing? 

They're not supposed to answer that. Um, and there's--  

Emily M. Bender: So much for socialization, right?  

Clara Berridge: Yeah. And like robbing people of reciprocity in those relationships. Care is not one directional. They're trying to make it that way. Or they're envisioning it as something like that. Amanda Lazar is a great researcher at University of Maryland, and she's researched this, this very product. 

And she's found that older adults don't want to just be objects of care, surprise, surprise, like they also want to make genuine connections with people and have opportunities to care about the caregiver, to inquire about their lives.  

Emily M. Bender: Yeah, absolutely. I'm thinking about my, my grandmother who towards the end of her life, um, was fortunate to be able to hire people for in home care and she had great relationships with those people. 

And, and the fact that she could hear their stories and connect with them was like oftentimes the highlight of her day. So, but also to think about it from the care worker's perspective, I think that, that is one of the, the big sort of pluses of that kind of work is the connections you make with people. 

Um, so, all right. So, but should we take the people out of the equation altogether and go look at Fara here? Although, are the people really out of the equation or are they just now in the background like annotating stuff rather than interacting directly at all?  

Alex Hanna: Yeah, this one's called-- 

Clara Berridge: I think with F-- 

Alex Hanna: Yeah, yeah.  

Emily M. Bender: Yeah.  
 

Clara Berridge: I think with Fara there, they use the Buddy data to develop Fara. So those workers were involved in the creation of Fara.  

Emily M. Bender: So Fara, TM. The headline here is, "Develop person centered relationships, coach behaviors, and discover care needs at total population scale." How is it person centered relationships if it's not a relationship between two people? Like, ahh.  

Okay, and then we have actual transcript from a live demo to an AARP executive role playing as a Medicare member. 

"Member: Do you think it's okay to ask him to come?  

Fara: I think it's a great idea to ask him to come, to come over and watch the game with you.  

Member: Maybe I can make his favorite cookies for his two kids, too.  

Fara: That's a great idea. I'm sure they would all love it.  

Member: Well, I really like to make the cookies, but now that I think about it, I've just been really tired lately. I don't know if I could do it.  

Fara: It's okay to feel tired, Anne. Maybe you can ask Josh if he can help you make the cookies. I'm sure he will be happy to do it.  

Member: Fara, you clearly don't have kids. My son cannot cook. You got a better idea? Ha ha.  

Fara: Well, how about this? Why don't you make the dough ahead of time, and then Josh can help you with the baking. That way you can still make your delicious cookies and spend time with your son."  

That's interesting. So, (laughter) um, there's this, uh, there's a lot of stuff to be said there that we probably don't have time for about how the AARP member imagines, um, what this person would be doing, or sorry, AARP exec.  

Um, but, uh, back to the, the, uh, outside the demo, "Imagine the type of deep personal relationships that individuals develop with their Care.Coach avatar, thanks to its trained human staff controlling each avatar device. Now, imagine you could immediately and cost effectively scale these relationships across your entire population. How many more healthy, independent years of life and how many fewer visits to the hospital could we drive through improved self care behaviors and better understanding of each person's needs?" 

I-- 

Alex Hanna: Jeez, yeah, this is, this is really, this is really, absurd. I mean, the next, the next sentence is saying, you know, they're jumping on the hype train.  

They say "Care.Coach Fara is the leading relationship driven healthcare support solution powered by generative conversational AI, the same type of technology that powers ChatGPT. But unlike big tech companies, whose AI training data is unspecific to healthcare and caregiving, Care.Coach's data comes from 10 plus years of supporting Medicare and Medicaid members living with complex psychosocial and medical challenges and a million plus conversations per year conducted through Care.Coach Avatar.  

Clara Berridge: Yeah, it's like heads up Buddy users, your weird conversations in your home, that you probably imagined were private between you and your Avatar, have been used to train this company's AI.  

Emily M. Bender: Yeah. More deception, as you point out. Um, I, this next sentence also really got me, uh, "Through safe healthcare-tuned AI, alongside traditional care management teams, we can work together to ensure that healthcare isn't left behind yet again by technology, and that historically underserved populations have access to the benefits of the seismic shift in what is possible." 

So there's sort of two things going on here. One is the, um, sort of leaning on the actual problems faced by underserved populations to motivate what they're doing. And the second thing is this weird presupposition that healthcare has been left behind by technology, um, rather than, you know, being one of the few sectors where we have some regulations in place that prevent the tech bros from going in and immediately breaking everything. 

Like. (laughter)  

Clara Berridge: Yeah, absolutely. I, I really, um, didn't like the sentence either, the historically underserved populations having access, right? I think this is really problematic. Um, you know, we don't want to spend on expensive human care for them. So this is what they're going to get. And we're going to cause it positive--we're going to call it positive innovation. This is progress, like thinking of Ruha Benjamin's critiques. 

So give older adults who the systems are not adequately serving a tool that doesn't stand up to basic ethical principles. And in the mean--it also generates a lot of data and profit. 

When I, when I talk to older adults about virtual companions, people will equate not being provided the desired support from humans, like if they want actual human care and interaction, with being treated as a lesser person because the implication is you don't deserve care from other human beings.  

Alex Hanna: Yeah, that's a, that's a really great point. I mean, that's the thing that I think we come across so much in AI hype where there's a discussion of equity, but the discussion of equity is that people have unequal access to text synthesis machines, which is really, really bizarre. I mean, this happens a lot in education and healthcare, which is, it's, it's exactly the opposite. I mean, the people who feel left behind are the people who aren't getting human contact.  

Clara Berridge: Absolutely.  

Emily M. Bender: Yeah.  
 

So just to finish off the blurb here, "Care.Coach Fara, TM, builds real relationships with members."  

No, it doesn't. Right? I mean, if anything, yes, yes, people can create relationships with non people. I mean, certainly with, with pets and other animals and, and also with inanimate objects, but the thing on the other end isn't doing any relationship, right? It's all on the person side.  

Um, "State of the art technology generates conversations about topics of personal interest to each member. FARA builds upon data from Care.Coach Avatar to engage entire healthcare populations through empathic conversational AI."  

Ah, what kinds of nonsense is it going to be saying? 

Clara Berridge: Yeah, and why are we trying to outsource empathy to machines that aren't capable of it? Why for this particular population, right? There's a lot of ageism and ableism going on enabling that.  

Emily M. Bender: Yeah.  
 

Alex Hanna: I'm curious on this, in your opinion, this turn of phrase, Clara, what did they mean when they say 'entire health care populations'? 

I mean, what is that stratification? Is it on age? Is it on condition? Um, I'm curious on what that turn of phrase is indicating.  

Clara Berridge: It seems to me like they're targeting, if I have the right product, they're targeting to older adults primarily, but then they've also identified individuals with intellectual and developmental disabilities, I think, as a target group. 

So they may be referring to those two large groups, but they also may be referring to like, we're going to help people manage their diabetes. Um, you know, it could be sort of broken down by chronic condition. I'm not sure which. But people are defined as healthcare populations. This is, this is not uncommon. 

There's biomedicalization of aging, right?  

Alex Hanna: Yes. A hundred percent.  

Emily M. Bender: Yeah. Yeah. Oof, all right. So any, any final thoughts about Care.Coach before we switch to our second artifact?  

Clara Berridge: I actually do because the founder gets interviewed a lot and I was listening to a recent interview and something he said, I think it reveals pretty directly the product's calculations and it relates to this like underserved populations thing. 

So he said that you can think of privacy like a currency. Um, like if it's worth what you gain, you'll trade it. And obviously I don't agree with that take on privacy. It's, you know, privacy is inextricably linked to all these other values, like autonomy and feeling free and trust and all this that you know. 

But if privacy is a currency, you can see then that it's the people who don't have actual currency or funds to choose the privacy preserving option. They're going to be the targets of the product. So lack of access to needed care in this way makes you vulnerable. You can be backed into a corner to trade your privacy. 

Emily M. Bender: Yeah. And if privacy is a currency, then that sounds like companies like Care.Coach might be trying to amass that currency, right? Of course it stops being privacy at that point. It turns into data. But basically they're, they're seeing it as a source of wealth and income that they can just gather together and, and, and steal from very large healthcare populations. 

So that even if it's a little bit at a time, they get a lot.  

Alex Hanna: There are people listening to the pod. Uh, uh, Emily has just like a shocked look on her face and I just look disgusted, but it's, it's, you know, when I, I mean, it, it, it's, to me, it's, it's this kind of a particular sort of ideology of kind of a propertarian kind of notion of data. 

It's like, yeah, you could sell your data if you don't actually have currency, and then you just have to be bent over the barrel in the right way.  

Emily M. Bender: And I think the thing about privacy as currency is that there's this big asymmetry in what an individual's privacy is worth to them versus what an individual's data is worth on the other side. 

So the company is gathering lots and lots of people's data, and each of those pieces might not be hugely valuable, but together they are. And for the individual, it's enormous, it's their privacy. Right. That, um. (sighs)  

Clara Berridge: Not to mention that the informed consent is not voluntary in these situations, generally, there's no meaningful disclosure that an older adult actually understands or is presented with--like a caregiver is often sort of the person who might see it on the website and is making decisions.  

So that's a problem.  

Emily M. Bender: If this is put in by, you know, some company saying that we're doing this for all the people we're providing care for, then the people for whom maybe that, that company was their only option for care really don't have a choice. 

Clara Berridge: Yeah. Bingo. Yeah. Same in residences. If somebody, if a company or an organization that provides housing wants to implement a device, they're going to want it implemented across the rooms, right? They're not going to want people to have individual choice about what they do and do not want to subject themselves to. 

Emily M. Bender: Yeah. So there's not even opt out much less opt in in this case.  

Alex Hanna: Yeah.  
 

Emily M. Bender: All right. There's, uh, let's switch over to this thing in the New York Times, where there at least seemed to be some kind of individual opt in opt out, but it's pretty awful. Um, so this is an article, um, by Erin Nolan from July 6th of this year. 

Headline is, "For older people who are lonely, is the solution a robot friend?" What is it? Betteridge's law of headlines? 'If it's a question, the answer is no.'  

Alex Hanna: I haven't heard that. No, that's great. Yeah.  

Emily M. Bender: All right. Subhead here, "New York officials believe a robotic companion named ElliQ, which can discuss complicated subjects, is helping older residents feel less alone. Critics are concerned about data collection." And this is like such, so typical of New York Times coverage of tech where it's like, here's the hype, oh, by the way, critics say this. And there's, they sort of put the critics in the critics box instead of this, you know, this could have been written another way. 

Right. Um, people, advocates paying attention are concerned about data collection under the guise of robot companions. Like it could have been framed the other way around and put that first, but of course they never do that. It's always the sort of tech co's perspective first in these articles.  

Alex Hanna: Yeah. The reporting here is pretty abysmal. 

I mean, it's, I mean, it's very it's very limited. Even the critics are, you know, have a very kind of muted sort of framing on them. And it, you know, ends on a very hopeful point. So anyways.  

Emily M. Bender: Yeah, the shape of these articles is basically sob story, splashy tech solution, critics say, but then you get the, the last word goes back to the people selling the tech. 

Alex Hanna: Yeah. It's very typical of the Times.  

Emily M. Bender: Abstract Tesseract says, in quotes, "'Critics, who clearly want older adults to be lonely and care workers to be exploited, are being mean to AI visionaries who are just trying to help.'" (laughter)  

Alex Hanna: Yeah. A hundred percent.  

Emily M. Bender: Yeah.  
 

Clara Berridge: One thing that's kind of unique about this piece compared to how this topic is usually covered by the media is that they do emphasize the data privacy problems and the need for regulation. That's rare. 

Um, and it's a real concern of older adults. Um, you know, I've done surveys of like over 800 people on this topic, on companion robots, and do you, who do you want to get the conversation data? Do you want it recorded, et cetera? They do not--absolutely nobody wants it to go to insurance companies. Nobody wants it to go to the tech company. 

Nobody wants people accessing the data that they don't have control over, right? Um, but we don't have laws in place to, to make sure that's not happening.  

Emily M. Bender: Yeah. Yeah. And there's, I guess I want to skip over the sort of sad story at the beginning of this that introduces, is it ElliQ? ElliQ? What's the--  

Clara Berridge: I pronounce it ElliQ. 

Emily M. Bender: Okay. Um, it's, it's sort of, in a very gross way, it has IQ embedded in the name, which is not surprising.  

Alex Hanna: Just for extra grossness. Yeah.  

Emily M. Bender: Yeah, exactly. Um, so, "ElliQ, a voice activated robotic companion, is powered by artificial intelligence." I'm sorry. "--powered by artificial intelligence, is part of a New York state effort to ease the burdens of loneliness among older residents. Though people can experience feelings of isolation at any age, older adults are especially susceptible, as they're more likely to be divorced or widowed and to experience declines in their cognitive and physical health."  

I mean, true, right? This is, this is as usual, there's a real problem in the world that's being identified. And then that is being used as an excuse to throw some surveillance tech at it.  

Alex Hanna: Yeah, and, and then they go ahead and the next, I mean, the next part is the shocking part. So, "New York, like the rest of the country is rapidly aging and state officials have distributed free ElliQ robots to hundreds of older adults over the past 2 years." 

And so it's, I mean, the fact that New York state is purchasing this and distributing it is pretty shocking. Um, you know, they link to this and they link to this kind of aging, the silver, did you call it the silver tsunami at the top of the pod? But the kind of population bomb kind of thing here, you know, and they, you know, they link to this kind of thing, um, about-- 

Emily M. Bender: I'm interested, where it says "New York, like the rest of the country is rapidly aging" is an interesting framing because it's, it almost goes in the right direction, right? We as a population have a higher average age than we used to. So you can see that as we, as a population have to work out how to provide care for those who need it. Like that, that is a statement that sort of brings everyone together, I think. And then it's like, okay, so let's buy some robots. 

Alex Hanna: Right, exactly. And it's, yeah. And so it's got this, you know, these population demographers linked here and talking about where this is and, um, kind of, uh, we can go into that later, but yeah, I mean. And then jumping into it and talking about the company. So "Created by the Israeli startup Intuition Robotics," So, "--ElliQ consists of a small digital screen and a separate device about the size of a table lamp that vaguely resembles a human head--" (laughter) It doesn't look like a human head at all. "--but without any facial features. It swivels and lights up when it speaks."  

And you look at the picture at the top. It looks, it does, it looks like a tablet and it looks like a lamp; to call it a human head or anything looking like a human head is, um.  

Emily M. Bender: It's giving eye of Sauron. 

Alex Hanna: It is giving eye of Sauron. It's giving maybe HAL, um, from 2001: A Space Odyssey, but human head is quite a, quite an overstatement and a weird jump in anthropomorphizing.  

Emily M. Bender: Yeah. All right. So then, "Unlike Apple's Siri and Amazon's Alexa, ElliQ can initiate conversations--" Great. So it can like intrude, it can start the conversation. And, wait, sorry, "--and was designed to create meaningful bonds. Beyond sharing the day's top news, playing games, and reminding users to take their medication, ElliQ can tell jokes and even discuss complicated subjects like religion and the meaning of life." Uh, right, so it can certainly output text on those topics, um. 

Yeah, I'm not, I'm not super, uh, comfortable like the, they do later on, say in the discussion of privacy, well, we're careful to make sure it doesn't do anything with bank data, but we're going to--  

Clara Berridge: To make sure people don't think it's not private. Right.  

Emily M. Bender: Right. Yeah. Or any kind of invasion of privacy they have to worry about. Um, okay. So--  

Clara Berridge: With these, you often see like this mixing of functions. So there's like these instrumental tasks that it can do. And some of these are, you know, they facilitate somebody calling a human being, like they might facilitate a video call. Right. Um, they might, you know, generate jokes. Like there are some things where it's like, okay, that might be useful and good, but they're all kind of lumped together and then they're supposed to provide companionship, like a human, right, and then there's the data, what's happening with the data.  

So they're a problem, but they can be kind of sold under, well, look at all these--what's wrong with, you know, telling jokes or, you know, um, medication reminders? 

Of course, there already are medication reminders and reminder technologies.  

Emily M. Bender: Right. And you could have something that's special purpose. I mean, you can set an alarm that tells you--you don't need this whole thing.  

Clara Berridge: You don't need AI.  

Emily M. Bender: Do you know if this is like continually listening or if it has a wake word that it's listening for? 

Clara Berridge: ElliQ doesn't have a wake word, um, because it can initiate conversations. Um, so I'm not sure what the privacy settings are on this one, but, um, we know that people want privacy. Older adults very much desire the opportunity to turn things off and to have privacy. Um.  

Emily M. Bender: Yeah.  
 

Clara Berridge: Yeah.  
 

Emily M. Bender: I feel like this is a question that journalists should have asked. This thing is sitting in your home. Is it, is it always listening to everything that happens? Right. Is it always recording all of that? Um.  

Alex Hanna: Yeah, I mean, the kind of things that we have, and we know that, I mean, if it's, if it's listening or you have to opt out, I mean, many of these folks are probably not going to have the technical acumen to do this, I mean, that's, that may be a big assumption, but I mean, you know, it's, these things should be, if you want the data, I mean, you need to treat it with the kind of respect that it needs.  

And then on the data tip, they've got two quotes. I mean, they have Assemblywoman Linda Rosenthal saying, "It's clear that technology is leaps and bounds ahead of the law--" Which is maybe overstating that. "--it always is, so we have to hurry up and pass some guardrails so this technology doesn't take off with all our information and data and use it in ways we wouldn't otherwise permit."  

Emily M. Bender: Okay, two things there. I really appreciate that Assemblywoman Rosenthal is on the case and looking at this, but I wish--so first of all, as we're both reacting to, "leaps and bounds ahead of the law." 

No, it's not. It's that the tech companies are asserting that the existing laws don't apply to them. Um, and too many policy makers and enforcement agencies, unlike the FTC, are sort of willing to go along with that. Um, but also, we have to hurry up and pass some guardrails so that "this technology doesn't take off" with all of our information. 

No, it's not the technology that's taking off, it's the tech company that's gathering that information and doing stuff with it. So, I'm glad that she's on the case here, but I wish that she would sort of look at it as actions by the companies and not by the technology.  

Alex Hanna: Well, it's also the case that both Rosenthal and they also quote Kristen Gonzalez, um, who's the chair of the Internet and Technology Committee. Um, I've spoken with the senator, Senator Gonzalez, before, who is, um, a younger representative and is really like looking at some of this technology. They say, um, "It's incumbent on state government to act and say, how are you storing, protecting and using that data? And how are you making sure it's not being used in any way that can negatively affect users?" 

And the thing is, I mean, the kind of there's the focus on data. I mean, the focus on data is valid, but I mean, the kind of refusal to kind of look at the ecology of elder care, or at least of the, the journalist writing about it. I mean, I think that's the kind of, it seems to me that's more of a, I mean, I don't know, I'm assuming that the angle that the journalist, uh, kind of went with here, but I mean, the, I mean, it's not just the data, it's just everything around how we treat elders in the state and in the country. I mean, that's, that's what needs to be focused on.  

Clara Berridge: I feel like that comes through kind of subtly. I wanted to point it out in the next, the story they give about Monica Perez. 

Um, yeah, you just, um, so, uh, I saw, uh, I think it's up a little bit. They kind of start with, yeah, there we go. So they, they introduced Ms. Perez's story with "Following a move to a new apartment, her health and well being started to decline." Right. And so just, I guess, as a gerontologist, I'm picking up on that because it is so common. 

This is such a common problem, that people have to move far from where they, their communities are because of lack of access to affordable and physically accessible housing for older adults. And so I've done research in New York. They're referencing, you know, moving potentially out of Manhattan. I'm not sure if Manhattan is usually just the geographical reference point here, but like she now lives an hour and a half north and is not close to any friends or family. Um, and is spending weeks or months alone in the apartment, right?  

So what is causing that, like, moving away from that established community? Why aren't we then focusing, okay, the problem here with the loneliness is caused by this lack of affordable, physically accessible housing in her borough. The wait list for housing in New York, last time I was there doing research, was eight years for senior housing that's affordable. Eight years. And then transportation does--it's not considered safe, reliable, and accessible to a lot of older adults. 

And so I've talked to many people who said, 'I haven't visited the neighborhood where I raised my children and had this great community for, you know, so many years since I moved to this housing that I needed because I, I need, I had, I had stairs and this doesn't have stairs. And I, my name finally came up on the wait list. I had to move to another borough.'  

But this is really common across the country. The underlying problem is that we're not enabling aging in community.  

Alex Hanna: That's a great point. And our producer, Christie Taylor, who is a New York resident, um, points out that the subway uh, MTA in New York has an ADA exception for catching up on accessibility. 

And I know that MTA is notoriously inaccessible.  

Emily M. Bender: Yeah. And so this, this is sort of such a typical thing where the tech bros say, well, we have to have all this data in order to provide the service. What are you going to do? You need the service. And, you know, as you point out, Clara, no, what you need to do is you need to pull back and say, okay, what's the actual cause of this problem? And let's put our efforts towards addressing that rather than using the problem as an excuse to do all this invasive, you know, privacy, uh, harming, deceptive activity.  

Um, so I wanted to hit the, um, um, the part where they, they talk about banking just because, um, okay. So "Dor Skuler, the chief executive of Intuition Robotics, said that ElliQ remembered every conversation and exchange it had with the user. The ability to retain so much data about a person's life, health, and relationships was critical to how ElliQ functioned, he said."  

So we can't build this tech without the massive privacy invasion. Um, "But the company opted not to give it the ability to assist with tasks that require payment or banking information, in part to reassure users that their data was safe." 

So they're not collecting your bank information so that you believe that the rest of the data is safe, I guess. As, as you're pointing out there, it's just, it's just more deception.  

Clara Berridge: Absolutely.  

Emily M. Bender: Yeah.  
 

Clara Berridge: And would you mind scrolling down to where we see this ethicist talk about, yeah, um, yeah, the worst case scenario. 

So, right. Um, this ethicist says, "The worst case scenario is that it makes people not even want to interact with other relationships or their friends. They're not actually able to enjoy the beautiful reciprocity that emerges from social interaction."  

So I agree that that's a really bad and very possible scenario, but I want to challenge us to think of those scenarios in structural terms too. Like, reducing opportunities for people to have those social interactions. You know, what are the opportunity costs when solutions absorb the resources, or they provide the band aid that enables the box to be checked?  

You know, we've got AI companionship for social isolation in New York now. Um. We have a context of, you know, nutrition programs, meals on wheels, housing, as I mentioned, all kinds of programs, senior centers are chronically underfunded, libraries. You know, Seattle, here in Seattle, they've reduced their hours because of funding problems. Older adults use libraries. Like places where people can come together and have community and get a sense of belonging and have that sense of meaning, which we know people want, um, those are chronically underfunded, often with waitlists. 

Emily M. Bender: Yeah, yeah. And so if we're, if we're putting resources into so called AI companions, those resources can be better spent on building community spaces. But also if we, you said you put that Bandaid there, it's like, okay, maybe you've got somebody who really is facing transportation difficulties, like as we were talking about and they can go out and see friends, but it's kind of hard.  

And so it's like, well, here, here, here, use this AI companion so that you don't feel quite as much motivation to go out. And then I could see that sort of turning into a bad spiral, right? Because it's harder to figure out the transportation and on and on and on. 

Um, so as, as you were saying, I think before we started recording, huge opportunity cost here. Yeah.  

All right, shall we briefly look at our third artifact? Three is always ambitious.  

Alex Hanna: I, I think, I think we, do we have time?  

Emily M. Bender: Let's just at least say what it is. So this is some academic work, um, by, uh, Cristina, uh, Getson and Goldie Nejat, uh, and the title is, "Care Providers' Perspectives on the Design of Assistive Persuasive Behaviors for Socially Assistive Robots." 

So from the title, it sounds like they're actually talking to care providers, which sounds like a good first step, um, but maybe not going in the direction we want. (laughter) But what happens--  

Clara Berridge: I think, I also think, why aren't they talking to their, the target population here is people with mild cognitive impairment, who can absolutely understand these technologies and form preferences about them. 

Why aren't they starting with those, those folks? They're starting with the caregivers.  

Alex Hanna: Yeah. And they have the design. "We developed 10 unique robot persuasive assistive behavior strategies for the social robot Pepper," capital P, "using both verbal and nonverbal communication modes." And then there's some kind of behaviors of principles of persuasion that they followed there. 

And then they assess this based on a perceived argument strength Likert scale.  

Emily M. Bender: Mm, "Care providers' perceptions of the quality, strength, and persuasiveness of these robot behaviors." So basically, they're asking the care providers, how good will this robot be at getting the people you're providing care to, to do certain things? 

Yeah.  

Clara Berridge: Yeah. And the robots are saying things like, uh, Pepper is saying, 'It would make me happy.' It was paired with some kind of gesture, right? They're testing, like, 'it would make me happy if you would do this activity with me again tomorrow,' with its arms up. Right? Uh, very, yeah, very interesting.  

Emily M. Bender: Yeah.  
 

Alex Hanna: "The results show, yeah, the results show robot assisted behaviors using praise with emotion, along with emotion with commitment are most positively rated by care providers." 

So exactly what you're saying, Clara. "Qualitative responses indicate robot body language and speech quality were influencing factors in how a person perceives assistance in human robot interactions."  

Emily M. Bender: Yeah, this whole thing just feels so gross in the context of, we are fobbing off the elders who we should be caring for onto these robots. So let's make the robots more fun or more persuasive.  

Clara Berridge: Yeah, I think there's some real, so I got some insight into this. I, um, there's an article, a newer one by Daniela DiPaola and Ryan Calo, um, on socio-digital vulnerability that I found really useful for thinking about this in terms of, like, the socially persuasive technology and how it can open people up to autonomy harms because, you know, in particular, in this case, somebody is not fully aware of the techniques that are being used to gain their trust or to persuade them.  

Um, and so they're also, it's also like collecting data about them through AI, right. And then using that for nudging purposes and persuasion and getting compliance. So that could clearly violate someone's autonomy, right.  

Alex Hanna: A hundred percent. Yeah.  

Emily M. Bender: Yeah. Yeah. Um, um, and I mean, this is, it's, it's always interesting to look at academic work because like we need a broad range of investigation into this, but academic work that sort of starts from the presupposition that robots are a good way to do this, it seems sort of difficult that it could be, difficult to imagine that it could get to good outcomes if it's starting with, we're going to be using robots. Um, and so 'let's ask some questions about how we can make them work better,' as opposed to 'let's figure out if there's actually any appropriate ways to use the robots and keep no on the table, sort of collectively and individually.' 

Clara Berridge: Yeah. Yeah. I'd love to see you do an episode on funding because I mean, there's so many pressures to, to not keep no on the table. Right. That academics and researchers face.  

Alex Hanna: Yeah, a hundred percent. Free dissertation idea. If you want to look at the funding calls at NIH or the National Institute on Aging and how much of those are having to do with AI or robots. Yeah, there you go.  

Emily M. Bender: Yeah. Excellent. Alex, Alex dispenses free dissertation ideas on almost every episode. And I want to say, if anybody is actually picking up one of those ideas, we would love to talk to you.  

Alex Hanna: Let me know. (laughter)  

Emily M. Bender: Um, all right. Any final thoughts before we transition over to Fresh AI Hell? 

All right. Nope.  

Alex Hanna: Let's do it. Take us to hell.  

Emily M. Bender: Okay. Yes. Well, you get to do the taking, I get to tell you what the path is. Um, Alex, are we musical or non musical today?  

Alex Hanna: I've been singing a lot today. Let's do musical.  

Emily M. Bender: Okay. What's our genre?  

Alex Hanna: Uh, you know, let's say dad rock.  

Emily M. Bender: Okay. So, um, here's the deal. The, uh, Care.Coach team ran out of majority world workers to manage the avatar. And so they've started actually hiring the demons of Fresh AI Hell and throwing just completely enormous numbers, like caseloads, on them. So you are, in dad rock, singing about what it's like to be a demon answering questions from 100 different, uh, people on the other end of this avatar thing.  

Alex Hanna: There's so many layers to this. Yeah. All right.  

 (singing) Going to my job  

down at the Care.Coach.  

These people don't pay me enough  

that I can get out of my roach 

-filled hell apartment.  

I'm going to find a new job  

where I don't have to talk and pretend to be a robot dog. (laughter)  

Emily M. Bender: Excellent. Excellent. All right, here we are with our first entrant for Fresh AI Hell. 

This is an article in The Verge with stickers "Apple Tech" and "Artificial Intelligence" by Wes Davis on August 6th, 2024, and the headline is, quote, "You are a helpful mail assistant," end quote, "and other Apple Intelligence instructions." Uh, subhead, "Some pre prompt instructions for Apple's upcoming AI features are stored on your Mac, saying 'do not hallucinate' and to 'avoid negative themes.'"

So basically what happened here is someone managed to get access to the, the pre prompt instructions. So the prefix that is sent in, um, before whatever question the user is asking as the prompt that the, that the system is supposed to be responding to. And the thing that is the most hilarious to me here is that someone at Apple believes that if you tell the large language model, 'do not hallucinate,' that'll solve the problem. 

Alex Hanna: It's incredible. I just, it's such a funny thing to do. Like as if the thing was like, 'Oh, okay. I was going to make up a lot of shit, but now I won't. Okay.'  

Emily M. Bender: Yeah. Okay. You got me.  

Alex Hanna: You got me! (laughter)  

This next one is some sheer, sheer hell. So this is from MIT Tech Review by Eileen Guo, and the title is, "The U. S. wants to use facial recognition to identify migrant children as they age." The subhead reading, "A previously unreported project is intended to improve how facial recognition algorithms track children over time." And this is just an awful article, I mean, go ahead and read it.  

It's a pretty long article, but basically they're saying that because there's effectively no public data set on children aging, um, CBP, Customs and Border Protection, is basically saying that they're going to start doing this for, for migrant children, um, a huge percentage of whom are unaccompanied, um, and then use that as a research data set or, or somehow release it.

I mean, it's, it's, it's pretty, it's pretty harrowing.  

Emily M. Bender: Just incredibly exploitative. Like let's take people who are already like facing so much and just, you know, not, not even treat their privacy as currency, right? We're just, we're just going to grab it. It's, it's ours with, with nothing in return. 

All right. Um, also in, uh, pretty dismal, um, and unsurprising news: this is a tweet by Robert Evans (the only Robert Evans), from August 17th, um, or possibly August 16th, since I'm looking at this from Japan. Um, and it's quote-tweeting, um, something from relationships.bluesky.Social, um, saying "Family poisoned after using AI generated mushroom identification book we bought from major online retailer."

And, uh, Robert Evans comment was, "The thing everybody knew was going to happen, happened."  

So this is a story from Reddit, um, by a parent who basically said, yeah, we bought this mushroom guide online, did not know it was auto generated, used it, and ate a mushroom that put us all in the hospital for a week. 

Um, and it's like, that was totally predictable as soon as those things hit the shelves. And it is just horrific that they stayed up long enough for this to happen. Um, and I really hope that this person, um, is going to take legal action and try to get some clarity about where the accountability lies in the situation. 

Alex Hanna: Yeah. So, I mean, what happened here is that they didn't know it was AI generated. Um, but then they read it closer, and basically the language had this kind of fourth wall breaking language, some of these tells, which are like, 'let me know if there's anything else I can help you with.'

And then, um, these people are effectively trying to find some kind of accountability, um, with the major online marketplace. I mean, you could probably guess which one, um, it might be another one, might be surprising, um, and see if there's any kind of recourse that they have, which, unfortunately, there's often not in these situations. 

But, hey. I hope they can sue the pants off them.  

Emily M. Bender: Yeah, hope so too. I hope they're feeling better. Um, and I'm glad that it was, you know, only a matter of hospitalization and not worse. Um.  

Alex Hanna: Yeah.  
 

So this next one is, uh, from CommonDreams.org, and the title is "Corporate Greed is Out of Control," which is a quote from Elizabeth Warren, and then it says, "Warren Slams Kroger's AI Pricing Scheme."

Um, and the subhead, which is a quote from Warren: "'Kroger is trying to pull a fast one on us by using digital price tags, a move that could let them use surge pricing for water or ice cream when it's hot out.'" And it's by Julia Conley. Um, and basically what's happening here is that Kroger, the huge grocery chain, is trying to use these tools to do basically surge pricing, um, for different types of products.

Um, so talking about the surge pricing for water and, uh, ice cream. And then in the pre-show you said they were also trying to do something around, like, using gender and, uh, age recognition to also offer new things.

Emily M. Bender: So this says, "As the senators explained, the EDGE shelf helped Kroger gather and exploit sensitive consumer data." Um, Edge, I'm skipping ahead a little bit, um, "Kroger plans to place cameras at its digital displays which will use facial recognition tools to determine the gender and age of a customer captured on camera and present them with personalized offers and advertisements on the EDGE shelf." So that is, um, saying we're going to do bespoke advertising based on this surveillance, but like it's a tiny step from there to we're going to set the price based on the surveillance. And this whole thing is about how basically the point of this is to, um, set the price as high as the customer would be willing to pay automatically based on all the surveillance data. 

It's gross. We probably need to do a longer one on this sometime, but I have two palate cleansers for us today. I'm excited. Um, I can describe the first one, so you get the second one, Alex. Okay?

Okay. So this is, um, uh, from, uh, USA Today by Emily DeLetter, uh, four days ago. "Video shows Waymo self driving cars honking at each other at 4am in a parking lot." 

So there's this parking lot that's full of Waymo cars moving slowly and honking at each other. Because they are running into each other or getting into each other's space and they're trying to park. And it's going on at 4am, um, surrounded by what looks like maybe an office complex, maybe a residential complex. 

And this is just hilarious to me because it says so much about, like, UI design and one of the futures that keeps being sold to us about self-driving cars: that the self-driving cars will be able to communicate with each other and get out of each other's way. But not by honking.

 (laughter)  
 

Alex Hanna: Yes. Sorry. I, I had to put something in the, in the, in the Twitch chat. I said, "It's a beautiful night in the neighborhood and you are a horrible car."  

Emily M. Bender: Isn't there some angry goose meme too?  

Alex Hanna: Yes. That's exactly what I was referencing. Untitled Waymo game. (laughter)  

Emily M. Bender: Okay. And then finally you get this one.

Alex Hanna: Yeah. So this is a, uh, a tweet by Orikron, uh, and it's, "There's a trend on Chinese social media where they imitate AI videos and it's beautiful."  

And it's going to be hard to describe this for the pod, but basically there's two people. One of them kind of like leaves a door--or like--I'm going to wait until the video restarts to describe it.  

Um, so basically one, yeah, one person opens the door, pushes somebody else. The other person tries to strike, and then makes this just uncanny face and pulls back. A different person shows up with a, a glass of wine, and the original person pulls out a bowl of noodles and then starts to shove it into their face, just, like, completely making ridiculous faces.

And it's this perfect encapsulation of starting with this frame and then making these goofy-ass faces, the way that these AI video generators are supposed to be impressive in some way but just have no idea what is actually happening here.

So I, I love that. I love these enactments of it. Just incredible. Just top-notch internet here.

Emily M. Bender: Yeah. And these, these actors are doing such a great job with, like, the weird sort of slow-mo, dream-like motion that comes out of a lot of these things. It's hilarious. Abstract Tesseract says, "Life imitates AI imitates art."

Alex Hanna: Yeah. One hundred percent.  

Emily M. Bender: Something like that. Yeah. Art imitates all that. All right. I need to take us to the outro because we are at time. Um, and sorry, I've got to, I've got to find the right window. 

That's it for this week. Dr. Clara Berridge is an associate professor at the University of Washington School of Social Work, focusing on digital technology and elder care. 

Thank you so much for joining us today, Clara.  

Clara Berridge: Thanks for having me and for covering this topic.  

Emily M. Bender: Super important. Thank you for bringing it to us.  

Alex Hanna: Yes. Our theme song was by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. 

If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify, and by donating to DAIR at DAIR-Institute.org. That's D A I R hyphen institute dot org.  

Emily M. Bender: Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/DAIR_institute. Again, that's D A I R underscore institute.  

I'm Emily M. Bender.  

Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.
