Mystery AI Hype Theater 3000

Episode 34: Senate Dot Roadmap Dot Final Dot No Really Dot Docx, June 3, 2024

Emily M. Bender and Alex Hanna

The politicians are at it again: Senate Majority Leader Chuck Schumer's series of industry-centric forums last year have birthed a "roadmap" for future legislation. Emily and Alex take a deep dive on this report, and conclude that the time spent writing it could have instead been spent...making useful laws.

References:

Driving US Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States

Tech Policy Press: US Senate AI Insight Forum Tracker

Put the Public in the Driver's Seat: Shadow Report to the US Senate AI Policy Roadmap

Emily's opening remarks on “AI in the Workplace: New Crisis or Longstanding Challenge” virtual roundtable

Fresh AI Hell:

Homophobia in Spotify's chatbot

Stack Overflow in bed with OpenAI, pushing back against resistance

OpenAI making copyright claim against ChatGPT subreddit

Introducing synthetic text for police reports

ChatGPT-like "AI" assistant ... as a car feature?

Scarlett Johansson vs. OpenAI


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

 Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.  

Emily M. Bender: Along the way, we learn to always read the footnotes. And each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. 

I'm Emily M. Bender, a professor of linguistics at the University of Washington.  

Alex Hanna: And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is episode 34, which we're recording on June 3rd of 2024. And I regret to report that the politicians are at it again, namely Senate Majority Leader Chuck Schumer, who is also head of the Senate's AI Working Group. 

Remember his series of quote 'AI insight' forums last year? Dozens of experts, plus tech CEOs, were invited to discuss topics like privacy issues, copyright, and 'guarding against doomsday scenarios.' His team also spent the fall in private briefings, including one that was officially confidential. 

Emily M. Bender: And, as one does, he's now out with a report outlining what he sees as the most important policy needs for our allegedly inevitable future with large language models. 

But as you might expect from a document that starts with the words, "driving US innovation," this report is yet another example of misplaced priorities that perpetuates hype and serves tech companies far more than it serves anyone else. So that's our artifact for today. Let's pull it up.  

Alex Hanna: This report was really universally panned by anybody involved in this space, focusing on things like racial justice, uh, workers rights, um, rights of migrants, uh, rights of anyone that wasn't a tech CEO or really invested in these doomsday scenario preventions. 

So, I'm glad we're taking it on, but man, what a piece.  

Emily M. Bender: I know, this is a bummer to read. Do have to say that there's one or two places in there where it's like, okay, that's actually a good idea. And I sort of feel like the whole process, I don't think the policymakers really understood what they were getting into. 

And they basically just constructed a process that was in no way going to be informative, but managed to sort of get a few people who had some good ideas in there and hear some of what they had to say. So the few places where that comes through. But overall, it's a bummer. Um, and well, we'll get to it. 

But the first thing I want to do actually is point out the name of the file. Um,  

Alex Hanna: Oh yeah. I saw this. It's kind of a roadmap. It's roadmap underscore confidential underscore LD underscore 5.1 underscore edits hyphen 1:10 PM. Like big 'final, no, this one's final, no, no, no this one's final dot docx.'  

Emily M. Bender: Yeah. And it's still confidential. 

Like somebody didn't--and I assume we only have access to this PDF because that's the one they decided to put up, right?  

Alex Hanna: Incredible stuff.  

Emily M. Bender: Yeah. Yeah. 1:10 PM on May 14th. Okay. So this is dated May, 2024, uh, and the title: "Driving US Innovation in Artificial Intelligence: A roadmap for artificial intelligence policy in the United States Senate." 

And like, just the fact that the idea is, innovation is--and I think that Schumer's quoted all over the place talking about 'the North Star' of this--is just so far off from what we need. Um, and I want to I want to put the shadow report just sort of in the common grounds. I'm going to keep referring to it. 

Um, so that must be this one. Yes. So there's a great reply that came out really rapidly. Um, also May 2024. Um, put together by this group of people, Accountable Tech, AI Now, the Center for AI and Digital Policy, Climate Action Against Disinformation, Color of Change, Economic Security Project, Electronic Privacy Information Center, Friends of the Earth, Just Futures Law, Open Markets Institute, the Surveillance Resistance Lab, Tectonic Justice, and the Workers Algorithm Observatory at Princeton University. 

Um, so those folks got together and, um, put together this great response. And one of the sort of main messages that I took from this is they're basically saying, not only did Schumer and his crew basically ignore a decade worth of research that's pointing at the actual, um, problems, but also they basically in spending a year doing this, just kind of wasted time. 

And the, the tech CEOs, you know, got a, just a big gift there with the Senate twiddling their thumbs.  

Alex Hanna: Yeah. And they have a really nice roadmap where they compare other regulators. So one thing they do is that they look at the, uh, parallel actions in, uh, the EU. They also point to some folks who we talked about on this podcast before, uh, different regulatory agencies in the US like the FTC, DOJ, CFPB, the Equal Employment Opportunity Commission. 

Um, and they point out things where other folks have jumped into action. Meanwhile, the Senate is twiddling its thumbs, meeting with Elon Musk, uh, saying that we must move quickly. Um, uh, and even though, you know, it's, it's a big, it's a big feature of 'somebody's got to, somebody has got to do something about this.' 

And then really just at the end of this dog and pony show, that was the insight forums, going ahead with a very industry-friendly report and roadmap. Well, not even a roadmap. It doesn't even have kind of benchmarks of what to do. It's really just having a series of these things with, uh, very, um, token invitations to certain types of civil society, union representatives, racial justice organizations, uh, workers rights organizations. Um, but really nothing to show for it.  

Emily M. Bender: Yeah. So of course the introduction dives in with some real hype. Um, so the second sentence, "AI's capacity to revolutionize the realms of science, medicine, agriculture, and beyond; the exceptional benefits that a flourishing AI ecosystem could offer our economy and our productivity; and AI's ability to radically alter human capacity and knowledge." 

So sorry, that was actually a list of noun phrases. That was the end of the first sentence talking about "the profound changes that AI could bring to our world." And so clearly this is, this is not a group of people who are starting from thinking about how do we protect people's rights in the face of the massive concentration of data, the massive use of compute and its effects on the environment, and the sort of exploitative labor practices that are being hidden under this sheen of, of AI. 

Like if this is where they're starting, they're not concerned with that. And that's a big problem.  

Alex Hanna: Yeah, completely. And I mean, the second part of this paragraph really flags that because they say, "We each recognize the potential risks AI can present, including altering our workforce in the short term and long term--" which is just, really, an understatement.  

"--raising questions about the application of existing laws and an AI enabled world--" And I want to say, I want to foreground that many of the kind of discussions of enforcing existing laws basically have to do with the discrimination and the disparate impact things. They're like, well, we already have laws in that. So we're not going to, we're not going to focus on that--yeah but they're not actually being enforced with regards to AI tools. 

Uh, and then where it really, you know, really, what they really want to talk about, "changing the dynamics of our national security and raising the threat of potential doomsday scenarios."  

Emily M. Bender: All right, so remember when that ridiculous AI pause letter came out and we pushed back and said, this is a distraction, don't go this way. And some other folks were saying, no, no, no, it's getting the attention of the lawmakers. You should build on that attention and do something good with it. Well, this is what happened with that attention. Right. It got, you know, this and the fact that we now have the, um, one of those AI doomers at NIST heading an AI safety working group. 

Like it was bad to put that out in front as what we're worried about here. Um, okay. Um, I think there's nothing really that much more to say except that this, "We resolve to bring leading experts into a unique dialogue with the Senate."  

As you said in the intro, Alex, they found some experts and they also talked to a bunch of CEOs. 

Alex Hanna: Yeah, we talked about this when we had Justin Hendrix on the program, who is the head of, um, and the founder of Tech Policy Press. And you know, because they didn't basically have any place where they're reporting who's invited to these things, Tech Policy Press had this, you know, fantastic tracker where it showed, um, who was actually invited to these things. 

And the very first forum, it was nearly all of the tech industry. With the, and some of the motion picture, uh, they had the Motion Picture Association, uh, one unionist, that's Randi Weingarten of the American Federation of Teachers. Um, three civil society organizations, one of them being Tristan Harris of the Center for Humane Technology. 

Um, they also had Liz Shuler there, uh, from the AFL-CIO and also Meredith Stiehm from the Writers Guild, and then our friend Deb Raji, um, who stuck out like a sore thumb in a crowd of, you know, bizarro, you know, tech CEOs.  

Emily M. Bender: I'm not finding the list of forum one attendees here, but yeah, it's, it was clear from the beginning that this was really, you know, like that first one, especially, wasn't that one where we were saying like, oh yeah, basically they just wanted to get their, you know, take selfies with the, uh, the tech celebrities. 

Alex Hanna: Yeah. It was basically mostly tech CEOs and then Deb and three union leaders. Um, and then, which I think, which, which was hilarious and sad. Um.  

Emily M. Bender: I feel bad for Deb. I mean, she, she was brave and I'm sure she was a fantastic contributor there, but like, it sounds like an awful experience.  

Alex Hanna: Yeah, completely. Well, let's get into it. 

So this report has nine parts. All of them basically, um, corresponding to the fora. "Inaugural forum; um, supporting US innovation in AI; AI in the workforce; high impact uses of AI; elections and democracy; privacy and liability; transparency, explainability, intellectual property and copyright; safeguarding against AI risks; and, national security."  

Yeah. So I, so I don't have anything until we start getting into the first one, which is--but if you have anything you want to cover before getting into that, I mean.  

Emily M. Bender: Well, this was bugging me. So this is, um, this is still in the introduction. So, "In each forum, our aim was to include representation from--" And the first bullet point here is, "Across the AI ecosystem, encompassing developers, deployers, and users of AI from startups to established companies." 

Alex Hanna: Mm.  

Emily M. Bender: There's a glaring hole there, right? What about the people it's being used on?  

Alex Hanna: Yeah.  

Emily M. Bender: Like that--and the fact that that was missing really bothered me. And again, coming back to the shadow report, if they had paid attention to the decade of research, then that would have been front and center. Um, and then the other thing here is, uh, "helped inform a policy roadmap." 

Um, and of course, as you're pointing out, this is, this document doesn't really feel like a roadmap, but also the, the folks behind the shadow report said yeah, we don't need a roadmap at this point. We've got that. We need policy.  

So.  

Alex Hanna: Yeah. Yeah. I mean, it is high time to start making some legislative proposals, right? 

Emily M. Bender: Yeah.  

Alex Hanna: Um, yeah.  

Emily M. Bender: Okay. So you said, um...  

Alex Hanna: So yeah, the first one, like, like we were saying, which is about innovation. And, and so the only commitment that they've got, so the first, the first one here is, uh, the first point of this is "supporting US innovation in AI." So if you go down in the doc, that's the first one, basically what they guarantee here is, you know, they want to, with advisement from the National Security Commission on Artificial Intelligence, uh, to, uh, "provide $32 billion per year for non defense AI innovation."  

So kind of the most concrete thing that comes out of this is a number, uh, to give to quote 'AI innovation.' And of this, they're naming, um, they name, uh, the Department of Energy, Department of Commerce, the NSF, uh, NIST, National Institutes of Health, and NASA, um, and they want this basically to go to biotech, computing, robotics, um, and then they give this kind of, they said, "Foundational trustworthy AI topics such as transparency, explainability, privacy, interoperability, and security." So very much focusing on this on a very, very technical level of any type of, um, focus of, um, quote unquote 'innovation.' 

And then they also want to then direct it to the, um, the recently passed CHIPS and Science Act, um, in which they effectively want to, um, you know, what, this is effectively a way of saying, well, we want to find a way to produce more GPUs stateside. And so we don't have to rely on, um, on mostly Taiwanese production here. 

And so then they say, "Funding as needed... specific to the design and manufacture of future generations of high end AI chips," um, dot, dot, dot, that can be implemented domestically.  

So--  

Emily M. Bender: Alex, do you know what DOC stands for? I see lots of government agencies here. And that was one that I didn't recognize. 

Alex Hanna: Department of Commerce. Yeah, it's, it's, it's cited above in the report. Yeah.  

Emily M. Bender: Thanks.  

Alex Hanna: Yeah.  

Emily M. Bender: So there was one thing in this part that I actually thought, this is good. And that's, um, uh, okay, first of all, it's after this terrible thing that we will get into in a second. But the one sort of good thing in this part was, "Funding for AI efforts at NIST, including AI testing and evaluation infrastructure." 

That part's good, but not the US AI Safety Institute. So the existing work at NIST, before they ended up having to house the safety nonsense, they were actually doing some pretty good stuff with their risk management framework. Um, and seeing that supported seemed like, okay, that's a good idea. Um, you know, there are a few needles in the haystack, but just above that, they want to fund a series of AI Grand Challenge programs, "such as those described in Section 202 of the Future of AI Innovation Act--" It's a Senate bill. "--and the AI Grand Challenges Act," another Senate bill, "drawing inspiration from and leveraging the success of similar programs run by DARPA, DOE, NSF, NIH, and others like the private sector XPRIZE, with a focus on technical innovation challenges in applications of AI that would fundamentally transform the process of science, engineering or medicine and in foundational topics and secure and efficient stor--software and hardware design."  

Um, which is like, okay, so these people actually want to fund something like the, uh, Turing, what was it? The Nobel Turing Challenge that we were talking about a couple weeks ago. It's just, it's so ridiculous. And I have to lift up a comment from Abstract Tesseract in the chat, "American AIdol."  

Alex Hanna: Yeah, I mean, they're really focusing. I mean, given that they're making references like the XPRIZE, I mean, it is, when we say that Chuck Schumer is like an Elon Musk fanboy, I mean, it is very, it's not, it's not hyperbole. 

I mean, these things make their way into this. Um, so they've got other stuff in here. I mean, the, and they're making, as you mentioned that, that like the Nobel Turing. 

Emily M. Bender: And here they say "autonomous laboratories." 

Alex Hanna: Um, yeah, so they say in this, in this point, "Supporting a NIST and DOE test bed to identify, test and synthesize new materials to support advanced manufacturing through the use of AI." 

So that is kind of an oblique reference to DeepMind's, um, GNoME project, a materials investigation project, which we talked about a few, uh, weeks ago, in which some material scientists looked through a bunch of these materials and said, well, these aren't actually helpful or useful. They're kind of trivial formulations. 

Then autonomous laboratories, which, you know, the, the AI capital-S Scientists, um, and then "AI integrations with other emerging technologies, such as quantum computing and robotics." 

And it's just kind of throwing stuff at the wall, just chasing this. And I, and it's just wild that this is, of these insight forums, you know, there's the most amount of time granted to focusing on innovation. 

Emily M. Bender: And that little throwaway is like "providing local election assistance funding to support AI readiness--" Not sure what that means. But, "--and cyber security through the Help America Vote Act election security grants."  

Okay, um, I, assuming this isn't, um, assuming this is real election security and making sure that people have access to their polling places and not the other nonsense. 

Okay, fine. Like, every once in a while there's something in here that seems reasonable. Um, but then there's stuff that's frightening. So, "providing funding and strategic direction to modernize the federal government and improve delivery of government services, including through activities such as updating IT infrastructure to utilize modern data science and AI technologies and deploying new technologies to find inefficiencies in the U.S. code, federal rules and procurement programs." 

This sounds to me like the sort of automating austerity stuff that we keep seeing too. Um, could be otherwise. Uh, "smart cities and intelligent transportation system technologies." Um, uh, smart cities sounds like, uh, luxury surveillance, um, intelligent transportation system technologies, I, you know, there's certainly room for certain kinds of automation in handling, like, you know, making sure the public transit runs more smoothly maybe, but I'm not going to call that AI, you know.  

Alex Hanna: Mmm, yeah.  

Emily M. Bender: But yeah. 

All right.  

Alex Hanna: Let me launch into the next part of this, which is pretty much wholly on national security. So--  

Emily M. Bender: Right but national security gets its own whole topic. So we have to see it twice.  

Alex Hanna: Yeah. And it appears here for some national, so the "National Nuclear Security Administration testbed and model evaluation tools. Assessment and mitigations of chemical, biological, radiological, nuclear AI-enhanced threats by DOD, DHS, DOE." 

Um, increased, um, funding for DARPA's work, I'm trying to see, uh, uh, "development and deployment of Combined Joint All-Domain Command and Control and similar capabilities by DOD," and et cetera, et cetera. I mean, it goes on, I mean, it's mostly a big chunk on this. Um.  

Emily M. Bender: This was the most worrying part to me, "Trustworthy algorithms for autonomy in DOD platforms."  

Alex Hanna: Right.  

Emily M. Bender: And thinking back to the, when we had Charlie Jane Anders and Annalee Newitz on the show and Charlie Jane made the brilliant point that, um, a time bomb is an autonomous weapon.  

Right. Anything, right?  

Um, so yeah, yeah. Um, very nervous making.  

Alex Hanna: Yeah.  

Emily M. Bender: And also "reducing silos between existing data sets and make DOD data more adaptable to machine learning and other AI projects." 

Um, don't love that, you know, um.  

Alex Hanna: I mean, there's, you know, it is the federal government. So you expect a certain amount of, of course, discussion of defense, but this really reads as, I mean, enabling sufficient types of stakeholders, or I shouldn't call them stakeholders, uh, basically enabling huge public private partnerships with AI oriented weapons manufacturers. 

And so, I mean, it's worth pointing out that one of the, you know, partners that has appeared consistently, um, with, um, uh, in this has been Palantir. Uh, they were invited to the very first AI Insight Forum, Alex Karp, their co-founder, um, I'm trying to see who else. I also think they invited, uh, some folks from, um, Booz Allen Hamilton. 

Um, I'm just going through the list here. I'm like, who else is in the mix, um, that appears consistently, um, but very little, if any, discussion of civil liberties, any discussion at all of surveillance, discussion of, I think maybe one cast-off side discussion of free speech and the First Amendment. 

Um.  

Emily M. Bender: But in a very strange way, I think we'll get to that one.  

Alex Hanna: Yeah.  

Emily M. Bender: Yeah.  

Alex Hanna: Yeah. I just wanted to say Palantir got invited twice. Eric Schmidt got invited a few times. They did invite someone from the ACLU, um, on the same panel as, uh, both Eric Schmidt and Alex Karp. Um, and then, um, you know, members of the Air Force, Jack Shanahan, um, and, uh, and then a whole slew of defense tech people. 

So, yeah, if you have to be on the same panel, if you're the ACLU and have to be on the same panel with a bunch of military tech folks who are getting their coin from huge government contracts, yeah, maybe rethink that and actually how much space you're affording to those people in those rooms. 

Emily M. Bender: Absolutely. And I want to point out mostly a note to myself that, um, where we started talking about leveraging public private partnerships here, this is now popped back up a level. "Furthermore, the AI working group," and we're still on the initial sort of inaugural forum. So this isn't specifically about defense spending anymore. 

It's just that, you know, that a lot of it's going to be, um, and this bullet point still under the sort of furthermore thing, um, "the AI working group encourages the relevant committees to address the unique challenges faced by startups to compete in the AI marketplace, including by considering whether legislation is needed to support the dissemination of best practices to incentivize states--" 

And then it's just like all the stuff that we have to make sure they can compete. And the last bullet is "a report from the Comptroller General of the United States to identify any significant federal statutes and regulations that affect the innovation of AI systems," which sounds to me like that person, if they do this report, is going to go find every last thing we have that actually protects privacy and civil liberties and say, well, this gets in the way, this gets in the way. Right.  

Alex Hanna: Yeah.  

Emily M. Bender: We want that to get in the way.  

Alex Hanna: Yeah. And I mean, this one was also funny. The second part of that, which was, "including the ability of companies of all sizes to compete in artificial intelligence." 

So, you know, it's, it's kind of this maybe this cast off sort of statement. We don't want this to accrue to a few big tech players. We want everyone to have AI.  

Emily M. Bender: Yeah.  

Alex Hanna: I'm going to take, hold on. I know we're live, but like, man, my cat is really yowling so Christie, Christie edit this out on the podcast. Unless it's funny. 

I'll be right back.  

Emily M. Bender: All right. It's a cat break. I am broadcasting from a different location while my office space upstairs is actually being worked on. So there are no cats happening in my space today. Um, so we have extra cat attention. Here comes the kitty cat. Um, and. All right. Now that Alex can hear, Alex, we got to pick up the pace here because that was just forum one of nine. 

Alex Hanna: Well, lucky, lucky for us, they basically gave short shrift to everything else. So the next one is on AI and the workforce and I, you know, uh, this is one. So this is two pages long and gosh, I just hate everything about this. So they say here, um, this is the one where they invited kind of the most workers' advocacy organizations. 

Um, they say, you know, you know, "During the insight forums, there was wide agreement that workers across the spectrum, ranging from blue collar positions to C suite executives, are concerned about the potential for AI to impact our jobs."  

It's going to affect everyone. Okay. Well, put some proposals on the table to replace CEOs with AI tools, and then we'll talk. 

And so then they go on and they say, "Therefore, the AI working group encourages efforts to ensure that stakeholders, from innovators, employers to civil society, unions and other workforce perspectives, are consulted as AI is developed and then deployed by AI users, end users."  

Um, the thing that really sticks in my craw is the third point, which says, "Development of legislation related to training, retraining and upskilling the private sector workforce to successfully participate in an AI enabled economy." And so this, this is really a watchword, this kind of reskilling and upskilling discourse, that tech people love to use, that tech bros love to use. 

Um, and especially because it's basically like a foregone conclusion that these things can do your job better than you can. And so just seeing who is, who's invited to this forum, uh what the third forum was, um, was this the second forum or the third forum where they talked about the--  

Emily M. Bender: I think it was the second forum.  

Alex Hanna: Okay. Yeah. I'm seeing like who they, yeah. Who is it in the, in the, yeah. No, the second one. Oh, the second one is innovation. The third one. Yeah. So they invited here, um, they invited like a real smattering of people, so they invited a few union heads, including the head of National Nurses United, um, the IBEW, which is the electrical workers. Um, the secretary treasurer of UNITE HERE, um, and also the legislative director of United Food and Commercial Workers, but then they also invited, um, the CEO of Indeed, the director of education policy and programs at Microsoft, the senior managing director of Accenture, um, and then the CPO of MasterCard, and the director of economic policy studies at the American Enterprise Institute. 

So you have kind of, and then they also have a lobbyist, from a lobbyist organization, the Information Technology and Innovation Foundation. I believe that's a lobbyist organization. Um, so I'm just, I'm just kind of thinking, what, what are you, what are you actually getting done here?  

Um, but this kind of idea of reskilling is a very common thing that certain types of labor economists really like to talk about, you know, effectively with automation, you have such growth of productivity that you can effectively replace, uh, a certain number of workers, and then they need to be reskilled to participate in the economy.  

And it's this thing that also goes hand in hand with some of the kind of pipe dreams of UBI, the kind of perverse notion of universal basic income that Sam Altman and folks like Andrew Yang have really talked about, but really with a focus, like a pretty perverted focus, on what UBI is or what it should be.  

Emily M. Bender: Like Altman saying recently that everyone should have access to universal basic compute.  

Alex Hanna: Oh, gosh. Yeah. Well, that's, yeah. I mean, I think it really misunderstands what the notion of UBI is and ought to be as a, you know, it's not, it shouldn't, I mean, it's, it's more about what, what, what are you setting as a floor of, of income that people can effectively access, not as a feature of kind of losing one's job or being replaced at one's job, but more of, what are the ways that you're going to have, you know, and certain kinds of experiments have shown different ways of radiating income, um, you know, to supplement certain types of income sources.  

Um, but this reskilling kind of meme, I want to say, is incredibly frustrating to, you know, like, continue to see. One, because these things are not going to do a good job of replacing these other people, but people are going to get really fucked over and probably not going to be reskilled and reenter the workforce in any kind of meaningful way in which they were employed prior to that. 

Emily M. Bender: No, they're going to end up with, with gig work as the only thing they can do. And yeah--  

Alex Hanna: Yeah.  

Emily M. Bender: Yeah. More precarity.  

Alex Hanna: And the rest of this is the, the, the next two points are just frustrating. So the, the next one, "Exploration of implications and possible solutions to the impact of AI on long term future of work as increasingly capable general purpose AI systems--" And this case, I think, is the only place in this doc that 'general purpose AI' actually, um, appears. So that makes me feel like it is, uh, effectively cribbed from OpenAI's copy, which is, you know, 'GPTs are GPTs.'  

And then, and then the next, the next sentence, which is really a turn, which is basically it's, "Consider legislation to improve the US immigration system for high skilled STEM workers in support of national security and to foster advances in AI across the whole of the society." 

And so US immigration policy is, you know, of course, a shit show. They're effectively saying, well, maybe because of AI, what we need to do is effectively take the Canadian strategy of immigration, um, and that means specifically instead of promoting policies that would say, um, you know, uh, that are focused on family, uh, unification or reunification or not splitting up families, which, you know, historically prior to maybe the last 10 years was more oriented towards that. Uh, although I'm not an immigration policy person, but it had been a bit more progressive prior to the past, you know, pretty, pretty much in the post 9/11 era.  

Um, and then moving more towards, we need more STEM workers. So for national security purposes.  

Emily M. Bender: That is, that is so infuriating to basically say, we're going to, uh, there's a certain kind of immigrant who's the right kind of immigrant. 

And right now it's high skilled STEM workers, but also at the same time, the number of students from China who are coming here to study STEM things who end up with like extra scrutiny or locked out of the country or unable to leave once they're in because of all of this weirdness with respect to China in particular. 

Like it just, the whole thing is so frustrating.  

Alex Hanna: Yeah. 100%. 100%. Yeah.  

Yeah. Um.  

Uh, okay. So that's, that would happen. And so. 

Emily M. Bender: Yeah. All right. So I'm taking us down to the next, uh, top level heading, which is "High impact uses of AI." Um, and I, I actually liked the first sentence here on first read anyway. So, "The AI working group believes that existing laws, including related to consumer protection and civil rights, need to consistently and effectively apply to AI systems and their developers, deployers, and users." 

Yes, apply the existing laws. Um, but then they talk about how, so further down the paragraph, "We encourage the relevant committees to consider identifying any gaps in the application of existing law to AI systems that fall under their committee's jurisdiction, and as needed, develop legislative language to address such gaps. The language should ensure that regulators are able to access information directly relevant to enforcing existing law, and if necessary, place appropriate case by case requirements on high risk uses of AI, such as requirements around transparency, explainability, and testing and evaluation."  

It seems to me that none of that's necessary. That, what the FTC and EEOC and the DOJ and, was it the CFPB, the other one?  

Alex Hanna: Mm-Hmm.  

Emily M. Bender: They said there's no AI loophole. They're right. Right. Just because somebody's using a certain kind of automation to do something does not give them the ability to skirt the laws. And if the laws require transparency into things and the automation is not transparent, then the automation is not legal. Period. 

Alex Hanna: Yeah, and the way this is framed too is, is, is, I mean, it's, I, what really kind of pisses me off about this one is that they really are, they're saying that, well, we're already taking care of this, you know, where they're saying, well, you know, there's existing laws and, you know, they say, "the AI Working Group acknowledges that some have concerns about the potential for disparate impact, including the potential for unintended harmful bias." 

And it's what a, what a way to hedge, you know.  

Emily M. Bender: 'Some have concerns.' There are people out there. We're going to, um, sorry, I've got my two copies of the PDF not quite lined up. Yeah. I had that highlighted too. Some, some of you are worried about this.  

Alex Hanna: Some of you, some of you care about racial justice, and-- (laughter)  

Emily M. Bender: Yeah.  

Alex Hanna: And we should--  

Emily M. Bender: Nothing else is hedged in this way, right? 

Alex Hanna: Yeah.  

Emily M. Bender: The, the X risk stuff isn't hedged in this way, but, um, yeah, no, that, that was really, really appalling. Yeah.  

Alex Hanna: Yeah.  

Emily M. Bender: Yeah.  

Alex Hanna: It's, it's really, it's really some nonsense. Um, but then in the, in the, in the periods here, there's, there's, they, they, it kind of bounces around and what's, what's in, what's used in the same breath is, is kind of fascinating. 

And it speaks a lot to, I think, maybe how chaotic this entire thing is. So this is forum four, which was a fascinating forum to look at, because not only did they invite the CEO and founder of Clearview AI, they also included Meg Mitchell, who we had on last time, they included the head of the National Fair Housing Alliance, the head of the National Urban League, uh, Arvind, uh, Narayanan from Princeton, who, uh, him and, um, Sayash, um, also, uh, are publishing their AI Snake Oil book, Julia Stoyanovich, um, and Surya Mattu, who is a data journalist. 

And so it's, and Cathy O'Neil, author of, um, of, um, uh--  

Emily M. Bender: Weapons of Math Destruction.  

Alex Hanna: Thank you. I saw the book in my brain and then I couldn't say the words. And so it's like this really, really like, are you going to get, I mean, did they want a cage match or something? Because it's sort of like, well, this is what's the highest risk of things, right?  

Emily M. Bender: And also Epic here. This is the health data Epic, and not the Electronic Privacy--  

Alex Hanna: That's right. Yeah.  

Emily M. Bender: --Center. Right.  

Alex Hanna: Different Epic.  

Emily M. Bender: Different Epic. Very different Epic. Yeah. Um, yes. So, um, let's see. Can you take us back to where we were?  

Alex Hanna: Yeah. And this, so they had a few things. So they said, "The working group supports and develops, uh, supports the development of standards of use of AI in our critical infrastructure, uh, and, um, encourages the relevant committees to develop legislation to advance this effort." So I'm assuming that means things like the electric grid, um, and, and telecom security.  

Uh, they also have "Encourages the Energy Information Administration to include data center and supercomputing cluster energy use in their regular voluntary surveys." 

So this is like, oh, so you have no mention of, of climate change or energy usage of AI. Okay. But you could, you know, if people want to fill out the surveys, they can say how much their data centers are using here.  

Emily M. Bender: Yeah. Voluntary surveys.  

Um, in the same part, they, uh, it's in their "furthermore," I don't understand why these things are like broken down like this, but, um, they also in this forum are talking about the very difficult to discuss issue of child sexual abuse material. 

So the abbreviation is CSAM, and they say "Develop legislation to address online CSAM, including ensuring existing protections specifically cover AI generated CSAM." That seems like an important thing to be working on. I am a little bit nervous that, um, you know, I guess we have something called KOSA, which keeps being proposed, which is going to be a bit of a privacy disaster and like figuring out how to do the legislation so that you are appropriately constraining CSAM without, um, getting into everybody else's business is difficult and, uh, they don't have a great track record, but yeah, it's an important issue that should have been, I'm glad it was called out here. 

Um.  

Alex Hanna: Yeah.  

Emily M. Bender: So, yeah. Oh, this is the one. Alex. (laughter)  

Alex Hanna: Oh, where are you? Oh, yeah, this is, yeah. So this is the, in this statement again, this is under, this is under "high impact use cases." So, it goes, talks about the opportunities, the risks of AI in the housing sector, professional content creators, CSAM, but then also at the end of page 12, "continue their work on developing a federal framework for testing the development of autonomous vehicles across all modes of transportation to remain at the forefront of this critical space. This effort is particularly critical as our strategic competitors, like the Chinese Communist Party, CCP, continue to race ahead and attempt to shape the vision of this technology."  

And I, I have, I've been reading a lot of federal documents, uh, over the past couple of years, and maybe it's the first time we've seen the Chinese Communist Party called out by name like this. 

So I just thought, man, what a blast from the past. Are we, we're just back to, we're back to the Cold War. 

Emily M. Bender: I'm pretty sure this came up in the congressional hearing that I was a part of.  

Alex Hanna: Oh gosh. Okay.  

Emily M. Bender: Yeah. Like it's definitely a meme right now and yeah, super, super awful as a, as a discourse. Um, all right, I want to bring us down to this bit down here. 

Um, uh we're talking about AI in healthcare.  

Um, so, uh, "AI is being deployed across the full spectrum of healthcare services--" Shudder. "--including for the development of new medicines, for the improvement of disease detection and diagnosis, and as assistance for providers to better serve their patients. The AI working group encourages the relevant committees to:" Um, and then to this first point, un is "Consider legislation that both supports further deployment of AI in healthcare and implements appropriate guardrails and safety measures to protect patients as patients must be front and center in any legislative efforts on healthcare and AI." 

There's nothing about privacy. So the last part of that bullet point is, "This includes consumer protection, preventing fraud and abuse, and promoting the usage of accurate and representative data." What about privacy? Right. And then the next bullet point talks about, um, research. So, "Support the NIH in the development and improvement of AI technologies. In particular data governance should be a key area of focus across the NIH and other relevant agencies, with an emphasis on making health care and biomedical data available for machine learning and data science research while carefully addressing the privacy issues raised by the use of AI in this area." 

So privacy comes in, but only in the context of research data sets and not with like all the health information that's getting like sold everywhere. Um, so that, that really bothered me.  

Alex Hanna: Yeah. And we're already seeing kind of a nightmare in this, in the UK, right? With NHS and they, you know, they've contracted the NHS's, um, kind of data fusion to Palantir, um, and you know, there was a great report written by, uh, Connected By Data, which is, uh, kind of a nonprofit research organization, basically about the worry, all the privacy concerns in people's healthcare in these national data systems. Um, again, this is the one where the Epic, you know, Epic, big healthcare electronic records, uh, computing interface, um, which is used, I want to say almost everywhere. 

Based in Madison, Wisconsin. So we're, so our producer, Christie Taylor, and I know a lot of people employed at Epic.  

Um, and, and so there's, um, you know, like, you know, where are you focusing on this kind of you know, these kinds of, the dangers of having massive data harmonization databases, um, and the potential privacy leakages that come out of that. 

Emily M. Bender: Yeah. We are going to get to a privacy heading, um, but first we have exactly two paragraphs on elections and democracy.  

Alex Hanna: Um. This is wild. I can't believe that, that, I mean.  

Emily M. Bender: Yeah.  

Alex Hanna: Do y'all care about democracy? You are literally in the Senate. Maybe you don't. That's fine. Do what you want.  

Emily M. Bender: But, but the one part they do kind of care about is like related to them. 

So this first sentence here says "The AI working group encourages the relevant committees and AI developers and employers to advance effective watermarking and digital content provenance--" Yes, I want that. But the sentence isn't done. The rest of the sentence is, "--as it relates to AI generated or AI augmented election content." 

No, we actually need watermarking and digital content provenance everywhere.  

Alex Hanna: Yeah.  

Emily M. Bender: But they're only suggesting it with respect to elections. But I think we should get down into privacy and liability, um.  

Alex Hanna: Which is three paragraphs.  

Emily M. Bender: Oh. And, okay. Um, um, I got, I got a lot to rant about here--  

Alex Hanna: Yeah, do it.  

Emily M. Bender: --in these three paragraphs. 

Alex Hanna: Go on.  

Emily M. Bender: Okay, so. "The AI working group acknowledges that the rapid evolution of technology and the varying degrees of autonomy in AI products present difficulties in assigning legal liability to AI companies and their users." First of all, this stuff does not evolve. It is not biological. So I don't want evolution there, but also the systems don't have autonomy, right? 

People choose to rely on automation or not. The systems don't just have autonomy. But then finally, "present difficulties in assigning legal liability to AI companies and their users"? Not at all, right? If the companies are automating something, they are liable for what they're automating. And maybe there's this question of how do you allocate that between the AI companies and their users? 

But yeah, um, so, uh, they're encouraging, uh, considering whether there's a need for additional standards, um, and "to hold AI developers and deployables, sorry, deployers accountable if their products or actions cause harm to consumers, or to hold end users accountable if their actions cause harm, as well as how to enforce any such liability standards." 

Like, why do we need new laws here? Right. You're making a product. It causes harm to consumers. Don't we already have regulation about that?  

Alex Hanna: Yeah.  

Emily M. Bender: Like, yeah. Um, so then second paragraph, "The AI working group encourages the relevant committees to explore policy mechanisms to reduce the prevalence of non public personal information being stored in or used by AI systems, including providing appropriate incentives for research and development of privacy enhancing technologies." 

And my note here, when I got to the end of this paragraph is all caps. How about privacy laws, you cowards?  

Alex Hanna: Well, then they, well, then they, they do say, and I will say this is probably the best paragraph in the piece, which is, "The AI Working Group supports a strong comprehensive federal privacy law to protect personal information. The legislation should address issues related to data minimization, data security, consumer data rights, consent and disclosure, and data brokers." And this is probably the biggest amount of credit I would give to this AI working group, which again, because this is a mishmash of, you know, it's a bipartisan group. 

It's a mishmash of, um, pro business priorities, defense priorities, national security priorities, with a sprinkling of concern for civil rights and civil liberties. Um, the fact that they do say that we do need a strong federal privacy law is heartening. And, you know, that is something that has been in consideration. 

Um, but it's like, that should be the starting point. And that should have been passed yesterday.  

Emily M. Bender: Right, exactly. And, and there was like, put that first, if your heading is privacy and liability, um, none of this, like policy mechanisms to reduce the prevalence of non public personal information. Like, privacy law, like start there. 

Okay. Uh, next main heading is "transparency, explainability, intellectual property, and copyright." But actually, before we get into that, I want to raise up from the chat. WiseWomanForReal says, "Effective watermarking is impossible, isn't it?"  

Um, there's actually a lot of really good stuff that can be done with watermarking. 

There's a really cool paper from ICML last year, um, and I need to learn the first author's name. Um, but the person who was talking about it, I think it was the last author on a podcast, I want to say Tom Goldstein, talking about a method for watermarking; effectively, watermarking LLMs has to be done at the source. 

Also, there's interesting ideas about how do you come up with a way to vouch for online content. So, uh, basically, um, uh, certificates of authenticity for authentic content and so on. There's a lot that could be done. Um, yeah.  

So, okay. "Transparency, explainability, IP and copyright." Um, is there anything you want to go off about in this one, Alex? 

Alex Hanna: Uh, it's, we're getting to time. So I'm just trying to see what, what the biggest howlers are. I should have ranked these in some way. I just highlighted a bunch of things.  

Emily M. Bender: The first amendment stuff is here, but I want to, I want to just call out this phrasing, "consider developing legislation."  

Alex Hanna: Yeah.  

Emily M. Bender: You wasted a whole year with these forums when you could have been developing legislation. 

Alex Hanna: Yeah. There's a few things on data here. So, "Consider federal policy issues related to the data sets used by AI developers to train their models, including data sets that might contain sensitive personal data," AKA all of them, "or are protected by copyright," AKA all of them, "and evaluate whether there's a need for transparency requirements." 

Emily M. Bender: There is.  

Alex Hanna: There's, there's such a, there's such a, that's so much hedging there, I mean, it's, it's, it's this an incredible amount of hedging and you're like, well, you know, you could draw a line in the sand and say this, these things need to be disclosed and that data subjects effectively needs some kind of mechanism, um, or protection here. 

But that is, I mean, insofar as, you know, this, it's very, very weak. And the fact that they kind of mishmash transparency, explainability, intellectual property and copyright into one forum is indicative in and of itself.  

Emily M. Bender: And here's the one shout out to the First Amendment, right? "Consider whether there's a need for legislation that protects against the unauthorized use of one's name, image, likeness, and voice consistent with First Amendment principles as it relates to AI." 

So, uh, there's already some law about name, image, likeness, and voice, right? That's, that's, that's the existing regulation.  

Alex Hanna: Well, we're going to talk about Scarlett Johansson a little later. So, I mean, use of one's name, image, likeness, and voice are kind of considered what are rights to personality. Um, but those are also pretty, pretty weak. 

Like making a claim on rights of personality, unless you're someone who is Scarlett Johansson, is very hard for most people, um, to make claims to.  

Emily M. Bender: So you're saying it could be good to have stronger legislation here, but what's the First Amendment doing here? That it violates someone else's First Amendment rights if I say you can't use my voice? 

Alex Hanna: I think it's, I think it's more about, my understanding, it's more about it is whether one has the freedom to create something that may look like someone's, um, likeness. So it's, so it's related to speech in particular.  

Emily M. Bender: Yeah.  

Alex Hanna: For instance--  

Emily M. Bender: You want to allow political satire, right?  

Alex Hanna: Yeah. Yeah. So if you, so I, if I say, you know, Elon Musk is a smelly baby and, and then I do it, like I do like a, uh, a voice impression of Elon Musk, and I don't actually care enough to know what he sounds like to develop a, um, um, kind of an imitation of him. 

Um, you know, other than putting on my, say, Elon Musk voice, sound of a son of an emerald miner. Uh, you know, 'I'm Elon Musk. I'm a smelly baby.' You know, that is covered under the First Amendment.  

So it's sort of the conflicting notions of rights of personality and rights of free speech. Again, I'm not a lawyer, but yes. 

Emily M. Bender: Yeah. But also it's interesting that that's, that's the only place that the First Amendment comes up, I think. Um, okay. We do have to spend some time on this part. "Safeguarding against AI risk." This is where the doomer stuff comes in. Um, and there's a couple howlers in here. One is, um, let's see.  

"The AI working group encourages companies to perform detailed testing and evaluation to understand the landscape of potential harms and not to release AI systems that cannot meet industry standards." 

I'm like, what industry standards? Like, that's not a thing. Um, and, um, let's see. "The working group encourages the relevant committees to consider a resilient risk regime that focuses on the capabilities of AI systems, protects proprietary information, and allows for continued AI innovation in the U.S." 

Which was gross. Um, sorry. Um, they talk about short, medium, and long term risks, which is code for we're taking the doomers seriously. Um.  

Alex Hanna: Well, one thing they also talk about is the risk regime thing here, and they talk about what the risk regimes are. And this specifically is kind of an AI safety talking point. 

So they say, "Multiple risk regimes, potential risk regimes are proposed from focusing on technical specifications, such as the amount of computation or number of model parameters to classification by use case." Right, and so effectively they're, they're mashing up the former two, which amount of comp, like computation is effectively, you know, this is taken effectively from the FLI letter where, you know, you, you know, we shouldn't develop anything, you know, and so the number I think is whatever, I don't even know, there was like 10 to the 29, you know, flops, um, and then, or number parameters. 

Rather than the classification by use case, which is much more a civil rights, um, orientation. And that's what they did in the EU AI act. And that's, you know, and that's kind of the argumentation that I think has been focused on in state level like legislation that focuses on civil rights and surveillance. 

But the amount of computation stuff is just, you know, that's a complete AI safety, uh, you know, uh, doomer mechanism.  

Yeah.  

Emily M. Bender: Yeah. Yeah. All right. We are, we are getting to the point where we're not going to have time for Fresh AI Hell, but, um, I have to, I have to bring us to one thing here, um, which is, this is under the national security. 

So even though we did all that DOD stuff above, there's a whole thing about it down here. Um, and, uh, there's a whole bullet point here on, uh, "the significant level of uncertainty and unknowns associated with general purpose AI systems achieving AGI." And it ends with, "the magnitude of the risks that AGI development would pose and develop an appropriate policy framework based on that analysis." 

What a waste of time. Do not, um.  

Alex Hanna: There's also something great here about using AI for risk mitigation of space debris, which I just think is hilarious. Um, I mean, it's not funny. Space debris is an actual problem because we're just leaving so much shit in orbit. But also like using AI models to improve the management of it. 

I'm just like, I don't know, man. That seems like, it seems like you could just get up there and, and I, I mean, I'm not an engineer in this, you know. 

Emily M. Bender: I mean there might be some cases where pattern matching is helpful for like detecting stuff against maybe, maybe the radar systems have some gaps in them or something. 

So sure, yes, we could, we could apply some automation there and probably get some benefit out of it, but like, that's not AI. None of this is AI.  

Um, I want to pull out this one last bullet point here too, which is, um, "The working group encourages the relevant committees in collaboration with the private sector to continue to address and mitigate where possible the rising energy demand of AI systems--" 

Okay. That makes some sense. Why would you want to do that? Well, the rest of the sentence is, "--to ensure the U.S. can remain competitive with the CCP and keep energy costs down."  

 (laughter)  

Alex Hanna: Yeah. Yeah.  

Emily M. Bender: You know, what about the fact that we aren't powering down coal fired power plants because the data centers need so much electricity? 

That's the reason to be doing this.  

Alex Hanna: Yeah, totally.  

Emily M. Bender: Okay. So. Yeah?  

Alex Hanna: I will say the last, the last bullet point here is just, you know, I want to read it just because it's incredible stuff. Um, so the, the last bullet point, the second part of it, "as Russian and, as Russia and China pushed their cyber agenda of censorship, repression, and surveillance, the AI working group encourages the executive branch to avoid creating a policy vacuum that China and Russia will fill, to ensure the digital economy remains open, fair, and competitive for all, including for 3 million American workers whose jobs depend on digital trade." 

And I'm like, man. It's just, it's very, you know, just incredible Cold War you know, saber rattling, um, you know, the US has to maintain this bastion of, of, of, of policy freedom as if, you know, the policy, you know, um, you know, domain here is, is, hasn't been dominated by the few, you know, domestic players who are only focused on wealth accumulation and, um, you know, really fucking over workers. 

Emily M. Bender: Yeah.  

All right, Alex, I've got your AI Hell improv prompt.  

Alex Hanna: All right.  

Emily M. Bender: Um, turns out in AI Hell, they've got, um, they put a lot of energy into maintaining an ice rink so they can do winter Olympic games.  

Alex Hanna: Oh my gosh.  

Emily M. Bender: And you are the color commentary announcer as Chuck Schumer is doing pairs ice dancing with the CEO of Palantir.  

Alex Hanna: (laughter) Oh my gosh, amazing. 

Alright. (acting) Now we have up Schumer and Alex Karp. Alex Karp getting an incredible amount of training as we recall in his, in his college days at Humboldt studying with Juergen Habermas. And, uh, oh, and they're pairing. And, oh, Schumer's going for the triple axel. Oh, oh, and he does not land it. Oh, actually goes through the ice. 

Oh. Right. Oh, he actually got knocked down to the seventh layer of hell. Uh, looks like that pit fiend is going to throw him back up. The judges are really going to dock points for that. Karp now going for the double salchow. Nails it! Oh, oh, but his pick got stuck. And it looks like, oh, it's detachable. 

And it turned into a robot dog with a gun. And it hops off. We've never seen that one before, but I really remember that Henry Kissinger had a really good showing last year when he whipped out a thermonuclear warhead to finish his routine. 

Emily M. Bender: (laughter) So brilliant. Thank you. And to our audience, Alex does not know what's coming with those prompts. She is rolling with it brilliantly.  

Alex Hanna: Thank you.  

Emily M. Bender: All right. We've got six quick items for fresh AI hell, do you want to do this first one? Um, from someone named Redacted on X?  

Alex Hanna: Sure. So this is just in time for Pride month. 

Uh, so someone named Redacted says, "the new Spotify AI is homophobic." So they typed in "music for gay sex." And then the Spotify AI replied, "That's a spicy note you've hit. Let's turn it down a notch." And so then the user inputs, "music for sex": "Crafting a playlist for those intimate moments filled with sensual and smooth tunes. Swipe left to remove any songs you don't want as you continue refining your playlist." 

Yeah. So yeah, no gay sex for Pride month, unfortunately on Spotify.  

Emily M. Bender: And yeah, just so it's such an obvious double standard and it's like, this is all hard coded. Right? Like someone, someone made a decision here. I don't think this is even just like the biases of training data poking through. This looks hard coded to me. 

Alex Hanna: Yeah.  

Emily M. Bender: Um, okay. Uh, "Stack Overflow bans users en masse for rebelling against OpenAI partnership. Users banned for deleting answers to prevent them from being used to train ChatGPT." So Stack Overflow got into bed with OpenAI. Um, and the users, there's a thing where if your answer has a certain level of popularity or upvotes, you can't delete it. 

So people went in and started editing their answers in various ways to try to make them still be useful, I think, to people while reducing their value for training. And, um, of course they just got banned. But I, I definitely appreciated that, uh, that approach that active resistance seemed valuable.  

Alex Hanna: Yeah, absolutely. 

Emily M. Bender: You want this one?  

Alex Hanna: Yeah, this, so this one's on 404 Media, written by Jason Koebler. The title, "OpenAI, mass scraper of copyrighted work, claims copyright over subreddit's logo."  

And so this was basically about, there's a, I think this is r slash ChatGPT. Um, and um, very funny that, um, they messaged the moderators. 

And so then OpenAI finally filed a copyright infringement. Well, they didn't, I don't think they filed a suit, but they filed a complaint. Very ironic. Also kind of related to the past thing about Stack Overflow users doing data poisoning. Um, I was listening to this also from 404 Media, um, on their podcast, where they were talking about this incident in which, when Reddit, uh, closed their API, there were actually a lot of moderators that protested by basically, um, making some of the subreddits go dark or, um, establishing really wild rules, like you could only publish pictures of, um--  

Emily M. Bender: It was John Oliver memes, I think.  

Alex Hanna: Yeah. And so then, so it's really wild how, in getting in bed with OpenAI or Google or whatever, um, these content sites are really waging war on their users.

Um, just incredible.  

Emily M. Bender: They're a content site. The users are what's valuable.  

Alex Hanna: Exactly. Yeah.  

Emily M. Bender: Yeah. I realize I forgot to give the publisher and journalist for that one. So the Stack Overflow story was by Dallin Grimm, published on a site called Tom's Hardware.  

Um, I'm going to do these next two real fast so you can get the Scarlett Johansson one. 

Um, so more 404 Media reporting, uh, this time by Joseph Cox, May 17th. And the headline is, "Here's what Axon's bodycam report writing AI looks like." Um, so Axon is developing, um, a system that supposedly takes police reports based on body cam. Sorry, takes the audio from body cam footage and automatically generates a police report. 

And again, uh, 404 Media has a great podcast. And I listened to their story about this. And basically the idea was that, um, it gives a set of bullet points. You select the ones you want, you being the cop, um, and then it outputs the thing and then the cop is supposed to sign off on it. And it's just like, you know, police reports are already biased towards what the police want to be in the report, and here, um, it's just going to be even worse, right? 

So, uh, so reporting in The Verge on May 23rd by Umar Shakir. Um, headline is, "The Kia EV3 will have over 300 miles of range and a ChatGPT-like AI assistant." Why would you need ChatGPT in your car? Like, I don't even know. But I wanted to get to this one. Um, it's for you, Alex.  

Alex Hanna: Yeah. Well, thank you. So this, I mean, this one hit because this is effectively after the release of ChatGPT, was it 4o? Um, this is a statement from Scarlett Johansson, um, and this is being reported by Bobby Allyn from NPR. 

So there's a tweet and ScarJo says, "Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT-4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt my voice would be comforting to people." Um, and then she declines the offer. Uh, and then, uh, "Nine months later, my friends, family, and the general public all noted how much the newest system named "Sky" sounded like me. When I heard the release demo, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference."  

"Mr. Altman--" And this is incredible. "--even insinuated that the similarity was intentional, tweeting a single word, "Her," in referenced in the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human." 

Uh, and so this is, you know, for lack of time, you know, ScarJo is hopefully just going to, you know, sue the absolute shit out of OpenAI, uh, and just, you know, nuke them from orbit. Um, and the thing that surprises me about this whole ordeal is that you effectively screwed over someone who is known for being very litigious. 

She is very much probably one of the people best situated to, uh, sue, um, for rights of personality, um, who successfully sued Disney, um, which is arguably more powerful as a media entity than OpenAI, uh, and, um, just really fucked them over.  

And then, and then you also implicated yourself by tweeting "Her," and in interviews Sam Altman had said his favorite movie is Her. And there's this piece by Brian Merchant about kind of the obsession that, um, tech CEOs have with kind of dystopias, uh, and, and in particular, effectively, what's kind of like a relationship bot. Um, so just the kind of affective, libidinal relationship of, like, tech bros to the feminization of AI itself is fascinating. Hey, if you want to write a PhD on this, there's an idea.  

Emily M. Bender: Yeah, Alex is throwing out PhD topics again. 

So, back in form. But also, yeah, you know, pro tip, if you are stealing somebody's voice, uh, don't go on record linking the stolen voice to the movie that the voice came from or the idea for using that voice. Like it. (laughter) Yeah.  

Alex Hanna: Right. Just have a modicum of sense, but you know, that's, that's not what, uh, maybe that's what AGI is. 

AGI is something that has a lick of sense. Yeah.  

Emily M. Bender: Yeah.  

Alex Hanna: Uh, all right. Well, I think that's it. Wanna call it here?  

Emily M. Bender: I think we'll call it here. Yeah.  

Alex Hanna: Well, that's it for this week. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. 

If you liked this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify. And by donating to DAIR at dair-institute.org. That's D A I R hyphen institute dot org.  

Emily M. Bender: Find us and all our past episodes on Peertube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. 

That's twitch.tv/dair_institute. Again that's D A I R underscore institute. I'm Emily M. Bender.  

Alex Hanna: And I'm Alex Hanna. Stay out of AI Hell, y'all.
