Mystery AI Hype Theater 3000

Episode 22: Congressional 'AI' Hearings Say More about Lawmakers (feat. Justin Hendrix), December 18, 2023

Emily M. Bender and Alex Hanna

Congress spent 2023 busy with hearings to investigate the capabilities, risks and potential uses of large language models and other 'artificial intelligence' systems. Alex and Emily, plus journalist Justin Hendrix, talk about the limitations of these hearings, the alarmist fixation on so-called 'p(doom)' and overdue laws on data privacy.

Justin Hendrix is editor of the Tech Policy Press.


References:

TPP tracker for the US Senate 'AI Insight Forum' hearings

Balancing Knowledge and Governance: Foundations for Effective Risk Management of AI (featuring Emily)

Emily's opening remarks at virtual roundtable on AI
Senate hearing addressing national security implications of AI
Video: Rep. Nancy Mace opens hearing with ChatGPT-generated statement.
Brennan Center report on Department of Homeland Security: Overdue Scrutiny for Watch Listing and Risk Prediction
TPP: Senate Homeland Security Committee Considers Philosophy of AI
Alex & Emily's appearance on the Tech Policy Press Podcast

Fresh AI Hell:

Asylum seekers vs AI-powered translation apps

UK officials use AI to decide on issues from benefits to marriage licenses

Prior guest Dr. Sarah Myers West testifying on AI concentration


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

ALEX HANNA: Hello and welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 

EMILY M. BENDER: Along the way we learn to always read the footnotes and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington. 

ALEX HANNA: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 22, which we're recording on December 18th, 2023. And today we're talking about all the hearings the United States Congress has been having about what's next for quote "AI" in the US. These hearings, where our legislators invite various experts and heads of companies to explain how everything works and what we should do about it. 

EMILY M. BENDER: They're rich sources of hype, both the praising kind and the more critical but still exaggerated claims. And I should note that I was asked to participate in one of these back in October. I did my best there to bring a dose of reality to the proceedings. 

And we've got a guest today to help us examine the way these hearings can shape policy and public opinion. Justin Hendrix, editor at the wonderful Tech Policy Press, a nonprofit media outlet that focuses on how tech intersects with power, ethics, racism and democracy. Welcome Justin. 

JUSTIN HENDRIX: Thank you very much for having me. 

EMILY M. BENDER: Thanks for joining us and thank you um all folks online for your patience as we got started. Um I am going to start us off actually with an audio artifact this time um and this comes from a hearing in the House um. And full disclosure, it ruins the punchline a little bit to tell you all this ahead of time, but I feel very strongly about not subjecting people to synthetic media without their knowledge. So what we're going to hear first is Representative Nancy Mace reading some ChatGPT output. And then we're going to hear her owning up to having read ChatGPT outpu--output. 

The sound fades out a little in the middle because it was a lot of repetitive ChatGPT output and so we're giving you just a flavor. And here we go. 

[RECORDING] NANCY MACE: "The field of artificial intelligence is rapidly evolving and one of the most exciting developments in recent years...As we me--move forward we must also ensure that AI is used for the benefit of society as a whole. While AI has the potential to improve efficiency, increase productivity and enhance the quality of life, it can also be used to automate jobs, invade privacy and perpetu--perpetuate inequality. We must also work together to ensure that AI is used in a way that benefits everyone, not just a privileged few. In conclusion, the emergence of generative models represents a significant step forward in the development of artificial intelligence. However with the progress comes responsibility. 

We must ensure that AI is developed and used in a way that's ethical, transparent, and beneficial to society and the federal government has an important role in this effort. I look forward to working with my colleagues, on both sides of the aisle, um on this committee to ensure that the US remains a leader in the development of AI technologies. Thank you for your time and attention. 

Now before I yield back, I'd like to note that everything I just said in my opening statement was you guessed it written by ChatGPT, an AI. Uh the advances that have made--been made just in the last few weeks and months uh have been radical, they've been amazing and show the technology uh is rapidly evolving. Every single word up until this sentence was uh generated entirely by ChatGPT. And perhaps for the first time in committee hearing, I know Jake Auchincloss did a a statement on the floor a couple weeks ago, but I believe this is the first opening statement of a hearing generated by ChatGPT or other uh AI models." 

EMILY M. BENDER: All right. That was fun wasn't it? [Laughter] Um so when I came across that I was--it was before my hearing in October and I was kind of curious how these things work, so I was looking for an earlier one to watch. Um and that one I think had Merve Hickok in it. I thought okay great I'll go listen to that one and I listened to this opening statement and I kept double-checking, I'm like that's a Republican saying those things? 

Like what's going on? And then she gets the bit where she owns that it's not actually words that she wrote, and I thought how just like odd it is to me that people are willing to let synthetic text speak for them in any kind of impactful environment, um. And then listening to it again in preparation for the stream, it was clear to me first of all how repetitive it was, I missed that the first time, and secondly how foreign those words were to her and how hard it was for her to say them. But any any thoughts Justin from you on on that opening statement? 

JUSTIN HENDRIX: I think it's interesting, we've seen so many different law lawmakers across the country, also across the world who have essentially used this party trick over the last several months of taking either text that that ChatGPT has generated or using audio--synthetic audio of their own voice or in some cases even using video. Uh there was one Massachusetts lawmaker, I believe a state senator earlier this year who actually uh you know introduced a a piece of legislation uh that was at least partially written by ChatGPT or some AI chatbot. Um and we've seen you know in different parts of the world uh not only legislators but also judges who have utilized ChatGPT and other AI chatbots in making decisions uh and you know trying to discern between different legal arguments, um so you know it's not just uh Nancy Mace in this case, Representative Mace, but many others who--I guess they think this is interesting you know something that will get the attention of folks on social media or in the media. 

ALEX HANNA: Well it's just flooring about how much that--oh sorry Emily. 

EMILY M. BENDER: I was saying I hope one day we can look back on that and go, 'Wow, that was so 2023.' Like it'll just go out of style, that'd be great. But you were gonna say Alex? 

ALEX HANNA: I hope I hope so I hope it just becomes meme-ified, I mean but it's you know it's it--what floors me is that you know there you know--is there any other technology I mean where it feels like regulators are are are are just kind of walking advertisements for the technology itself? I mean would you I mean there surely there's a lot of lobbying that's going on but would you import or have something and say look at this, you know and and really play into the hype machine you know? Um so it's it's it's so wild how it just perpetuating that. 

EMILY M. BENDER: Yeah and you can say the word 'perpetuating.' [Laughter] 

ALEX HANNA: I can say perpetuate. It's because I use it a lot. 

EMILY M. BENDER: Yeah, yeah. 

JUSTIN HENDRIX: I would say it reminds me a little bit of the period of time when the selfie first emerged. Do you remember that? When when folks were uh uh first taking selfies, very uh you know--putting them out there, trying to kind of like be seen to take a selfie. There was the Ellen selfie uh at some award show. And then after that everybody wanted to do a selfie. 

EMILY M. BENDER: Yeah, yeah. 

ALEX HANNA: It reminds me of that US copyright case on the the monkey that had taken the selfie, and and uh whether whether monkeys could could actually hold copyrights, right. And they ultimately you know the judge ruled and the uh or--or the copyright office, I don't I don't know what entity did--ruled in the uh photographer who had given the monkey the camera's favor. But you know ruled definitively that a non-human entity uh could not hold a copyright, right. 

EMILY M. BENDER: Yeah, so want to pick up on from the chat here. Um Monkiju says, "Of all the types of speech one could generate, wouldn't say-nothing politispeech be some of the easiest? It's designed to not actually communicate anything, just pure vibes." 

But what was particularly interesting to me about this case is that the vibes were the wrong vibes for a Republican representative. Like that was that was jarring. All right before we get into the next artifact, I want to be sure to point out that it's not all hype up in Congress, that that there are some folks asking good questions, exploring good angles and what we're doing in our limited time together this hour is we're going after the hype.  

So we are we are painting a picture that is maybe a bit pessimistic because it's our job to go after the hype, and we just want to maybe shout out to those um staffers and representatives who are thinking hard about this and really keeping the people in the frame, and say we see you um and so that's why we're not ridiculing you. We're just going after the stuff that needs ridiculed. So the next artifact um comes from the uh charter document for the hearing that I was a part of. And we have three short paragraphs here, um giving background for that hearing on what AI is. Alex do you want to take us into the first couple paragraphs here, because they're kind of your bailiwick? 

ALEX HANNA: Yeah totally. I mean this looks a lot like the the kind of testimony we were reading from Yann LeCun the other day, but you know uh it's it's in the introduction given by um the uh House Science and Technology Committee. "Artificial intelligence refers to computer systems capable of performing tasks that typically require human intelligence, such as decision-making or content creation. 

The term AI includes a range of technologies, algorithms, methodologies and application areas, such as natural language processing, facial recognition, and robotics. Despite its recent popularity, AI is not a completely new technology. Quote 'Narrow AI,' (footnote one) or AI that targets singular things--" Amazing statement there. "--has been widely deployed for decades in various applications like automated warehouse robots, social media recommendation algorithms, and fraud detection in financial systems." 

So let's maybe pause there. Um there's a there's really interesting language here, just in terms of 'singular things,' uh and and and and the kind of variation of the colonizing maneuver that AI takes on NLP uh and robotics, even though those things have been uh uh kind of separate or or have much longer histories than than our current moment, right. 

EMILY M. BENDER: Yeah. Oh it drives me nuts when people refer to the ACL conferences, Association for Computational Linguistics, as 'AI conferences.' It's like no they're computational linguistics conferences, and language technology conferences. And there's plenty of reasons to be building that kind of technology that have nothing to do with the project of AI. Um so I think um yeah that's that's frustrating, um also uh "refers to computer system systems capable of performing tasks that typically require human intelligence."  

It's like why that framing? Well it's because they need it to be able to to claim artificial intelligence right. But we can certainly have um useful software useful tools um that do things  like um--well I don't love their examples here--um fraud detection in financial systems. Okay, right um that's that's a useful thing um I don't think it's helpful to frame it as something that "requires human intelligence." That doesn't that doesn't help us understand what the technology does, um and it doesn't help us build the technology better. 

ALEX HANNA: Yeah absolutely but that's the tie um that they always wrap into it. Um let's go to the next graph. So, "The term 'artificial intelligence' in quotes was first coined in 1955--" I think that date is wrong, wasn't it '56? Uh, "--by Emeritus professor--Stanford professor John McCarthy as quote 'the science and engineering of making intelligent machines,' end quote. Um since then the field progressed slowly until the quote 'machine learning' end quote (ML) approach was popularized in the 2000s, a shift enabled by the proliferation of data on the internet. Unlike older AI systems which were pre-programmed to follow set rules, ML uses mathematical algorithms to learn patterns and data, to make classifications or predictions. For example ML is the mechanism powering search engine results on Google, recommending new series to watch on Netflix, and the brain power behind voice assistants like Siri and Alexa." 

So our metaphors here are really getting really getting great here so uh the 'brain power,' of course here and then the kind of um you know the uh the sort of uh uh quote 'the science and engineering of making intelligent machines,' which is uh yeah itself uh you know. Well you--check out episode 20, and we tore apart the whole document, I mean that was more of an off-the-cuff remark rather than any kind of thorough definition.

EMILY M. BENDER: Yeah, absolutely. And I I think it's important that you call that 'brain power' there too. It's like voice assistants like Siri and Alexa are software programs, and they are useful to a certain extent and they fall down in other places. They don't have any brain power, I mean to the extent that they have brain power that's referring to the software engineers who are creating them, and the data workers who are creating the labeled data that is used in training.  

Um not ML um. And 'ML is a mechanism' is also a bit surprising to me. Okay third paragraph: "AI systems have led to a wide range of innovations with the potential to benefit nearly all aspects of our society and support our economic and national security." Don't need that like rah-rah hype. Right it's all about the potential, it's all upside. Uh, "Recognizing this development, Stanford researchers popularized the term quote 'foundation models' in 2021, highlighting these new models' foundational role for building next generation AI applications." 

Super frustrating. Um the folks at Stanford claim that they call it 'foundation model' rather than 'foundational model' because they don't want to assert that it is a strong foundation, like they sort of wanted to somehow keep highlighting that it might be a shaky foundation. I don't think they succeeded with like they think this this um--you know I really I think that the the term foundation models was an attempt at claiming academic ground that gave the Center for Foundation Models sort of the encompassing view of everything, um and I unfortunately they seem to have succeeded with that, and I think it um you know is detrimental. "Foundation models form the basis of quote 'generative AI,' models that can generate sophisticated writing, images, and other forms of content with minimal human input.  

Generative AI, including ChatGPT, has been one of the most noteworthy areas of advancement in AI." So we're talking about things being sophisticated, and advancing. "Underpinned by a type of AI called a large language model (LLM), ChatGPT is trained on a significant amount of text to understand and generate humanlike language." No it doesn't understand, but apparently our representatives don't understand that it doesn't understand. "LLMs are useful for a wide range of natural language processing tasks such as chatbots, language translation--" Sorry I've got Zoom in the way. "--and text summarization." So this is an interesting list to me. 

Um, LLMs are useful for chatbots if you just want sort of like the entertainment of chatting with something and you don't care about the uh actual truth of the content, and you don't mind if the content that comes back is biased or toxic and you also don't care about the uh exploitative labor and data theft and environmental impacts behind all of it. Um yes they can be used for text translation and text summarization, both of those are basically um text-to-text translation tasks. Um they are not necessarily the best fit technologies for these, they're going to be wrong some of the time. They're going to sound super fluent even when they're wrong. And you could probably do really good machine translation with something smaller and really good summarization with something smaller than a ChatGPT. Um, that's very inside baseball. I wouldn't expect um you know members of the House of Representatives to know that level of detail, but I needed to take issue. Anything else about this framing? 

ALEX HANNA: Thoughts on this Justin? 

JUSTIN HENDRIX: Yeah, I'd just volunteer that uh you know when I'm editing things at Tech Policy Press um and generally kind of poring over folks describing different uh functionality of artificial intelligence uh systems or large language models or what have you, it can be really hard sometimes to you know grab or or catch you know any instances where folks are either attributing human uh capabilities to these systems, uh or brain power for that matter--something I find myself uh talking to just lay people about quite a lot. Um this idea that the machine is is simply making predictions, you know, based on prompts. 

Um it's not got any kind of understanding or anything that even approximates understanding um built into it. And I do I do find that to be a very hard thing for for most folks to kind of get, the the kind of you know sense that you're having an exchange with this thing which appears to interpret what you're saying and respond in kind. Uh for most people that's good enough. 

EMILY M. BENDER: Yeah and that-- 

ALEX HANNA: I have a question Justin your--oh sorry go ahead. 

EMILY M. BENDER: No no go ahead, yours is more relevant, I can tell. 

ALEX HANNA: [Laughter] I have a question, Justin for you--sorry as Anna, my cat, is being very snuggly today. Um is--what do you find policy makers or legislators or regulators, when they say that, what's a how do you--what's kind of an effective response to them do you think? What's a what's a good way to counter? Because I know Emily and I faced this like in many guises. I'm  wondering how you handle that. 

JUSTIN HENDRIX: Oh gosh. I I think that you all are doing good work in this regard. Um I think that there have been some efforts uh by journalists and I'm thinking uh in particular of a project that uh Garance Burke at the Associated Press led, um-- 

ALEX HANNA: Yes. 

JUSTIN HENDRIX: --in kind of redoing the AP style uh style book you know recommendations around how to talk about artificial intelligence. You know, there are folks who are very sympathetic to some of your critiques here or your general mode of critique in this show, um and who put a lot of thought into thinking about about how to make sure or do your best to ensure that you're not sort of repeating uh hype or ascribing capabilities to these technologies that don't exist. And unfortunately exactly what you're doing here, which is painful you know sentence by sentence deconstruction of the semantics of AI hype. I mean that--I think unfortunately that that's that's sort of what it takes in this day and age um and it's hard to do so you know we're--we'll talk in a couple of cases I suppose more specifically about what some uh legislators are saying and doing and the types of statements they're making um but boy is it hard. You're you're dealing with a flood of material and a flood of discourse and dialogue, um. And a lot of it backed by billions of dollars in marketing, you know um and venture capital and the rest of that uh that wants to plump it up. And so it's hard. 

EMILY M. BENDER: Yeah, about a year ago I was talking with a journalist and I forget who um and she mentioned sort of offhand, and I thought that rings so true, that there's plenty of people who are being paid to hype up AI. None of us are being paid to counter that hype. Right. We're doing it as part of our work, I mean like this is--I consider this part of my day job but there are there's no um you know anti-AI hype lobbyists or marketing firms.  

And it's so it's it's a big uphill battle. 

JUSTIN HENDRIX: Well and you look at-- 

ALEX HANNA: And I wanna-- 

JUSTIN HENDRIX: Oh please. 

ALEX HANNA: Go ahead Justin. 

JUSTIN HENDRIX: Oh no I was just gonna say, you look at some of the uh dialogue that comes out of Silicon Valley. I'm thinking in particular about some of the manifesto or even kind of political activism out of firms like a16z, um Andreessen Horowitz, you know that basically say you know hey if you're if you're a naysayer, if you're not an optimist, you know um you're not only holding back AI but you know you're contributing literally to the murder of future humans uh who will not benefit from the advances in artificial intelligence that you know whatever you did to slow its advance and development uh and its I suppose eventual supremacy over everything, um you know you're you're you're holding us back. 

EMILY M. BENDER: That people could say such things [cross-talk] in this critical moment, we're dealing with climate change right? It's like that if we're worried about future generations and physically harming them, then all hands on deck for climate change. Sorry Alex go ahead. 

ALEX HANNA: No no, you're good. I I tried to do a full read through and and take down of the Andreessen Horowitz er uh Mark Andreessen's techno-optimist manifesto and it just was melting my brain. I'm glad that Paris Marx, who's over at uh the Tech Won't Save Us pod, you know did did did an article over on his blog on that. But it's absolutely yeah it's it's hard work to do this. And I said this in in in chat in uh in our group chat and Emily said we need that on some merch, which was our job, which is "ridicule as praxis." And really um identifying and and doing what that identifying what that hype is is exhausting, but you know it's even not the language of how we talk about metaphor and what that even is. Just because metaphors take are are taken upon and and run off and we see that just in here about foundation models, right. 

EMILY M. BENDER: Yeah. So I think I'm going to move us over to the Insight Forum tracker. This is a wonderful thing um that Tech Policy Press is providing um because uh Senate Majority Leader Chuck Schumer has been doing um a series of uh forums on um AI, and they are all covered in this long tracker. Um so Justin can you tell us a bit about the tracker here? 

JUSTIN HENDRIX: Yes absolutely. And I just want to call out Gabby Miller uh staff writer at  Tech Policy Press, who uh for those of you watching on Twitch at the moment um I don't think you uh quite see this screen yet, but perhaps you will. Um Gabby has put a lot of effort into this over the last uh bit, and has really paid close attention to this series of nine forums that were hosted by the Senate Majority Leader and the gang of four um so four uh senators who helped put these together.  

And you know they really covered the gamut. Um they went across a bunch of different issues, uh really invited folks in from a range of backgrounds. I'm sure we'll talk about the composition of uh that group and especially as you kind of step back now uh from uh it and you can kind of see the arc of all the forums that took place over the course of the fall, all the people that were involved. We can talk about you know what emerged as sort of patterns or trends about who gets an invitation to the Senate to talk about these things. 

Um but this was a labor of love that Gabby really did an enormous amount of work on. Um these were closed to the press um so it's not like reporters were there, they weren't able to watch uh these forums take place um, there are no official transcripts of you know the discussion that took place there. A lot of it was essentially on Chatham House I believe.  

Um and so you know uh Gabby is reporting, talking to people after the fact, what they were hearing in the room, um you know that sort of thing, is some of the only record we've got about what actually happened inside these forums. 

EMILY M. BENDER: Yeah, this is amazing and I hope you can see what I'm sharing here. So this is the um the large view of this wonderful table that tracks all of the people um. And I  noticed that the um--there's 55 who are listed as 'tech industry,' and then there's also a few venture capitalists and there's a 'defense industry' and there's consulting um uh 'corporation' I guess is also industry. Uh 'tech research'--like this this is heavily um you know VC, or probably some of that research stuff is you know uh effective altruist philanthropy-backed stuff. Um but there are also some folks listed as um 'civil society' and I saw labor unions um and a bunch of academics, um so this is an extremely valuable resource and I I kudos uh to Gabby and to Tech Policy Press for this overview. 

Um I think maybe it would be fun to go--"fun" in quotes uh--to the discussion of p(doom) in this uh uh latest one. Let's see Forum Eight was I guess the the fora episodes--the forum episodes  were organized by topics um so it's probably also interesting and that table has it to look at who got to go to which forum. Um but "Risk, Alignment, & Guarding Against Doomsday Scenarios" was the topic of Forum Eight. Which you know, big sigh um, and who's there? Um uh we've got the CEO and co-founder of Pindrop, um Vijay Balasubramaniyan. Um Amanda Ballantyne um from the AFL-CIO Technology Institute um, Okezue Bell from Fidutam, which I don't recognize. Yoshua Bengio from U-Montreal. Um I I think I don't have the time to read all these but folks can look later, we will certainly post the link. And um, huh, um. 

ALEX HANNA: It's a really interesting mix, honestly, because they've got some people uh like Renée Cummings and also Janet Haven um who are more civil society oriented um but then I mean are there to talk about I mean intervene hopefully and interrupt this kind of discussion of p(doom). But yeah Emily you really want to get into this question uh uh so I'll let you read it. 

EMILY M. BENDER: Yeah okay so um picking up from the middle of this reporting: "It started with a question posed by Senator Schumer--" As in Senate Majority Leader Schumer. "--asking the 17 participants to state their respective quote 'p(doom)' and quote 'p(hope)' probabilities for artificial general intelligence in 30 seconds or less." That's the statement in 30 seconds or less, not the um uh the you know AGI happening in 30 seconds or less. Um but this is this weird discourse that comes out of the longtermist crowd right. So p(doom) is what's your completely made up number that expresses your your guess as to the probability of AGI coming about and killing us all. And I guess p(hope), which is a new phrase to me, is the like it brings it all all all wonderful things. Um Alex take a look at the chat because you're going to have to sing what Abstract Tesseract put there for us. 

ALEX HANNA: [Laughter] Yeah he says "to the Pink Panther theme." [Sings] 

EMILY M. BENDER: Thank you. Okay so continuing with the article. "Some said it was quote 'surreal' and quote 'mindboggling' that p(dooms) were brought up in a Senate forum." Um sorry just a nitpick, this is the 'p(dooms)', that 's' should be outside the parenthesis. Anyway um, quote, "It's encouraging that the conversation is happening, that we're actually taking things seriously enough to be talking about that kind of topic here." And that's, "Malo Bourgon, CEO of the Machine Intelligence Research Institute (MIRI) told Tech Policy Press." So there's one of your longtermist AI doomers feeling happy that this discussion is happening um in the US se--Senate. So the question is, what's the p stand for? It stands for probability. 

Um uh, "Bourgon provided a p(doom) somewhere in the double digits but doesn't think it's particularly useful to try and nail down an exact number. 'I don't really think about it very much.' In quotes uh, 'Whether it's a coin flip or two heads in a row or Russian Roulette, you know any of these probabilities are all unacceptably high in my view,' he said." [Sigh] "Others rejected Senator Schumer's question outright. Quote, 'The threat of AI doom is not a probability, it's a decision problem and we need to decide the right thing. We're not just trying to predict the weather here, we're actually deciding what's going to happen,' said Stuart Russell, professor of computer science at University of California Berkeley in an interview with Tech Policy Press um." And Russell then was one of the uh main signatories of that petition in March um calling for the--the AI pause letter.  

It's what the next paragraph says. So it's frustrating to me that the voice objecting to the p(doom) question is still a doomer right. So Stuart's objection here--Stuart Russell's objection here is not that um we shouldn't be talking about AGI killing us all, because that's ridiculous and we should instead be talking about the actual harms that companies are are perpetrating um and perpetuating in the name of of AI--um but but instead uh Stuart Russell's point is well, we should just think about whether or not we're going to bring the thing about. So he's still entirely a doomer, he's just thinking about it non-probabilistically. 

ALEX HANNA: And I would say that also I mean the people that they got involved, so MIRI is this wild organization. This is Eliezer Yudkowsky's organization. And this is you know this guy is very very deep in in the kind of the longtermist just very you know--I mean he's one of these people that is really outside you know of academia, doesn't have a doesn't have a degree in in kind of anything related to AI or tech, uh but has been really kind of this go-to person um and this organization is kind of wholly uh committed to this. Um and then you have you know your others kind of in in in this arena, um Bengio who's taken kind of a a a doomer turn um. And I will say give prop I will give my rare props to Andrew Ng who did recently tweet something that said, 'Why are we really focusing on doom? There's plenty of other harms that we need to focus in the here and now.' And I'm like oh, a thing that I actually agree with Ng on, which is we need to focus on harms, um but finally kind of drew a line in the sand after I would say letting this play off way too long. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: Justin what do you what what's your thoughts on on-- 

JUSTIN HENDRIX: Well, I'd point out a couple of things. I mean one I mean as you've already noted, the fact that this conversation's happening uh in the United States Senate um and that the the people who are involved in this are some of the you know leaders in at least the--both the industry and academic uh space um. And even kind of looking at some of the titles is interesting to me, of some of the folks who participated in industry. I don't know if you scroll back up there to the list of folks but but if you kind of go around and you look at some of these things, I mean, I think it's interesting for instance that OpenAI has someone that's got the title of 'head of preparedness,' um you know whatever that might mean. Um, you know just it just shows you sort of how these things kind of go through the uh uh I guess the the mill of uh being addressed in industry and the extent to which uh there are structures being built around ideas like you know doom and hope [Laughter] when it comes to artificial intelligence. Um and you know clearly there's at least some uh thought amongst uh the senators that that you know we need to balance out uh our approach in order to hopefully optimize I suppose for hope um but I don't know whether this conversation quite got there or not um.  

Unfortunately this is one of the ones that I do wish there was a complete transcript of, that it  had been open to the press uh because what an interesting uh artifact it would be to to hear this one. Um I did want to point out you know when it comes to kind of like these more perhaps philosophical or or kind of uh almost like you know late night conversations perhaps about artificial intelligence that you know this one's a good example of, um there was a much uh I suppose different hearing um in the Senate homeland security committee this fall that was you know chaired by Gary Peters who's a a Democrat uh from Michigan on the philosophy of artificial intelligence um and it did focus much more on kind of you know actual problems with AI that we're we're facing at the moment, you know questions around labor uh in particular, questions around uh other implications of of these technologies. 

And I do think it's important for lawmakers to ask big questions, you know about which way society is going to go and what we want to happen and what do we not want to happen, that kind of thing, um but yeah I think I probably at the end of the day agree with you all and your critique here that maybe adopting uh some of this uh strange terminology and and way of looking at this is not as helpful as you might like. 

EMILY M. BENDER: Yeah it really does feel like a distraction, that that we want our policy makers thinking about protecting labor rights, about protecting privacy, about um thinking about what is a uh you know how what what are the--what's the onus on companies that are proposing to automate something um to be able to test how well the automation works and what kinds of transparency is required--like that's where the energy should be. Not you know having a sophomore dorm room conversation in the Senate. 

JUSTIN HENDRIX: I would say go back to--oh go ahead, sorry. 

ALEX HANNA: I was gonna I was going to plug that you know--further plug the the folks that were on that philosophy of AI uh uh panel, because it was there was three people right. It was it was Shannon Vallor, who has a great book um which is called "Technology and the Virtues," and really talking about what it means to kind of have a virtue ethic around technology in particular. And uh who was the second person we were talk--there was Margaret Hu but now I'm completely blanking on the second person. 

JUSTIN HENDRIX: Uh Daron Acemoglu uh whose name I hope I've gotten right, from MIT. 

ALEX HANNA: Yeah, Daron Acemoglu, who I think was really who's um written a lot on kind of the labor economy of AI and kind of labor displacement. And I think is is is is someone that has you know has written plenty on that and I think has focused on on things that are in the here and now um of kind of the most important threats that AI poses currently. 

EMILY M. BENDER: And I think that speaking of of using language carefully and and cultivating good habits and also providing examples that people can follow--um because we've got the doomers in the Senate, um because someone has taught Schumer the phrase p(doom), I think we have to be careful about phrases like um 'threats from AI' and we should talk instead about you know 'threats from corporations doing massive data processing,' and it's wordier but it's worth it um to to put the the people and the corporations in the frame. 

JUSTIN HENDRIX: I would also just add you know if you go back to Senator Schumer's original language, uh introducing these forums. You know one of the things that he said you know is that we have a North Star you know in this inquiry, and the North Star is innovation.  

Right so that was always going to be the kind of underlying idea behind these forums, that no matter what we do we're not going to get in the way of innovation. That's that's where we start and end uh with this. Um it's not you know a more just and equitable democracy you know, perhaps those things are things that we'd like to achieve, but first and foremost it's innovation. 

EMILY M. BENDER: Yikes. 

ALEX HANNA: You--that's a that's a good segue to to our uh audio clip from Schumer, eh? 

EMILY M. BENDER: Yeah I'll do that and then we'll hopefully have time to come back. Um so um we do have an audio clip from Schumer, and in this case it is his own words um not not ChatGPT um output that he's reading. Um and do we know the date on this? 

JUSTIN HENDRIX: So these are comments that Senator Schumer gave um I believe shortly after that last uh set of forums, um so they were just in the last uh couple of weeks. I'll find the exact date for you um while you play it. 

EMILY M. BENDER: All right, so here it goes. 

[REPORTER ON RECORDING]: Senator, have you talked to Sam Altman or Microsoft about the OpenAI board drama and what do you think-- 

[SENATOR SCHUMER ON RECORDING]: I've talked to Sam and um he's back back in charge and uh I think most people feel that's a good thing because he's such you know we want to stay in the lead on AI, um OpenAI has been leading the charge, there are other good companies big and small uh but I think there's a sigh of relief in the industry and probably in the country that Sam is back there. 

EMILY M. BENDER: Um so yeah, um we've got Senator Schumer talking about how great Sam Altman is um and that's you know alarming, and sort of speaking of him as if it's as if he's somehow a national treasure who is um in fact a savior, that is really important for him to be at OpenAI for the the interests of the people of this country. Like no it's not. 

ALEX HANNA: And I--Justin in the in the in the pre-show you were talking--well we I think we were all talking is that we didn't even know why he was kicked out yet, and you know there's been other things that that really awful things that have come out like, uh years ago his sister Annie Altman accused him and his brother of childhood sexual abuse. There was also the reporting by Nitasha Tiku in the Washington Post, talking about the really awful working environment at OpenAI, uh just kind of rife with kind of this awful management style. Um and then the kind of there was this although within tech the sort of the sort of the the the kind of theme that happened, the favorite conspiracy theories of people within AI is that there was this huge clash between Sutskever and and and Altman where Sutskever you know saw something, he saw this program called Q* that exhibited these super intelligent sort of traits, and he said we need to slow down and Altman said no we need to push ahead. So to for a for a sitting legislator, especially one that's at the head of the Senate to come out swinging in favor of Altman is really alarming. 

JUSTIN HENDRIX: It's interesting. This was you know November 29th, uh so you know as far as I'm aware uh no one really knew at that point uh why Sam Altman had been fired, apart from the board members. I remember maybe a day or so uh before or after, right around the same time you know, that kind of I suppose now slightly infamous uh Elon Musk appearance on stage at a New York Times event with uh Andrew Ross Sorkin, he was being asked you know do you know why uh you know Sam Altman was fired? 

And he said you know basically, no, you know it's all hush hush, very held very close to the chest. And um I don't know yeah I--I happen to agree, I think it was odd to see Senator Schumer essentially endorse this individual as if he uh himself was a sort of you know pillar of American artificial intelligence exceptionalism, uh you know is somehow a national asset uh who you know having him restored somehow makes the country uh safer or more likely to succeed when it comes to artificial intelligence. I think that's odd.  

You know, I mean on one level maybe it's not that hard to explain Sam Altman has a history of giving large donations uh to Democrats uh up and down the ballot, both at the local, state and federal level, um so I'm sure he's well known to lawmakers and uh when he appeared before the Senate earlier this fall, uh there were multiple senators who you know appeared to know him quite well, they'd had dinner with him the night before, they referred to him not as Mr. Altman but as Sam. Uh so you know uh obviously someone who's well known there. 

EMILY M. BENDER: I think Senator Schumer used 'Sam' in that clip just now too. Um so yeah, that's all very very alarming and I think just underscores the extent to which we have to be on guard for uh situations that we seem to have where the legislatures are looking to safeguard the interests of American corporations instead of the people in this country. Um and that's-- 

JUSTIN HENDRIX: Well of course, what is the difference, right? Um in this case uh if OpenAI succeeds then America succeeds apparently. Uh if Sam Altman succeeds then that's good for the country, you know, um um at least that's the insinuation. 

EMILY M. BENDER: So, given the late start should we go back to that one last artifact, should we transition to Fresh AI Hell, what do we want to do today? 

ALEX HANNA: I think we're I think we're uh according to my clock 47 minutes into the stream. So I think we got to go into the into Hell Time. 

EMILY M. BENDER: Okay, wait a minute I I promised you a a reasonably doable improv prompt this time, Alex. 

ALEX HANNA: Oh gosh, okay. 

EMILY M. BENDER: Okay, um so you are um in that conversation where everyone's getting asked their p(doom), um and you are um the only person in the room who can see it for the nonsense that it is. And you are also a reporter for Fresh AI Hell News. 

ALEX HANNA: Okay. [Laughter] Uh okay, so Schumer turns to me says, 'Alex, reporter for Fresh AI Hell News, what's the what's your p(doom) and what's your uh p(hope)?' And and I I say I have--first I reject the premise of the question and I'd rather talk about the probability of Hell, which is 100 percent. I know that's not a probability but there we go. 

EMILY M. BENDER: Excellent, thank you. 

ALEX HANNA: And then they then they kick me out of the hearing, whatever. 

EMILY M. BENDER: Yeah. Okay, so here we are, a little bit of Fresh AI Hell. Um and so, "UK officials use AI to decide on issues from benefits to marriage licenses." And this I think is a call back to what we were talking about at the beginning about how it's not just our you know representatives doing opening statements of hearings, but all sorts of elected officials and others--and I sorry my apologies for all these blinking advertisements um--want to use this to do their work because it's I guess the new hotness? Um so uh, "findings show uncontrolled and potentially discriminatory way technology used in Whitehall and some police forces." Um just you know--why?  

So, "An algorithm used by the department for work and pensions, which an MP believes mistakenly led dozens of people having their benefits removed--" Who could have guessed? "A facial recognition tool used by the Metropolitan Police has been found to make more mistakes recognizing Black faces than white ones under certain settings." Of course, uh shout out to Dr. Joy Buolamwini's and others' work on that, Dr. Timnit Gebru as well. Um, "An algorithm used by the home office to flag up sham marriages, which has been disproportionately selecting people of certain nationalities." And so on. Um it's like you--it's 2023, 2023 is almost over, this this article is a month old, 2023. We know that this is going to happen and yet people keep using these so-called AI systems. 

ALEX HANNA: It's like the greatest--it's like the greatest hits of terrible things government could be us--using AI on. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: This one [crosstalk] this one's--yeah this one is uh I feel like you know we I think we had talked about this a little before. So this is an article from--what's this publication? The Guardian. So, "Lost in AI translation: growing reliance on language apps jeopardizes some asylum applicants--uh applications. Translators say that US--the US US immigration system relies on AI-powered translations without grasping the limits of the tools." And so this is some reporting talking about this--so we can scroll down in this article um. Talking--starting with um a  profile on a man named Carlos who had fled Brazil with his sisters and two nephews uh after his son was murdered in front of him by local gan--gang. 

"Upon arriving in the US, he was separated from his family and detained in a US Immigration and Customs Enforcement (or ICE) detention center." Um and so um after-- "Carlos, who is Afro-Indigenous, speaks Portuguese but does not read or write it. Staff at the Calexio, California detention center spoke only English or Spanish. The staff used an artificial intelligence powered voice translation tool to interpret what Carlos was saying, but the system didn't pick up or understand his regional accent or dialect. So Carlos spent six months in ICE detention unable to meaningfully communicate with anyone." 

This is awful this complete denial of livelihood um and asylum--asylum rights because of these AI-powered translation tools. And this is an organiz--uh there's an organization, um shout out to Respond um Crisis Translation, RCT, which provides a lot of crisis translation work, especially in things like asylum claims. And you know because this is so uh--what they do is actually have human translators to help with these things, but this is so prevalent. I think we talked about uh Respond Crisis Translation's work um before on the pro--on the pod. So yeah just absolutely life or death stakes here in these applications. 

EMILY M. BENDER: It's--there's a bit here saying, "It didn't recognize Belo Horizonte as the name of one of the cities Carlos had lived in, instead translating it literally to 'beautiful horizon.'" And then of course the people on the other end, so you know asylum judges and case workers, are probably incentivized to find reasons to turn people down. And so if they get gibberish out, they are not incentivized to to go figure out what the person actually said. It's terrible. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: Um, more ads. Can I get rid of this one? Yes. Uh--  

JUSTIN HENDRIX: I'll just throw in something on that last one if you don't mind. 

EMILY M. BENDER: No, please. 

ALEX HANNA: Totally. 

JUSTIN HENDRIX: Um you know one of the um things that I uh had written about on Tech Policy Press about the uh Biden administration's AI executive order uh is the kind of sweeping mandate that it handed to the Department of Homeland Security, to essentially run uh the administration's policy on artificial intelligence. Um including you know uh obviously taking on issues around its application and uh all the various uh aspects of Customs and Border Protection, and you know the TSA and all other bits that DH--DHS operates um. And you know one of the kind of points I was sort of trying to leave the reader with was, you know here DHS has been essentially empowered, been given uh an enormous new set of responsibilities to lead--lead the country's uh you know development of these technologies and various applications. 

And yeah you look at sort of examples like this um and one would hope that perhaps the department you know uh will clean up its own act before it goes and essentially uh it sets policy perhaps for the rest of the federal government, or helps to set policy for the rest of the federal government. 

Um there was a Brennan Center report earlier this year as well that I'll just point out by Rachel Levinson-Waldman and José Guillermo Gutiérrez that looked at the DHS um and the problems with all of its sort of automated systems. The various flaws that expose lots of folks to um different forms of injustice and uh different perhaps infringements on their their rights and civil liberties. So um you know this is just one example I think of many um where uh automated systems or AI or you know large language models in some cases um have been applied by DHS and have brought people harm. 

EMILY M. BENDER: That sounds like a fantastic resource, that report you were describing, so we are going to pester you for a link for that for the show notes for sure. 

Um there was a--DHS put on a briefing about the um executive order that I managed to get an invite to, and one of the things that stuck with me--first of all they did not talk about cleaning up this kind of issue at all um--they also talked about how one of the things that they are charged with is 'safeguarding intellectual property' and they made a point of saying 'we're not talking about the intellectual property of creators whose data is scraped and used to create these systems.' 

They're talking about the intellectual property of the companies and the trained models. Um and and it was interesting to me that they flat-out owned that they were doing one and not the other. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: Yeah. Um. Okay uh shall I take us to the um Fresh AI Heaven thing that was the promised ending point here Alex? 

ALEX HANNA: Yeah totally, let's do it. 

EMILY M. BENDER: All right. So this is this is for you Alex. 

ALEX HANNA: Yeah, so this is uh a nice call back to our last episode. This is the prepared testimony and statement uh for the record of Dr. Sarah Myers West, who's the managing director of the AI Now Institute. And uh Dr. West was on our last show um talking about open source AI, um and so the this is a testimony submitted for the uh judiciary committee, uh and the Subcommittee on Competition Policy, Antitrust, and Consumer Rights. Uh the title of it being, "The New Invisible Hand? The Impact of Algorithms on Competition and Consumer Rights." 

So this is a really great take, if y'all haven't read it I really um really advise you reading it. Um so kind of scrolling down, a few points that um Dr. West makes in this is that three areas of concern and then three things in which um recommendations to the committee. First, in this test--she says, "In this testimony I highlight three core concern--areas of concerns I urge this committee to consider as urgent priorities for intervention. 1. Concentration among firms producing and deploying AI and algorithmic systems--uh systems risks creating single points of failure through which flaws introduced in one system could have ripple effects throughout the economy. 2. Algorith--algorithmic systems distort the market by enabling companies with preferential access to data to charge higher prices." And then, "3. There is a risk that these systems enable groups and individuals to be excluded from access to the market, including on the basis of membership in protected classes, thus scaling patterns of inequality." 

And then she says later uh down um, "I also offer three broad paths forward in terms of how we can start to address these harms proactively. 1. We need to use existing enforcement mechanisms to ensure strong oversight of this sector by robustly resourcing the agencies with existing authority." And this, for background Dr. West was an advisor--senior adviser for the FTC prior to being at open--at AI Now. She continues, "We already have a range of enforcement mechanisms that can be applied to anti-competitive and harmful uses of AI and algorithmic systems. 2. Second, we need specific bright line rules to curb AI use whereas has been demonstrated--where it has demonstrated harms to consumers and competition. The passage of a federal data privacy law, including a strong data minimization mandate, should be an urgent priority given that it serves as a potent antidote to a range of algorithmically enabled harms, including harms to competition." And then, "3. Lastly, we need legislation to tackle the market structure and gatekeeping--gatekeeper power of dominant digital platforms, which hold an unprecedented amount of economic and political power."  

So really appreciate this, and this is based on Dr. West's research and kind of her time in federal enforcement. And I really like this because I think Justin you're hinting at this, the kind of need for some good legislation on the books that focus on privacy, the idea that privacy is something that we have a lot of demonstrated scholarship on, while these companies are rushing to build these tools and say 'the legislation--the regulation can't keep up.' Uh no it goes back to data and it goes back to privacy and stuff that we've talked about for years. 

EMILY M. BENDER: Yeah, absolutely. And I like having this here too at the end of our Fresh AI Hell segment as a reminder that there is also some good stuff going on in these hearings. There's fantastic experts like Dr. Sarah Myers West, um there are policy makers who are really focused on the rights of individuals and communities um and not focused on um protecting the ability of companies to do whatever they want um, and to make piles and piles of money at the expense of the rights of individuals and communities. So um that is as we said at the beginning uh we were dumping on that which needed to be ridiculed but there is also some good stuff going on. 

JUSTIN HENDRIX: I would just maybe double down on this idea that you know federal data privacy uh you know uh sorely needed and something that has uh come close to being possible just in the last uh you know year or so, uh with especially the American Data Privacy and Protection Act, um you know essentially finding bipartisan support. Uh never getting a vote in the Senate and again it's uh you know Senate Majority Leader uh Chuck Schumer um where apparently you know that that got held up alongside other Senate Democrats. Maria Cantwell, um the Congressional delegation from California skeptical about how it would sort of perhaps interfere with the uh California privacy law. So you know um unfortunately--maybe not to end on a a sour note but the if if the Senate and and other lawmakers continue to look for maybe exotic new legislation or exotic new regulations that should be passed to address AI uh while they consider exotic problems like p(doom) or whatever, um we're not doing some of the maybe basic things we need to do in this age of you know artificial intelligence, if we want to call it that, um which is perhaps just address you know basic privacy protection. 

EMILY M. BENDER: Yeah. Well--

ALEX HANNA: Totally. 

EMILY M. BENDER: Totally. Agreed. 

ALEX HANNA: [Laughter] Amen. Well I think that's it for this week. Justin Hendrix is editor of the Tech Policy Press. Thanks so much for joining us Justin. 

JUSTIN HENDRIX: Well thank the two of you uh and I also thank you for joining my podcast  earlier this year. Thank you so much. 

EMILY M. BENDER: We'll link to that episode too in the show notes absolutely.  

ALEX HANNA: Great to do a little tradesies. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor. And thanks as always to the Distributed AI Research Institute. If you like this show you can support us by rating and reviewing us on Apple Podcasts and Spotify and by donating to DAIR at DAIR-Institute.org. That's D-A-I-R hyphen institute dot org. 

EMILY M. BENDER: Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.tv/DAIR_Institute. Again that's D-A-I-R underscore Institute. I'm Emily M. Bender. 

ALEX HANNA: And I'm Alex Hanna. Stay out of AI hell y'all.
