Mystery AI Hype Theater 3000

Episode 30: Marc's Miserable Manifesto, April 1 2024

Emily M. Bender and Alex Hanna Episode 30

Dr. Timnit Gebru guest-hosts with Alex in a deep dive into Marc Andreessen's 2023 manifesto, which argues, loftily, in favor of maximizing the use of 'AI' in all possible spheres of life.

Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to that she was co-lead of the Ethical AI research team at Google, until she was fired in December 2020 for raising issues of discrimination in the workplace. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility and health of Black people in the field of AI, and is on the board of AddisCoder, a nonprofit dedicated to teaching algorithms and computer programming to Ethiopian high school students, free of charge.

References:
Marc Andreessen: "The Techno-Optimist Manifesto"
First Monday: The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence (Timnit Gebru & Émile Torres)
Business Insider: Explaining 'Pronatalism' in Silicon Valley


Fresh AI Hell:
CBS New York: NYC subway testing out weapons detection technology, Mayor Adams says.
The Markup: NYC's AI chatbot tells businesses to break the law

The Guardian: DrugGPT: New AI tool could help doctors prescribe medicine in England
The Guardian: Wearable AI: Will it put our smartphones out of fashion?
TheCurricula.com


You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Timnit Gebru: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in the age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.  

Alex Hanna: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. 

I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. 

Timnit Gebru: And I am not Emily Bender, as you might have guessed. I'm Timnit Gebru, Founder and Executive Director of the Distributed AI Research Institute, um, or DAIR. Emily will be back with us next time. Um, so this is episode 30, which we're recording on April 1st of 2024. 

And I am so excited to announce that, um, we are going all in on AI safety.  

Alex Hanna: That's right. We've finally seen the light. AI really is all-powerful. It does pose an existential threat to humanity. And we're going to put all of our resources into making sure it doesn't replace us as the dominant species on planet Earth.

The new name of this podcast is Mystery AI Hope Theater 3000.  

Timnit Gebru: I couldn't even get the words out of my mouth. I thought this would be a cute April Fool's joke, but that's just about as much as I can handle, so.  

Alex Hanna: Oh, thank God. I was about to pass out. What are we actually talking about today?  

Timnit Gebru: Well, our favorite people, billionaires like Marc Andreessen, would have you believe that the opposite of doomerism and AI safety is what he is preaching, techno optimism, the belief that limiting AI's uses will only hurt humanity in the long run, um, that societies, quote unquote, "grow or die, like sharks," he wrote, and apparently it also gets sharks completely wrong. Um, and he put out a manifesto last fall that outlines all of this and more in rather breathtakingly grandiose language.  

Alex Hanna: So much like sharks ourselves, we've been waiting for our chance to savage this manifesto to death, and we also want to take a closer look at the group of beliefs that tech CEOs, our would be overlords, have been using to guide disastrous decisions about what technology is worth investing in. 

You've probably seen the acronym TESCREAL, coined by Timnit and Émile Torres, by now. We're going to break it down and explain why it's all nonsense. But dangerous nonsense. So let's go ahead and get into this thing. I have to say from the top, this was very difficult to read.  

Timnit Gebru: I'm very resentful that you made me do this. Resentful.  

Alex Hanna: Look, look, look, you know what? It's, it's, you, when you agree to come on the pod, you gotta play by, you gotta play by the rules.  

Timnit Gebru: Yeah.  

Alex Hanna: Uh, so this is on, this is on a16z's website, um, so which is, uh, Andreessen Horowitz's, uh, um, uh, the venture capital firm's website, and, uh, it starts with three, um, epigraphs. 

The first is, um, I'm going to read these in full. because they're bad. Um, so the first is by Walter, uh, Walker Percy, who if you don't know him, there's no surprise. He is, uh, well, actually I, I, this is someone else that I was going to mention, but this one reads, "You live in a deranged rage -- more deranged--" 

Timnit Gebru: Rage. Should be 'age,' but--  

Alex Hanna: Yes. "More deranged than usual, because despite great scientific and technological advances, man has not the faintest idea of who he is and what he is doing." 

Oh dear. Okay, and then the second one by Marian Tupy. "Our species is 300,000 years old. For the first 290,000 years, we were foragers subsisting in a way that's still observable. Among the Bushmen of the Kalahari and the Sentinelese of the Andaman Islands. Even after homo sapiens embraced agriculture, progress was painfully slow. A person born in Sumer in 4,000 BC would find the resources, work, and technology available in England at the time of the Norman Conquest, or in the Aztec Empire at the age of Columbus, quite familiar. Then, beginning in the 18th century, many people's standard of living skyrocketed. What brought about this dramatic improvement and why?" 

Timnit Gebru: 'Many' is doing a lot here. Many peoples.  

Alex Hanna: Yeah, it's, it's, this is by Marian Tupy, who, uh, owns the, the domain HumanProgress.org and is also a fellow at the right-wing Cato Institute.  

Timnit Gebru: Fun. Wonderful human.  

Alex Hanna: Yeah, great, great, great human. And then lastly, Thomas Edison saying, "There's a way to do it better. Find it." Oh my gosh.  

Timnit Gebru: They have found it, I guess.  

Alex Hanna: I mean, there's already a lot there in this, in these epigraphs, so yeah, like, what do you think, Timnit?  

Timnit Gebru: I, I just, you know, um, uh, I, uh, the Bushmen, right, um, and then, you know, apparently, before colonization, everybody was, uh, in the dark, not doing anything, starving, and then colonization came, and many people's standard of living just skyrocketed. 

Alex Hanna: Right.  

Timnit Gebru: And it's gonna continue to skyrocket, boundlessly.  

Alex Hanna: Yeah. And I mean, it's, it's, it's just like beyond endemic here. Colonization happened, we had Europe. What happened? I mean, yeah, colonization. Uh, the industrial revolution and the development of capitalism. Um, and then everybody's, everybody's, you know, everybody's cost of--everybody's improved.  

Timnit Gebru: Lies. We are being lied to. We're being lied to.  

Alex Hanna: So we're jumping in. So we're going to read the other first bit. So it's very, very dramatic. I mean, this, first off the writing of this is absolutely horrible. So the first subheading is "Lies." "We are being lied to. We are told that technology takes our job, reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is ever on the verge of ruining everything. 

"We are told to be angry, bitter, and resentful about technology. We are told to be pessimistic. The myth of Promethe--Prometheus and various updated forms like Frankenstein, Oppenheimer--" I found this to be funny because Oppenheimer was like a real guy and not a myth."  

Timnit Gebru: Yeah, Frankenstein, uh, Termin--then Terminator. 

Alex Hanna: Then Terminator.  

"--haunts our nightmares. We are told to denounce our birthright, our intelligence, our control over nature, our ability to build a better world. We are told to be miserable about the future." 

Timnit Gebru: I mean, we're not told to be miserable about the future. We're told to not have the future that these dudes want and like are making for us, right? 

We're saying we can be in control of our future and not just hand it down to these bozos over here. Like, I don't know. When I was reading it, I was trying to imagine how he wrote it. It just feels like this, this like, (aggressive tone) " Raaah, hadra ra ra ra ra rah, lies, toll, technology, go, let's go!"  

Like, that's how I'm feeling, you know? 

Alex Hanna: Yeah. And it's just like, calm down, dude. Are you okay? And for those of you who, like, aren't seeing the writing, I mean, the writing is very, you know, is very jumpy. It's, you know, the line breaks are all, you know, you know, pretty, pretty, pretty ridiculous. I mean, it's just one sentence paragraphs. Um, yeah.  

Um, and just in the chat, yeah. I mean, the kind of MedusaSkirt points out that, "Oppenheimer is literally the story of, 'I did a thing and then I regretted it.'"  

And then IttyBittyKittyCommittee, great username, saying, "Since when is our birthright control over nature?" Yeah.  

Timnit Gebru: Right, we are in nature too, aren't we? I'm, I'm so confused what that even means. 

Alex Hanna: Well, it's a very particular sort of view of what the human is. Right. I mean, this is a very, you know, like, and just to outline, I mean, this is a kind of, you know, you know, just to give us some background, and maybe you can tell us more about how these folks fit into the, the TESCREAL bundle, Timnit--the kind of ideas like this document outlines for, for, um, Andreessen, for other folks, um, that call themselves, you know, effective, you know, accelerationists, um, that, you know, they are trying to effectively set a counter to existential risk people, but from within kind of the same, um, view of effectively saying, well, we still believe AI is very powerful. 

But we're going to use this for a particular sort of destiny that we're going towards, then. So to me, I'd love if you could even take the step back and say about, you know, how these, this type of folk, this, this type, these types of folks fit into the consistency of the TESCREAL bundle of ideologies. 

Timnit Gebru: Yeah, so, um, I wanted to write a paper with Émile P. Torres, who is, um, a philosopher and was in that TESCREAL bundle. So they know these, these kinds of thoughts, um, very well. And so I, you know, like the whole AGI, artificial general intelligence thing was giving me eugenics vibes. You know, and, um, and I had been around the effective altruists for a long time. 

And I kind of, you know, just tried to get away, uh--from them. Like, I never really paid them that much attention until it just seemed to be their money driving the field of AI. And, um, this whole AGI thing basically has taken over the field of AI. So we started writing this paper, and it just became a bit unwieldy to talk about all of these people. Like, remember, Sam Altman, who used to be kind of an effective altruist, but then talks about this cosmism thing, but then talks of this and this, and I was like, look, I don't think we can write this paper, because it's becoming so, you know, we have to remind everyone what, like, this person was believing, that person was believing. 

And then Émile was like, what if we just define the TESCREAL bundle first and we come up with this weird acronym, you can veto it because it's such a strange acronym. And then we kind of talk about what it is, and then we can just move on to that conversation. That's how we came up with this acronym. 

And so TESCREAL stands for transhumanism, extropianism, singularitarianism, and we, we're, we're going to get to the singularity part here too in this, in this manifesto. And then what's-- cosmism, and he talks about, you know, going to space and colonizing space and stuff here. Uh, rationalism, effective altruism, and longtermism. 

And it was very, um, useful to come up with this ter--with this term because you see that not all of them are exactly in one of these ideologies, but they're sort of like around them all the time. So the transhumanists, um, are basically direct descendants of the, um, Anglo American eugenics tradition, right? They, they want to not, they're--and they're more radical. 

So they not only want to create a superior human species, but they want to create a superior just species altogether, altogether, a posthuman. So, you know, you have the extropians, the cosmists, the extropians. This sounds like the extropians to me, they have, they used to have, you know, they were like around in the 90s, they have a listserv. All your favorite people, like Nick Bostrom, Ray, uh, was there, was in it. Um, the, uh, what's his name? Oh, wow. I forgot his name. Oh, yeah. Eliezer Yudkowsky, who, you know, his claim to fame is writing a Harry Potter fanfic. And he has a blog post. And guess what? He can just write op-eds in Time Magazine talking about, uh, kill, you know, bombing data centers. 

So they're all in there. Um, and so this whole conversation about boundless expansion, intelligence technology, they have these five principles or four or five principles. Um, and this to me sounds exactly like the extropians, the whole, you know, boundless expansion, infinite things, whatever. And so I wonder if they knew--the effective accelerationists knew about the extropians, that they would even come up with a new name, right? 

And then you have the rationalists and effective altruists and the longtermists. The longtermists sort of say that, hey, if we're going to have so many billions and trillions of people in the world, then they can live digitally happily ever after and colonize space. And so, even preventing the existence of those, you know, happy digital beings is, is, is, is immoral. It's, it's murder.  

And he kind of says that here too, which is really interesting. So, you know, these are the people driving our technological future. Isn't it fun?  

Alex Hanna: Yeah, right. And I mean, it gets--to get into, I mean, the next two parts are, are really, kind of get at the heart of it. So it's worth reading these in full. 

So the next part of this, the big title is "Truth." Um, our, so our, so this is, this is what they're saying. "The truth is our civilization was built on technology. Our civilization is built on technology." And I don't know why there's an emphasis on that. "Technology is the glory of human ambition and achievement, the spearhead of progress and the realization of our potential. For hundreds of years, we properly glorified this until recently. I am here to bring the good news." Like Jesus Christ himself and Paul and his apostles. Um, it is Easter, by the way. Um, we can, there's not, that's not in the text.  

"We can advance to a far superior way of living and of being. We have the tools, the systems, the ideas. We have the will. It is time once again to raise the technology flag and it's time to be techno optimists."  

And then we get into the thing about technology. I'm not going to read all this but there's a few things that are absolutely absurd.  

So-- 

Timnit Gebru: Is this where he talks about depopulation? 

Alex Hanna: Yes, yeah.  

Timnit Gebru: There are only three sources if you--  

Alex Hanna: Oh, let me read--yeah, let me read that first, and then I want you to read that. So, tech--so, "Technology" in big caps. "Techno-optimists believe that societies, like sharks, grow or die." And this, and this set off our producer, Christie Taylor, to be like, that's not what sharks do.  

And so, yeah-- 

Timnit Gebru: 'I hate bad science, that's not what sharks do.'  

Alex Hanna: Yeah. Bad science. We hate bad science over here on the pod. And, you know, we should also say throughout this entire, entire piece--Christie says in chat, "Ha, you captured my voice person-- perfectly." 

 (laughter) We should also say that throughout the piece there's, there's, there's no citations. There's just like a list of people you should read at the end of this.  

Timnit Gebru: Um, or music you should listen to. 

Actually, that's what I'm going to do now and in my papers. Yeah. Hey, have Missy Elliot or somebody, you know, that captures what I'm trying to say.  

Alex Hanna: Hey, I'm down, I'm down with that.  

So he says, "Techno optimists believe that societies, like sharks, grow or die. We believe that growth is progress leading to vitality, expansion of life, increasing knowledge, higher wellbeing. 

We agree with Paul Collier when he says economic growth isn't a cure-all, but lack of growth is a kill-all."  

Um, and then going down, why don't you read this, there are only three sources piece, because, yeah, let's talk about this.  

Timnit Gebru: Yeah, "There are only three sources of growth. Population growth, natural resource utilization, and technology. Developed societies are depopulating all over the world, across cultures. The total human population may already be shrinking."  

This, uh, this reminds me of the pronatalists. I mean, he probably is one, you know, they're also in the TESCREAL bundle, right? Like, like Elon Musk and them. Julia Black had a great article, when she was at Business Insider, on the pronatalists in Silicon Valley. 

They're, like, funding all of these, um, ways for, of course, you know, white people to have kids.  

Alex Hanna: Yeah.  

Timnit Gebru: And it's not, it's not us that are depopulating. Right, we're overpopulating. It's probably how he's feeling about it.  

Alex Hanna: Well, that's what, that's what, that's what 'developed societies' is, you know, that's a, that's effectively a watchword for, you know, he's being like, Black and Brown people, their populations are growing. The, the barbarians are growing, but the white, the white people are shrinking, right? I mean, that's effectively, uh--  

Timnit Gebru: The great replacement theory.  

Alex Hanna: Yeah. So it's effectively replacement theory, you know, you know, here. So, yeah, I mean, that's effectively saying, so it does have this kind of convergence with the pronatalists effectively, you know, we need to, we need to promote good, uh, you know, good eugenics, we need to promote good racial hygiene. 

I mean, it's very much, you know, in that, in that vein.  

Timnit Gebru: He's not being subtle about it at all.  

Alex Hanna: No, not at all. Not at all. Yeah. I mean, this is, and so then here, down here, I want to hit on: "Economists measure technological progress as productivity growth." Um, which, first off, that's not true, because productivity growth is a measurement of how much output you have as a, as a function of inputs. Technology is one portion of it, but it can also deal with, um, how much labor you have and differing labor market inputs. And so he's effectively, I mean, you know, selecting on a dependent variable by saying, you know, we're going to measure productivity and technology as one. Um, so. That's effectively just wrong. And then he says, uh, "We believe this is the story of the material development of our civilization. This is why we are not still living in mud huts--"  

Timnit Gebru: All right, yeah.  

Alex Hanna: "--eking out a meager survival and waiting for nature to kill us. This is why we, our descendants, will live in the stars."  

Timnit Gebru: We see the cosmists. This is the cosmist um, stuff. Sam Altman also talks about the stars and colonizing the stars. Why can't they like think of a world they're not colonizing? 

I don't understand. Just like that imagination does not exist.  

Alex Hanna: Yeah, there's always--colonization is always in this, and I think further down he says something like, 'There is no more physical frontier, and now we have to approach the technological frontier.' So effectively, like, okay, there's no more native people to kill, so we gotta, we gotta do other types of colonization, right? Um, and so, I mean, it's, it's really just incredible. 

I mean the, the, I mean the, the racism in here is not subtle at all. It's pretty, pretty overt. 

Timnit Gebru: Oh, the other part in this, in this section was like, "We believe that there is no material problem, whether created by nature or by technology, that cannot be solved with more technology." It's just like.  

Alex Hanna: Well, this is so, this is, this, this group is really great. 

So every line here, and I, I just, I cannot emphasize how disjointed this is. I mean, this man, like, surely gave--you know, went to a grad student said, you know--no, that's, that's too, he didn't go to anybody in, in, in, he wrote it, there was no one who edited it, you know, it's, it's very possible ChatGPT, you know, like had a hand in it. 

So, but this, this group here is, we had the star-- "We had a problem of starvation, so we invented the Green Revolution."  

Timnit Gebru: And nobody's starving anymore, don't you know?  

Alex Hanna: Don't you know? "We had a problem of darkness, so we invented electric lighting. We had a problem--we had--" Didn't we have fucking candles? Incredible. 

"We had a problem of cold, so we invented indoor heating." Didn't we have fire? "We had a problem of heat, so we invented air conditioning." Uh--  

Timnit Gebru: And then the air conditioning is too much. So you have to turn the heater on too.  

Alex Hanna: You have to turn the heater on, oh my gosh.  

Timnit Gebru: And that solved the air conditioning problem. 

Alex Hanna: "We had a problem of isolation, so we invented the internet."  

Timnit Gebru: And people are all so un-isolated now, right? They're, they're as social as ever with the internet. They're not isolated at all anymore.  

Alex Hanna: It's such a, it's such a nonsensical thing to say. And it's such a wrong statement to say we had a problem with isolation so we invented the internet. 

First off, not why the internet was invented. Second off, I think most, uh, surveys of kind of, like, having trusted people, you know, you know, there's the kind of bowling-alone hypothesis that Robert Putnam and people, um, in that tradition have written about, and the General Social Survey, that tend to show that, like, people have fewer close friends, at least in the U.S., than they did 30 or 40 years ago.  

Um, there's things that urbanists, uh, have written about, about the lack of kind of third spaces.  

Um, so no, we didn't have a problem of isolation.  

Timnit Gebru: And they've been writing about how young people, especially in the West, where they're always on their phones and stuff, are spending less time interacting with each other, um, than they did, like, 20 years ago, right? But we can't cite anything. Apparently that's the, that's the rule here, we're gonna have to cite, you know, a song about it or--  

Alex Hanna: Yeah.  

Timnit Gebru: --just tell him like vibes, right? Like it's all it's all vibes here.  

Alex Hanna: The whole thing is vibes. The whole thing is Nietzschean vibes. Incredible. Um.  

"We had a problem of pandemics, so we invented vaccines."  

Timnit Gebru: And now we won't have pandemics anymore.  

Alex Hanna: Right?  

Timnit Gebru: It's all fixed.  

Alex Hanna: We fixed the, we fixed the pandemics.  

"We have a problem of poverty so we invent technology to create abundance."  

Timnit Gebru: And now there's no poverty. It's over. It's fixed.  

Alex Hanna: We did all the thing. 

"Give us a real world problem and we can invent technology that will solve it."  

Timnit Gebru: The problem is he's blocked me on Twitter so I can't ask him to do this for us.  

Alex Hanna: I thought we, I thought you were, he might be lonely without your presence. 

Timnit Gebru: He blocked me. I can't, I can't ask him to build the technology that would solve my loneliness. 

His E, you know, they had, they funded this E girl, E boyfriend, um, site that they had, they called it E girl, but now they changed the name of it. Maybe they, at least they understood that probably that wasn't how they should advertise it, but that's how they're solving people's loneliness. 

Alex Hanna: Well, don't you, don't you, don't you know, love doesn't scale, which he says down in the next piece. 

So, so the next piece is called "Markets."  

Timnit Gebru: Yeah.  

Alex Hanna: And so it's effectively, you know, it's, it's capitalism, rah, rah. He cites all our fun, uh, favorite neoliberals, such as Friedrich Hayek, Adam Smith. Um, and, and kind of different Chicago school economists. Um, you know, he says, you know, we're not, we're opposed to centralized planning and communism. 

Um, but the piece that I mean, and this is, this is like a bit, you know, you know, a bit, uh, you know, we've seen some of this, but there's this piece here where he says, "David Friedman points out that people only do things for other people for three reasons: love, money, or force. Love doesn't scale, so the economy can only run on money or force. The force experiment has been run and found wanting. Let's stick with money."  

Oh my dear, so there's so much, so much here. 

I mean, first off, I don't know who David Friedman is, but if you know, in the chat, please, please let us know. I imagine he's some, one of these terrible, um, you know, either Chicago school economists, or maybe it's just some dude with a blog, who knows. Um, but it's, it's so wild that, I mean, effectively the argument here being that, uh, people are not allowed to be altruists in any kind of sense. They are effectively, they only do things, um, basically for, you know, for these kinds of things.  

And I'm just like, that's, what a, what a, what an impoverished view of the human condition.  

Timnit Gebru: Of humanity-  

Alex Hanna: Right?  

Timnit Gebru: --in general.  

Alex Hanna: Yeah. It's just, I mean, right, it's, it's, they're really, yeah, they're, they're very much, they're very much telling us. 

Timnit Gebru: It's so interesting, like, the way he writes, you know, "David Friedman points out that X," and like, is it based on, is it based on what, just vibes, research, anything? It doesn't matter. We should believe it, because David Friedman says that, and he points it out. So we just gotta take it for granted, you know, that's the way he's citing his sources.  

Alex Hanna: Yeah, and ArcaneSciences and MedusaSkirt in the chat say David Friedman is an anarcho-capitalist economist. And MedusaSkirt says, "He's the son of Milton Friedman and follows in his footsteps as being the worst kind of economist." Yeah, so I mean, this is a very particular type of, type of individual in the citation.  

Timnit Gebru: Is this where he says we believe in market discipline?  

Alex Hanna: Yeah, so he says, so he says, "We believe in markets, to quote Nicholas Stern, we believe markets are the way we take care of people we don't know. We believe markets are the way to generate societal wealth for everything else we want to pay for, including basic research, social welfare programs, and national defense." So effectively saying that, you know, the markets are going to provide. So I mean, this is a bit of a standard libertarian kind of vibe. 

You know, we don't actually need social welfare programs. We effectively need market incentives to find ways to help people when markets are, um, when markets are doing something that is helping everybody out.  

Timnit Gebru: But you know, the thing is like what, what drives me nuts with, with their arguments is that they have the highest government handouts. 

All these people who talk about the free market and how we just got to let the market decide get the most amount of money from the government, whether it is during wartime to create some military, um, related stuff, which is why they're all in on military and AI now, right? Like Marc-- Andreessen Horowitz is starting to invest a ton in this kind of stuff. 

Or, you know, like Elon Musk and what kind of tax, how much of our tax money he took. So there's all these people who talk about the free market and such. Well, maybe, maybe, okay, let's, let's see if the free market works then. Don't take the government handouts if that's what you really believe in. But they get most of the money. 

Alex Hanna: Yeah, exactly. I mean, you have so much of the way in which there is government subsidy in making these organizations you know, successful. I mean, you know, there's so much in this, I mean, even kind of thinking about the development of like the early internet, you know, the early internet would not have existed without government, you know, the development of ARPANET, the kind of NSF investment in, you know, the internet. 

I mean, there's so much here in which there is foundational investment from government funds now. I mean, it's, it's very ahistorical to even say that, that, that it is not so, right?  

Timnit Gebru: And stealing, like stealing of everyone's data. So theft. We're not even talking about the free market here. It's actual theft that is currently going on. 

So, you know, but no he d--and, and I think he admits at some point, not in this one, that if you had to have copyright protection or anything else, that like the current, these current systems wouldn't exist. I'm like, okay, that's theft. That's not free market, sir.  

Alex Hanna: Yeah, yeah. So getting down here, there's a few other knee slappers I want to hit. 

So, "Technological innovation in the market system is inherently philanthropic--"  

Timnit Gebru: Oh, yeah.  

Alex Hanna: "--by a 50 to 1 ratio. Who gets more value from a new technology, the single company that makes it or the millions or billions of people who use it to improve their lives?" And what gets me is that he ends this with, "QED." Like--  

Timnit Gebru: I remember that. 

Alex Hanna: Like he just finished like he just finished a fucking proof.  

Timnit Gebru: Yeah, yeah, I was like, QED, proof. See music in references. 

Alex Hanna: Right.  

Timnit Gebru: Verse one.  

Alex Hanna: Yeah, I know. QED, assholes. Oh my gosh. 

Timnit Gebru: Merit. This is, this is merit. We see it.  

Alex Hanna: This is, this is merit. We're seeing it. Um. There's some other things I want to hit here, like the talk of universal basic income. "We believe a universal basic income would turn people into zoo animals to be farmed by the state. Man was not meant to be farmed. Man was meant to be useful, to be productive, to be proud."  

And I'm like, first off, mixing metaphors: zoo animals are different from farm animals, first off. So.  

Timnit Gebru: As a farmer.  

Alex Hanna: As a farmer, I have, I have my chickens and I just put my plants in the ground. Very, and then, and then, "farmed by the state." 

How are they actually farmed by the state? Like, I don't actually understand what the state gets back from a UBI.  

Timnit Gebru: You feed them, you feed farm animals. I'm trying, I'm trying to work with him here and I, I don't understand. I mean, it's not like I'm into UBI. It's, it's the same sort of idea that's created by these people anyway. 

It's funny, like--  

Alex Hanna: UBI, different, different instantiations of UBI, right? I mean, because UBI on, on the merits has its, has its own merits, but their kind of idea of UBI is very much tied to--  

Timnit Gebru: I don't like Sam Altman's idea of UBI, where, like, they get to have all the money and we have the minimum little sustain--sustaining-all-of-us money, just to consume their stuff, right? 

I like, you know, Michael Tubbs idea of UBI that he tried out in Stockton. But that, for me, this whole thing is like, um, infighting with a, in a, in a family that I just don't want to be close to, right? Like, he's infighting with other people in the TESCREAL bundle because they have their existential risk, let's pause AI, you know, it's, it's, it's gonna, dominate us and kill us all and all that. Maybe we should have UBI and all that.  

So he's like addressing them, you know, he's not even addressing us, like we're not really people to talk to, but, um, he's addressing all these people.  

Alex Hanna: Yeah, he's addressing kind of UBI advocates that come from a particular sort of tradition. 

He's addressing existential risk nonsense, etc. Um, okay, we've got, we've got 13 minutes left to go through this. What are, to me, what are the other, like, best hits you want to hit before we end this?

Timnit Gebru: I mean, I remember I was just texting you as I was reading this, like, why are you making me do this? Um, um, "Smart people in smart societies outperform less smart ones on virtually every metric we can measure. Intelligence is the birthright of humanity. We should expand it as fully and broadly as we possibly can."  

Alex Hanna: Yeah. So this is under the "intelligence" section. I also like marked this in bright red, like, oh my gosh. Yeah.  

Timnit Gebru: If I had to choose the best hit in this particular section, it's, "We believe artif--we believe artificial intelligence is best thought of as a universal problem solver, and we have a lot of problems to solve." And like, this is basically the whole AGI people--all of them write this in different instantiations. It's like, if you go to one of DeepMind's blog posts, it's like, oh, you know, AI has already started, um, fixing climate change.

And eventually we'll figure out how to create the algorithms that will also solve all the other problems that we have. It's like, you're trying to create a machine god, right? Um, and you know, it's like, I can't solve the problems, but I'll make the thing that solves all the problems, if you give me 7 trillion dollars.

Alex Hanna: Right. Or a hundred-billion-dollar supercomputer, or whatever that, that, uh, Altman and Satya are saying they're building. And the thing is, this thing called the universal problem solver--you know, like you've been posting a little to LinkedIn recently about, like, expert systems.

Timnit Gebru: Yeah.  

Alex Hanna: And like, there's been things called the universal problem solver before. Right. And they failed spectacularly.  

Timnit Gebru: Ever since the inception of AI yeah.  

Alex Hanna: Yeah. Like, yeah, there's very much, you know, different aspects of that, right. And so there's, there's been things called universal problem solvers, right. Um, and you know, there's no such thing.

Timnit Gebru: Yeah, it's like, oh, hey, um, we can solve poverty, but that would make it such that I would have to give up some amount of my privilege. It's a social problem. That's not fun. It's more fun to create a machine god. Like, I want to think about that, you know? And so, yeah, I don't know how to do it yet.

Although, according to this, I think we had already fixed poverty. I'm kind of confused. He says we have a lot of problems to solve, but--  

Alex Hanna: I think we fixed it, and then, but then somebody thought somebody should get, you know, then there was a welfare state that came along, and then we had more poverty because people were zoo animals to be farmed. 

Um, there's some things here that also, you know, that also we gotta hit, like, "We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from--" "Deaths that were preventable by the AI that was prevented from existing is a form of murder." This is a very longtermist, you know, thing.

Timnit Gebru: It's like, yeah, leaving, not letting the future digital colonizers exist is murder. It's murder. So, you're a murderer.  

Alex Hanna: It's kind of a slightly different argument, but, you know, it's more like, um, there are deaths currently that could be preventable by, by AI now.

So, for instance, he would sort of say, instead of--yeah, instead of the 10 to the 54th, you know, children, as Arcane Sciences said in the chat, um, that many children that are potentially--it's more like there are people that may exist in the near future. But because we didn't allow an AI tool to, uh, you know, fix climate change somehow and prevent climate refugees, um, then, uh, you're actually murdering them.

So it's a slightly different type of fake murder, but it is still fake murder.  

Timnit Gebru: Yeah, it's a cousin. It's a cousin of the--  

Alex Hanna: It's a cousin. Yeah.  

Timnit Gebru: Yeah. Yeah. Let's move to energy maybe.  

Alex Hanna: Yeah, let's go to energy. And then, oh gosh, all right, let's go to energy. So in energy, he's effectively saying, you know, basically regulation is the reason we don't have unlimited energy, because we prevented nuclear fission and nuclear power plants.

Of course, no discussion of any, uh, you know, of the nuclear catastrophes. Um--  

Timnit Gebru: This always upsets me, because I remember Yann LeCun used to always boast about, you know, France and how it's so green because of nuclear energy, like 70-something percent of it is powered by nuclear energy. And like, the uranium is not mined in the middle of the Louvre, right?

Like they're not like, that's not--Niger is the one that is bearing the brunt of all of those issues, whether it's pollution, whether it's radiation and all of these things. And no, but these people, you know, that's not, those are not the things they think about.  

Alex Hanna: Right, right. And I mean, it's interesting to also sort of think, because I mean, there could be a smarter discussion of nuclear energy, but it's also, you know, the fact that it's become kind of a stopgap.

And I mean, he calls it here a literal silver bullet, right? So in terms of nuclear fusion--which I feel like nuclear fusion is similar to AI, where we're always five years away from having fusion. But I, I'm quickly getting out of my depth talking about fusion.

Timnit Gebru: Yeah. I don't know anything about this stuff. 

Alex Hanna: All right. So let's see, there's "abundance." Let's see, is there anything you want to hit on here in "abundance"?

Timnit Gebru: It's kind of like the idea that they don't want to feel like there are any rules, like the conservation of energy, or they just don't want their time to end. You know what I mean? Like, it's just like they want more of everything forever, like, all the time. Yeah, I don't understand that concept. It's the same in the TESCREAL bundle. It's just like they want more expansion forever, always, no concept of enough. Like, don't you have the concept of enough of something in your life, you know, and sharing? Like, I don't get it.

Alex Hanna: There's an idea that effectively, if you have a pure market-based environment, you're going to solve everything for everyone. And I think the thing that really trips me up is that these people really have no concept of variance. Like, you know, yes, there might be a mean quality-of-life improvement, but there have been more destructive wars, more intense genocides. Uh, we have been able to kill people more efficiently.

There's a, there's an S.M.B.C. comic, which I think, uh, sums this up very well, where it's like, there's a guy talking to God, and God's like, oh, you know, everything's been improving, why are you all complaining? And he was like, uh, have you actually looked at the amount of variation there is?

And God looks at like the standard errors and he's like, holy shit. What the fuck, what the fuck has gone wrong?  

Timnit Gebru: And like, the planet is kind of burning, and, you know, climate catastrophe and such. I don't understand where that is in this whole analysis. Um, I, I think my gold in this whole thing--I don't know if this is the abundance section--is that, um, Silicon Valley code of paying it forward.

Do, do you remember, did you get that one? Do you remember that one?  

Alex Hanna: No, what is that?  

Timnit Gebru: "We believe in the Silicon Valley code of pay it forward. Trust via aligned incentives. Generosity of spirit to help one another learn and grow." If I had to think of one Silicon Valley code, right, like, pay it forward, is not, is not it. 

It's really not it.  

Alex Hanna: It's, we, 'I got mine, you fuck off.' Abstract Tesseract says, "He's looking for the Konami code of life. Up, up, down, down, left, right, left, right, start."  

Timnit Gebru: Yeah, yeah.  

Alex Hanna: I want to go down to the, uh, "becoming technological supermen" part, because this to me is the, like, most, like, outright fascist part of this. 

So it's, like, very much reading like, uh, kind of a Nietzschean, Übermensch kind of situation. Um, where it's saying, like, we need to become--"It is the most virtuous thing we can do to, uh, follow technology, to become these technologists." And then he has this statement here: "We believe that while the physical frontier, at least on Earth, is closed, the technological frontier is wide open."

So yeah, we've colonized all those people already. We're done. We've killed all the natives. Um, we can actually go and, uh, get past and kill all the, uh, non-tech people too.

Timnit Gebru: But we also can get the whole universe now. There's no limit, is what these people are saying, right? And the cosmists have, like--they say paradise engineering.

Like, once you become a post--this is why I was so mad reading this, because I'm like, I wrote a paper already. I'm done. I'm going to move on from these people. And it's like, you know, once you transcend--you know, you become a posthuman, um, that's what the transhumanists care about. And then the cosmists--like, it's not just enough to be a posthuman, posthumans have to do paradise engineering around the cosmos, right?

Alex Hanna: Yeah, right.  

Timnit Gebru: That's that. You'd be happy to know they have 660 million dollars of crypto money to save us from the existential risk of AI, but.  

Alex Hanna: Yeah, well there you go.  

Timnit Gebru: Good for all of us.  

Alex Hanna: And then he also has here, really here, about victim mentality. "We believe that we are, have been, and always will be masters of technology, not mastered by technology. Victim mentality is a curse in every domain of life, including our relationship with technology, both unnecessary and self-defeating. We are not victims. We are conquerors." I can just, I can just, like, see him snorting cocaine off a desk with, like, a hundred dollar bill right here. Like, this is some real Scarface shit. Yeah, yeah. All right.

Well, let's just, to finish off--there's some more terrible stuff here. He tells you to cite--

Timnit Gebru: The meaning of life.  

Alex Hanna: The meaning of life, the enemy, "The enemies here are--" 

Timnit Gebru: Bad ideas. 

Alex Hanna: The enemies here include everything--everything including existential risk, ESG, and tech ethics, all in the same breath.

Timnit Gebru: What is stakeholder capitalism? I don't even know what that is. 

Alex Hanna: Oh, that's like people who, they make investments, you know, that are basically in a way to--I mean, I've heard stakeholder activism, where they basically--but actually, no, that's like investing activism. Stakeholder capitalism, I think, is like trying to engage more people, rather than, uh, just doing whatever the fuck you want.

I don't know. Yeah. Um, and then at the end here, there's so much--there's just, like, the people who they cite, "the patron saints," like BasedBeffJezos, BayesLord, uh, Nick Land, just straight-up fascists, uh, Friedrich Hayek, Nietzsche--uh, I mean, I guess the bad parts of Nietzsche, I mean, well.

John Galt, who's not even a real person--that is the person in Atlas Shrugged that they reference, uh. Yeah, just like everybody, all the worst hits here.

Yeah.  

Timnit Gebru: That is painful. 

Alex Hanna: Uh, yeah, and IttyBittyKittyCommittee says, "I hate that they're wrapping Ada Lovelace in this horseshit." Yeah. Yeah.

Timnit Gebru: That's true. They probably thought that they had to cite a woman. And they were like, oh, they're going to come at us all those, those, you know, naysayers and horrible people. And now--  

Alex Hanna: Well, they've got, I think they've got maybe three women. They've got Deirdre McCloskey, who's a Chicago economist--is a trans woman, but still is a Chicago economist. Um, they've got one Black man that I know of, Thomas Sowell, the person that I was telling you about. Yeah. Yeah.

Found him, they found him.  

Yeah. Arcane Sciences says they're shocked that there's no Bostrom or Yudkowsky, and I think it's because he's actually kind of attacking them. 

Timnit Gebru: That's what I'm saying. So it's like a family infight. So these people, Bostrom, Yudkowsky and all these people, they cycle between utopia and apocalypse, and they're currently in their apocalypse cycle. So this is supposed to be anti them, but they're all in the same family. You know what I mean? They're going to talk it over in some party. 

Alex Hanna: Yeah. 

Timnit Gebru: You know, discuss it over and then they'll shake hands over a couple of million dollars and they'll be fine.  

Alex Hanna: Yeah.  

Timnit Gebru: Tens of millions of dollars.

Alex Hanna: All right, shall we move to hell? Um, can you give me a, can you, can you give me a prompt, Timnit? Because--

Timnit Gebru: Let the AI do it.  

Alex Hanna: Let the AI do it. This is my, this is my new hit single. Um, I forgot what the name of our band is, Ratballs. It's called, "Let the AI do it." 

 (singing) Let the AI do it. Let the AI do it. Let the AI do it. Let the AI do it. I hope you were wondering how to solve climate change. Let the AI, the AI do it. I was wondering how you understand how to build the range. Let the AI let the AI do it. 

Hey, all right, let's move.  

Timnit Gebru: All right. 

Alex Hanna: All right, so the first one is this article. Do you want to read the title of this article Timnit? 

Timnit Gebru: I suppose. "NYC subway testing out weapons detection technology, Mayor Adams says." Yeah.  

Alex Hanna: This is in CBS New York, by Marcia Kramer. Um, and basically Mayor Adams, who is a wannabe tech bro, former NYPD, has now, uh, said that they are piloting a program, uh, that would detect guns, um, and he is quoted saying, "Would I prefer us not having to walk through this to come on our system? You're darn right, I do." And it's this kind of a detection, metal-detector-looking system. "But we have to live life the way it is and work to make it what it ought to be."

Oh, gosh. Yeah.  

Timnit Gebru: This kinda seems like a continuation of the prior manifesto we were reading.  

Alex Hanna: It really is. I mean, yeah, Adams is, yeah, someone that is just increasing police funds, has, you know, closed libraries in New York on Sundays, and has really just gone all in on any kind of--

Timnit Gebru: How is this supposed to work, do you know? 

Alex Hanna: There's a video here in this, and for those of you on the pod, it looks like a metal detector that people are supposed to go through, and it's supposed to detect guns. Um, our producer, Christie Taylor, basically said even the company who was selling this stuff is like, this is actually not a use case where this should be used.

Right. And it reminds me a lot, um, about, uh, the, um, the little robot cop that they had at Times Square, um, where effectively they still needed two cops to guard it, uh, because too many people were, um, yeah, effectively fucking with it. Um, yeah.

Timnit Gebru: How's this gonna work during rush hour? Like, where, I mean, I don't even understand. 

Alex Hanna: Yeah, it's, it's confusing. Uh, all right, moving on.

Timnit Gebru: Also New York.  

Alex Hanna: New York, while we're on this, on this tip--uh, this is a report from The Markup, um, which was, um, co-written with, um, uh, Documented and The City, um, which are both nonprofit newsrooms in New York. Um, the title is "NYC's AI chatbot tells businesses to break the law."

"The Microsoft-powered bot says bosses can take workers' tips and that landlords can discriminate based on source of income," by Colin Lecher. Yeah. And so this went around and Emily wrote a, uh, a Twitter thread and a, and a Masto thread, we'll link in the show notes. Um, but it's effectively about this chat bot, which is just telling business owners to break the law. 

Um, Uh, let's go down to the table here, which is great. So the question submitted, "Are buildings required to accept Section 8 vouchers?" The chat bot says, "No, buildings are not required to accept Section 8 vouchers." The reality, "Landlords can't discriminate by source of income."  

Um, and then, uh, another question: "Can I take a cut of my workers' tips?" And the chatbot says, "Yes, you can take a cut of your workers' tips." Yeah.

Timnit Gebru: Wa, wage, wage theft is fine.  

Alex Hanna: Wage theft, yeah.  

Um, and then, uh, can I, and then lastly, "Can I keep my funeral home pricing confidential?" "Yes, you can keep your funeral home pricing confidential," uh, whereas the reality is the FTC has outlined, uh, outlawed concealing funeral prices. 

Uh, yeah, so it's, it's, yeah, just incredible nonsense, nonsense here.  

Timnit Gebru: We never asked for this.  

Alex Hanna: We never asked for this and yet. All right, Timnit, do you want to talk about this one?  

Timnit Gebru: Oh, DrugGPT. "New AI tool could help doctors prescribe medicine in England." What? Uh, okay. "New tool may offer prescription safety net and reduce the twenty--thir--237 million medication errors made each year in England."

So I'm assuming whatever they're calling AI is also ChatGPT here. And so given that we have seen, um, the accuracy with which the prior ChatGPT chat bot has been, uh, recommending things to people, I'm imagining how this is going to work out.  

Alex Hanna: Yeah, I mean, I think they're saying, um, here, "This new AI tool, developed at Oxford University, aims to tackle both problems. DrugGPT offers a safety net for clinicians when they prescribe medicines and gives them information that may help their patients better understand why and how to take them."

Um, so it's effectively, they're going to prescribe, um, and then, "Doctors can get an instant second opinion by entering a patient's conditions into the chatbot. Prototype versions respond with a list of recommended drugs and flag up possible adverse effects and drug-drug interactions." This seems like a real nightmare here, the idea, and yeah, this is pretty bad already, where it says, "Some doctors already use mainstream generative AI chatbots, such as ChatGPT and Google's Gemini, to check their diagnoses and write up medical notes or letters. Meanwhile, international medical associations have previously advised clinicians not to use these tools, in part because of the risk that the chatbot will give false information, or what technologists refer to as hallucinations."

Timnit Gebru: My friends, some of my friends were telling me that a lot of doctors don't know--like, a lot of this stuff is being pushed all over hospitals.

Like, yeah, you have to use ChatGPT somehow, like you have to figure out how to use it, incorporate it. And they sort of assume that it must have been trained with some knowledge, some facts, you know, some kind of thing in medicine, rather than just like straight up being, you know, a large language model. 

So they kind of are misled into believing that these things are trained with something more, like they must have some information that we don't know, kind of thing, you know. And it's, it's worrisome. It's a nightmare.

Alex Hanna: Yeah, and ArcaneSciences points out that "finding drug interactions is already something that's being done properly with software, without weird and nondeterministic AI stuff mixed in," right. And I mean, there's also, you know, evidence from folks like Roxana Daneshjou, who was on the pod a few episodes ago, where she talks about how LLMs already do things like perpetuate medical myths about Black people, and, you know, things around, um, like, differentiating measurements--I forget what all the metrics are called--but they're just mimicking this stuff.

Timnit Gebru: That's, that's really, you know, my issue with this whole thing: before OpenAI pushed the whole artificial general intelligence thing so far, large language models, LLMs, were not--people weren't using them as, like, this end-all-be-all thing, right? And now they've pushed it as an end-all-be-all thing to be pushed in every single sector.

Alex Hanna: Yeah. Yeah. All right. Oof. All right. Well, we got two more. So this one, also from the Guardian: "Wearable AI: will it put our smartphones out of fashion?" And this is by Callum Bains. It's got some kind of stock picture of a woman or a femme-looking person wearing something that looks like weird goggles--like a Geordi visor from Star Trek: The Next Generation, but clear. And she's making the loser sign with her hand.

That's--  

Timnit Gebru: Looks like it has some circuitry on it or something, the goggles.  

Alex Hanna: Yeah. Something, something about it. It looks like, honestly, kind of like cool-looking squash eye protection. Um, yeah. So the journalism here is very breathless. It's very puff piece. Um, so it's written, "Imagine it. You're on the bus or walking in the park when you remember some important task that has slipped your mind. You were meant to send an email, catch up on a meeting or arrange to grab lunch with a friend. Without missing a beat, you simply say aloud what you've forgotten, and the small device that's pinned to your chest or resting on the bridge of your nose sends the message, summarizes the meeting or pings your buddy a lunch invitation. The work has been taken care of without you having to prod the screen of your smartphone."

Uh, and then it basically shows this pin, which is called, quote, "the AI Pin," from the California startup Humane. Um, and you pin it to your shirt like a magnet; it can send texts, make calls, take pictures and play music, um, and it uses a laser to project an interface on your palm, and then this inbuilt "AI chatbot can be instructed through voice commands to search the web or answer queries in much the same way you expect out of ChatGPT."

Oh dear.  

Timnit Gebru: Is this the one you said was recording us? It's supposed to be recording you all the time?  

Alex Hanna: Yeah, so this is what, um, Hypervisible, Chris Gilliard, said on, um, said on Bluesky when he highlighted this--and this is another device--he said, uh, "These small devices are designed to dangle around your neck and passively record everything you hear and say during the day, before transcribing and summarizing the most important bits for you to read back at your convenience later."

Timnit Gebru: To me, this, this, besides the whole nightmare of the recording everything all the time kind of thing, it's like this idea of not thinking about the amount of work, the amount of data, the amount of energy it takes to do each of these little tasks. It's really like a continuation of Marc Andreessen's weird manifesto, right? 

Boundless. Let's not think about limits, let's just, you know, think about, like, doing these things. And let's not think about how much time, energy, data, resources it takes, because apparently, you know, things are infinite and we don't have to think about resources.

Alex Hanna: Yeah. And it's, it's just, it's frustrating. 

And this is just, terrible journalism from the Guardian.  

Timnit Gebru: It's the whole, like, internet-of-things, uh, drive too, you know. Wouldn't it be nice if your table could talk and your fridge could order food? It's like, I want my table to be a table. I don't want cameras and I don't want--

Alex Hanna: It's so hard. I feel like they're putting it in everything. I mean, it's so hard these days to find something that is not, quote-unquote, smart.

And lastly, uh, this is from Prison Culture, or Mariame Kaba, on Bluesky. Uh, it's a website called TheCurricula.com, and, uh, it's got some very, um, terrible art on it, which looks like maybe some AI-generated kind of mashup of, um, that very famous, uh, image, the wave, with maybe Starry Night. I don't know.

Um, and she says, "This website purports to help you learn quote 'anything' by generating a guide and resources. All the content is generated by AI. I entered a couple of topics and it's more miss than hit. I found that I would recommend maybe 25 percent of what was suggested." And--  

Timnit Gebru: 25 percent is, is higher than I would expect, actually.

Alex Hanna: Yeah. And it's basically, yeah, I mean, it's, I don't suggest going to the site. I don't want to give them the clicks.  

Timnit Gebru: No.  

Alex Hanna: Yeah.  

Timnit Gebru: I mean, like, remember that, um, one where there was a self-published or some book published on Amazon on mushrooms? It was ChatGPT generated.

It was super dangerous, because it was telling people that certain mushrooms are safe which aren't, and all this stuff. There's been a continuation of that.

Alex Hanna: Yeah. All right. Well, I think it's time to go, Timnit. It was a pleasure today. That's it for this week. Our theme song was by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks, as always, to the Distributed AI Research Institute.

If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify, and by donating to DAIR at DAIR-Institute.org. That's D-A-I-R hyphen institute dot O-R-G.

Timnit Gebru: And you can find us and all our podcast episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream, as you did today. 

That's Twitch.TV/DAIR_Institute. Again, that's D A I R underscore institute. I'm Timnit Gebru.  

Alex Hanna: And I'm Alex Hanna. Stay out of AI Hell, y'all.