Science Straight Up

The Ethics of Emerging Technology: The Era of Artificial Intelligence--Dr. Teresa Head-Gordon

June 20, 2024 | Season 5, Episode 2
Dr. Teresa Head-Gordon, with hosts Judy Muller and George Lewis

Artificial Intelligence (AI) and machine learning (ML) are relatively new, powerful, and disruptive technologies that are rapidly entering practice in our daily lives and shaping our future in areas ranging from employment and health to politics and what it means to be human. This talk, by Dr. Teresa Head-Gordon of the University of California, Berkeley, considers the current status of AI and ML and the ethical considerations that can guide us to finding the best in this emerging technology while mitigating its potential abuse. Veteran broadcast journalists George Lewis and Judy Muller moderate this episode.


“Science Straight Up,” Season 5, episode 2

“The Ethics of Emerging Technology: The Era of Artificial Intelligence”

Dr. Teresa Head-Gordon, University of California, Berkeley

 

(Theme music)

 

(George) From Telluride Science…this is “Science Straight Up.”

 

(Judy) Artificial intelligence and other emerging technologies…a boon to mankind or an existential threat? Our guest, Dr. Teresa Head-Gordon, takes a generally optimistic view of technology, although she warns that we have to be aware of the potential pitfalls and we have to pay attention to ethics when we develop and use things like A.I. I’m Judy Muller.

 

(George) And I’m George Lewis. Every year, high up in the Rockies, Telluride Science brings together between 1,200 and 1,400 researchers for a series of workshops, brainstorming sessions and exchanges of ideas on the cutting edge of science and technology. They share some of those ideas with the community in a series of what they call “Town Talks.” This one was recorded at the Telluride Conference Center in Mountain Village, Colorado.

 

(Judy from Town Talk) We're really excited about the speaker this evening and the topic so we're gonna get right to it.

 

(George) Tonight's topic is one that affects all of our lives. Science fiction has been warning about the ethical implications of artificial intelligence for a long time. Now, who can forget that famous line from “2001: A Space Odyssey” when HAL the computer refuses to follow orders?

 

(dialog from “2001”) 

            (Dave) Open the pod bay doors, HAL.

            (HAL) I’m sorry, Dave, I’m afraid I can’t do that.

 

(George and Judy from town talk, in unison) “I can’t do that.”

 

(Judy) I thought you’d want to join in.

 

(George) Yeah, you wanna join in?

 

(Judy) And in the world of nonfiction, as in reality, a recent article in The Atlantic titled “This Is What It Looks Like When AI Eats the World” said, quote, “the web itself is being thrown into the great unknown.”

 

(George) Thank goodness for people like Dr. Teresa Head-Gordon, who are out there dedicated to finding safe, principled ways of using AI. Dr. Head-Gordon is a professor at UC Berkeley who holds a BS in chemistry and a PhD in theoretical chemistry. She has been involved in the study of machine learning from its early days in the 1990s, and teaches a course called “The Ethics of Emerging Technologies,” which is required for bioengineering and data science majors.

 

(JUDY) And after her brief presentation, George and I will ask her a few questions. And then we're going to open it up to the audience. So be thinking of questions you might want answered. And now please welcome Dr. Teresa Head-Gordon. (applause)

 

(TERESA) Thank you, Judy and George, and everyone for coming out here today. The technologies that we make almost always have the intent of doing something good for society. And so those technologies are things like the fact that we want to make new chemicals and new drugs for human health, things like genetic research, you know, IVF, which has benefited many couples, and the fact that our planet is under stress and we want to develop energy technologies to make the world a better place to live in. It turns out, though, that we never quite anticipate how those technologies are actually going to impact the public.

 

(GEORGE) And the technology that everyone’s talking about right now…artificial intelligence, A.I.  It’s increasingly working its way into our lives, and it’s getting smarter, day by day. And while it’s proving useful to many people, there are the downsides.

 

(TERESA) It turns out that a lot of automation has hit the blue-collar sector pretty hard. But if we are going to be able to start turning over human-type reasoning and decision making, then it's going to be the white-collar workers that are going to be affected next. And then there's AI and privacy, which is that data's got to come from somewhere. And sometimes it comes from us, okay, without necessarily our permission.

 

(JUDY) And now, we have the problem of AI deepfakes.  Artificially generated voices and images created with deception in mind.  In early 2024, Democratic voters in New Hampshire received a phone call from a voice sounding like President Biden, urging them not to vote in that state’s primary.

 

(“Biden” deepfake audio) It’s a bunch of malarkey. We know the value of voting Democratic when our votes count. It’s important you save your vote for the November election. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not next Tuesday.

 

(GEORGE) The creator of the Biden deepfake was caught and faces criminal charges and millions of dollars in fines. But deepfakes continue to proliferate, many of them created overseas, out of the reach of U.S. law.

 

(JUDY) Then there’s the question of racial and gender bias in A.I.  Investigating this, reporters from Bloomberg created resumes for fictitious job candidates and fed them into an A.I. program that can help companies with hiring decisions.  What they found was the A.I. was terribly biased, selecting white males for the top jobs and recommending women and minorities for the lower paying positions.

 

(TERESA) And so these are the top-paying jobs, and we can see architects and lawyers and everything down to engineers, and then the low-paying jobs: janitors, cashiers, and housekeepers.

 

(GEORGE) The problem is that when the AI searches today’s top earners, it finds mostly white, middle-aged males.

 

(TERESA) We can't really fault AI for this one, which is that it turns out that 80% of CEOs and lawyers actually are white. It's just reflecting our biases back to us. There's probably a lot of reasons behind this, right? But it's something that worries us in regards to how we actually deploy these AI technologies so that we're not just sort of exacerbating or amplifying that bias.

 

(GEORGE) Some in the A.I. field want to make it kinder and gentler.  Give it a touch of empathy.

 

(TERESA) There's this idea that actually started in 2023, which is what's called open empathic AI. And the idea is, remember, those of us who've been lucky enough to see the Mona Lisa wonder what she's thinking, you know, is she happy, okay, is she sad, is she indifferent? And the thing is that this is going to start labeling data, not for factual representation, but emotional representation. And so what we're going to do is create AI systems that are learning on data that now imbues them with things that are human-like, okay, emotional intelligence.

 

(JUDY) And then there’s the big question: will AI get so smart that humans will become obsolete?

 

(TERESA) This is the idea that these systems are becoming so powerful that they're actually going to surpass all levels of human performance, with the possibility of actually superseding the human race altogether.

 

(JUDY) Here’s what computer scientist Geoffrey Hinton, known as the “godfather of A.I.,” said on the CBS News program “60 Minutes.”

 

(HINTON FROM 60 MINUTES) I can’t see a path that guarantees safety. We’re entering a period of enormous uncertainty. We’re dealing with things we never dealt with before. And normally, when you’re dealing with something novel, you can’t afford to get it wrong with these things.

(REPORTER) Can’t afford to get it wrong, why?

(HINTON) Well, because they might take over.

 

(GEORGE) Teresa Head-Gordon says there’s another scenario…one in which A.I. doesn’t take over immediately but slowly eats away at the things that make us human.

 

(TERESA) It's really more about a degradation, a slow degradation that's going to happen to what it means to be human. For example, we hone ourselves on making judgments. We make judgments every day, right? And hopefully as we get older, we get better, right? That's our hope. And it turns out that if we farm that out to algorithms, we're not going to be able to hone that as well. Our judgments might get poorer, and we'll lose some of that autonomy. Another example is the fact that, you know, chance is something that plays a huge role in our lives. In science, we know this as serendipity, right? Serendipitously, I discovered something. I didn't intend to essentially do that experiment, but somehow I did, and something miraculous came out. The fact that we meet our partners, you know, by some random event that just sort of brought us together in space and time, all of those things, when they're replaced by optimization and planning and prediction, may essentially kind of destroy that aspect of, you know, the things that make life interesting. Okay, the unexpected.

 

(JUDY) And then, what happens to future generations of humans?

 

(TERESA) We see it at the university level, which is, you know, trying to get our students to read and write. And the thing is that, you know, they're using these tools. And we're really worried about their ability to think critically. And they're the future. You know, I'm going to be gone off this planet one day, and I'm going to be turning it over to the next generation. And if they're losing that critical thinking ability, it's something to worry about.

 

(GEORGE) Congress, struggling with the idea of how to put up regulatory guardrails to ward off the potential negative impact of A.I., summoned tech leaders like Mark Zuckerberg and Elon Musk to Capitol Hill.

 

(TERESA) Mr. Musk, who has called for a moratorium on the development of some AI, was the most vocal; he's the one that has the most anxiety around AI risk. And so he believes in the existential crisis, and what he believes, with the creation of his company called Neuralink, is that the best way we can control AI is to fuse with it, okay, to create human-machine interfaces that allow us to at least have some control, okay, on the underlying AI. And then I'd like to thank you all for coming to today's town talk to be an educated citizen. And thank you very much for listening. (applause)

 

(JUDY) Well…if we weren’t terrified before…

 

(GEORGE) How much longer do we have as the human race?

 

(TERESA) (laughs) So actually, that is one area of speculation: what is the timescale, not for the end of the human race, but essentially for when things are going to start impacting us noticeably? And so, as I was trying to outline, with the existential crisis, some feel it is here now, and others think that it's really far off. But I think that the second part, the more kind of creeping-in part, is here now.

 

(JUDY) The fact that Elon Musk is terrified of this, you know, terrifies me, because I don't want him putting a computer chip in my brain. What I'm wondering, I saw the art that it produced, which pulled on images already out there. Will AI ever be able to produce original art or music? Will it ever have a sense of humor, which would be dangerous? Because I would follow it anywhere. I mean, I'm not talking about puns and jokes. I'm talking about repartee. What do you think?

 

(TERESA) So right now, it is rather humorless, that's for sure. But remember the empathic AI that I was speaking about, with new annotated data that speaks to emotional intelligence. People who have a good sense of humor have good emotional intelligence. And so what I'm trying to say is that I think it will get better.

 

(GEORGE) Judy and I were discussing this earlier, and she said AI can't write jokes. So I went to Perplexity AI and said, “Give me a joke about AI.” And Perplexity said, “An AI walks into a bar. The bartender says, ‘We don't serve robots.’ And the AI says, ‘How about androids? I self-identify as an android.’” There's a real thigh-slapper.

 

(JUDY) That’s the joke?

 

(GEORGE) That’s the joke. And I thought to myself two things: one, that my wife, Judy, is always right, which is a good thing to say if you want to keep a happy marriage; and two, that Jerry Seinfeld, Jon Stewart, and Sarah Silverman are not going to be out of a job anytime soon due to AI.

 

(TERESA) That doesn’t work, but if you say it's not good at humor, there'll be lots of effort put into trying to get it to be better at humor. So as I said, I think that empathic AI, the kind that started in 2023, will produce the sort of systems that are able to tell jokes or do other things that we consider to be in the realm of human emotional intelligence.

 

(JUDY) You mentioned, of course, that AI is already having an impact on the workplace. And as journalists, we are seeing it in the news business everywhere. Sometimes it's okay. But the deepfakes, of course, are extremely concerning. I'm a big proponent of teaching news literacy starting at very young ages in the schools. It's not fast enough; it's not keeping up, teaching students to look critically at what they're seeing in their media. And I just worry a lot, as educators, that these young people are not learning how to look at an image and decipher it. What do you think?

 

(TERESA) I think that we are starting to recognize it. For example, when we teach our courses, we actually have to put in a warning about using ChatGPT to finish their essays. What I'm trying to say is that I think we're trying to catch up, trying to grapple with this new technology, to make sure that students are being trained in critical thinking. But I think what you're also bringing up is our ability to detect, you know, the deepfake, and like I said, that's going to get harder and harder. So we're going to need some kind of regulation that somehow, you know, provides a watermark or something, so I think some of the technologies will have to help us here.

 

(JUDY) As a professor, have you been able to, well, you probably don't have a lot of writing assignments. But if you do, can you detect ChatGPT work, as opposed to a student's original thinking?

 

(TERESA) Ethics, that's an essay-based course. So what I do is tell the students: prompt ChatGPT to write the first solution, and then I want to see your solution. That's a good way to do it, essentially, because then they've already run out of the ability to do it that way. It turns out that a lot of our students, and so many of us in this room are computational chemists, write software. And a lot of my students are actually starting to kind of get going on a program by using ChatGPT to generate some rudimentary software, which they then, you know, add on to and refine. So it's infiltrating even things outside of essay-based courses.

 

(GEORGE) When AI messes up, who’s responsible? Is it the developers of the AI? The companies employing the AI? Or the AI itself?

 

(TERESA) Who's responsible? Well, it's obviously not the AI itself. Because…

 

(GEORGE) I guess you can't sue an AI program.

 

(TERESA) Well, also, you know, in a way, machine learning is about as dumb as it can be, right? And it has to be taught, right? So therefore, whoever the teacher is is the one that's actually at fault.

 

(JUDY) We’re running out of time and I want to leave some time for the audience.  Yes, go ahead.

 

(AUDIENCE QUESTION) What's the role of just technological consumerism, and the pace at which that's happening relative to AI and the questions we have about AI?

 

(TERESA) I teach ethics of all kinds of emerging technologies, not just AI, and one of the complaints is: why does technology have to solve every problem? You know, why don't we just kind of conserve and walk more lightly on the planet? That's often an argument against energy technologies, which is that we could do more ourselves instead of just trying to have a technological solution to everything. And I think that that's what you're trying to say. But the thing is, technologies always get better, and scientists and engineers want to make the world better. And so technology is going to march forward. And I think that the purpose of the talk today is to get ready.

 

(QUESTION) It feels like social media has been the training wheels for this. Like, haven't we seen the problems that that's fomented? I guess my question is, since this could take it to such a higher level, who is the right body to regulate, if we were to choose regulation?

 

(TERESA) Well, we have what we have right now, which is our regulatory agencies at a government level. And then I think there's also, again, trying to appeal to people's altruism, both in the commercial sector and in, you know, universities trying to train the next generation of engineers and scientists to try to be ethical about this subject. So I think it's kind of personal, you know, kind of ground-level stuff. And then also just things where you've just got to have a hammer, you know, to get bad actors to behave. And the other thing I kind of wanted to mention about emerging technologies: every new emerging technology, it doesn't matter if it's genetic, or energy, or whatever, always has this fear stage. And so what I'm trying to say is that everything kind of settles down. I don't mean to be benign about AI. I'm just saying, I've seen this many times, that there's this fear. And I think what we just have to do is try to get our arms wrapped around it through, you know, regulation and education, and trying to do the right thing. And I'm hoping it will settle into the landscape like all these other technologies.

 

(GEORGE) The Surgeon General has just recommended that social media come with a warning, like the warning on cigarettes. Does AI need to come with a warning? 

 

(TERESA) Yes, I think it does. And that one, again, it's just, you know, there's nothing in place. It's social media, but it's not AI, the technology itself.

 

(JUDY) I was thinking today about this, about the things I did today with my phone. You know, my bank recognizes my face and I go into my account. I heard a bird song and I wanted to identify it; it was a meadowlark. I mean, 10 years ago, this would have been unheard of. So it's happening so fast. And it is good stuff. Yeah. But the race to the bottom worries me, in competition with China, Russia, Europe. I mean, how do you see that going?

 

(TERESA) Well, again, I do want to say that all technologies have good sides and bad sides. And I really, really don't think that this is that different from the things that we feared before. But I really liked that you're emphasizing the good stuff. You know, that's the other thing to think about. I actually gave the students a debate on Neuralink. Imagine all of a sudden all of us had exactly the same basic level of knowledge. In other words, you didn't have to go to college; we all have it at our fingertips. Imagine the advances that we might be able to make, okay, where creativity doesn't have to spend two or three decades accumulating knowledge that's been known for a long time but that the individual's got to absorb. So I just wanted to say there are good things, good things that could happen with a powerful technology. And then there's the question of how we're going to talk to each other. One of the things I like about natural language processing is that we might be able to start talking to other cultures, like China, you know, for example, with a better understanding, because we won't have that language barrier anymore.

 

(JUDY) I like ending on that note, a little bit. And that is all the time we have. (applause) 

We want to thank our sponsors, the Telluride Mountain Village Homeowners Association and Alpine Bank, and let's give a big hand to Professor Teresa Head-Gordon. (applause)

Thank you. (THEME MUSIC UNDER)

 

(GEORGE) And a big thank you to our audio engineer, Colin Cassanova. This has been a presentation of Telluride Science. The executive director is Mark Kozak and Cindy Fusting is managing director.

 

(JUDY) Annie Carlson runs donor relations and Sara Friedberg is lodging and operations manager. For more information, to hear all our podcasts, and if you want to donate to the cause, go to telluridescience-dot-org. I’m Judy Muller.

 

(GEORGE) And I’m George Lewis, inviting you to join us next time on Science Straight Up. (THEME MUSIC UP AND THEN FADE OUT)