irResponsible AI

🌶️ Cutting through the Responsible AI hype: how to enter the field | irResponsible AI EP2S01

June 04, 2024 | Season 1 Episode 2 | Upol Ehsan, Shea Brown

Got questions or comments or topics you want us to cover? Text us!

It gets spicy in this episode of irResponsible AI: 
✅ Cutting through the Responsible AI hype to separate experts from "AI influencers" (grifters)
✅ How you can break into Responsible AI consulting
✅ How the EU AI Act discourages irresponsible AI
✅ How we can nurture a "cohesively diverse" Responsible AI community

What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.  

🎙️Who are your hosts and why should you even bother to listen? 
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI. 

Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives. 

Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan 
Shea: https://www.linkedin.com/in/shea-brown-26050465/ 

CHAPTERS:
0:00 - Introduction
0:45 - The 3 topics we cover in the episode
1:52 - Is RAI all hype?
4:14 - Who to trust and who to ignore in RAI
10:35 - How can newcomers to RAI navigate through the hype?
13:36 - How to break into responsible AI consulting
15:56 - Do people need to have a PhD to get into this?
18:52 - Responsible AI is inherently sociotechnical (not just technical)
21:54 - Why we need "cohesive diversity" in RAI not just diversity
23:57 - The EU AI Act's draft and discouraging irresponsible AI
27:26 - We need Responsible AI not Regulatory AI
29:03 - Why we need early cross pollination between RAI and other domains
31:39 - The range of diverse work in real-world RAI
33:20 - Outro

#ResponsibleAI #ExplainableAI #podcasts #aiethics

Support the Show.


Upol Ehsan (00:01.271)
Welcome to irResponsible AI, a series where you find out how not to end up in the headlines of the New York Times for all the wrong reasons. What better way to learn what to do than by knowing what not to do? My name is Upol and I make AI systems explainable and responsible so that people who are not at the table do not end up on the menu. Whatever I say on this series is entirely my own opinion and has nothing to do with any of the institutes I'm affiliated with, like Georgia Tech and Data & Society. I'm joined by my friend...

Shea Brown (BABL AI) (00:29.538)
I'm Shea, an astrophysicist turned AI auditor, and I ensure companies do their best to protect ordinary people from the dangers of AI. I'm also the founder and CEO of BABL AI, an AI and algorithmic auditing firm, but like Upol, I'm just here representing myself. So, Upol, what do we want to talk about today?

Upol Ehsan (00:50.603)
Well, we actually have a very interesting set of things to talk about, because some of the topics were kindly suggested by our audience members. So we're going to talk about three things today. First of all: is RAI, Responsible AI, all hype and no game? That's one thing: how do we distinguish between the people we should pay attention to and the people we shouldn't?

The second thing is: how can we break into Responsible AI consulting, or RAI consulting? So that's number two. And number three: the EU AI Act's draft has come out, and how does it actually discourage irresponsible AI? What's new about it? What's not so new about it? So those are the three things we're going to talk about. And it will be a challenge, because we have challenged ourselves to limit each topic to 10 minutes. Let's see how good academics are

at actually making that happen. So let's start with it, Shea. What do you think: is RAI all about the hype?

Shea Brown (BABL AI) (01:47.134)
Yeah, good luck.

Shea Brown (BABL AI) (01:56.522)
So no, but I think there's a certain amount of hype around Responsible AI, and it depends on what we mean. The content of Responsible AI, and I think you'll probably agree with me there, the content is definitely not hype; it's something that's required and needed. However, the ecosystem that is growing up around Responsible AI has a lot of hype associated with it. And I'm curious to hear your take on

exactly what that means, because we're seeing a lot of people who are becoming Responsible AI experts, or are on their way to becoming Responsible AI experts. But how do we know? How do we know if someone's an expert?

Upol Ehsan (02:40.899)
That's a good point. First of all, I agree with you: I don't think all of it is hype. I agree with you on that fundamental point. But I also want to highlight that there's a lot of noise, and this is where we need to be careful: there has to be a good signal-to-noise ratio. The space has been somewhat taken advantage of by certain grifters, as I call them. So there is that part, where there are people...

who don't necessarily have either lived experience or credentials, and these are ORs, not ANDs, right? Because you could have multiple ways of gaining expertise, and the space is definitely very diverse and needs a lot of diverse voices. But there is this aspect where I'm starting to see a lot of, quote unquote, what I call the AI influencers, right? And they are trying to basically recycle content,

not even with attribution. That was the other part. I have seen someone steal my content when they could have just attributed it on their shiny little carousel that they built using Canva, which is all good; I build carousels myself, they're nice to look at. But there is this aspect I'm starting to see, which is maybe why there is a bit of hesitancy, a snake-oil flavor, in how the outside world looks at Responsible AI.

And again, we are community members; we have earned the right to be self-critical. This is not about criticizing others, it's about our own community: how can we make it a more inclusive, welcoming, robust and trustworthy community? I wanted to ask you: when you look at things, who do you listen to and who do you ignore? How do you make that judgment?

Shea Brown (BABL AI) (04:28.29)
So that's really tough. I think clearly, if I know people have done something in the field, that makes me pay attention a little bit more. And I'll name some names. For instance, there's a whole level of academics who have done research in this area who I respect and know, people coming from the FAccT community, the Fairness, Accountability, and Transparency community.

But then there are business people. For instance, Kathy Baxter is one I felt was somebody who is not necessarily an academic, but does a really rigorous job; she came in very early, working with Salesforce, and kind of led the way on the business side. So when I see content from people like Kathy, I'm like, okay, I'm going to pay attention to this. But there are increasingly a lot more people talking about these issues,

and it's unclear exactly what they have done in the field, whether that be research or working with a client. Now, you're right, you have to be careful, because you want to be inclusive, you want everybody to be involved. But it is hard. It's hard to understand exactly what they're bringing to the table: is it just their unique perspective, or is there something a little bit more rigorous in terms of the kind of thought that they've had or the research that they've done?

And I don't think we have an answer. I don't think I have an answer. But what's the signal to you? Like, what do you see when you look at this field and when you're evaluating people? How do you feel like, oh, I can trust that person?

Upol Ehsan (06:14.199)
That's a really good question. And you know, we're all asking the hard questions, because that's the series that we have somehow built up. There is no science to this, no guaranteed answer. But what I look at is very similar to yours, and I agree with you: Kathy's work is fantastic. I think anyone who has done industry-level work at that scale has to be listened to, and we should listen to them; they've done phenomenal work. So that's what I often look at: what have they done versus what are they saying?

You can say a lot of things in this world, because there's content out there; people can even recycle stuff from our series, right, and claim it to be theirs. So: have they done something? When I look for "what have you done," there are many markers you could have. You could have publications; for instance, I publish regularly at FAccT. There are things I have done that have informed Responsible AI policies at large organizations, and you can find my work cited there. So that's a marker for you.

Let's say BABL is there; you guys are auditing left and right, so there's a proven track record that this person knows what it is to be in there. I do consulting through my company, Human Standard X, and I consult with Fortune 500 companies. If someone wants a referral, I'll give it to them, and my clients are more than happy to talk to them. So I think, as a newcomer, when we look at

what is being said, we always have to ask: what have they actually done? And if they have done something, which you can just Google, then you know, okay, this person is maybe saying some reliable things. Now, I do want to caveat something. People have pivoted; we have all pivoted throughout our lives, right? So we have to be very kind to those who are pivoting. And that's really great: pivot all you want.

But at the same time, just because you pivoted on day one, don't act like the expert. That's the problem, right? When you rebrand, there is a learning curve. For instance, you and I both went through a rigorous PhD journey, an academic journey, and we've earned our badges, so to speak; and there are people who went through industry and have earned their badges, right? So that's the other part: people should be pivoting, that's okay. And that doesn't mean everyone has to have published work; that's not what we're saying. We're saying that

Upol Ehsan (08:35.131)
if you see a consistent history of people just saying things with nothing to show for it, consistently over a period of time, not just day one or month one, maybe over a period of six months, then you've got to ask: what have you actually done? And I think it's a very simple thing, right? If you have done something, then great. If you have not, then you're more than welcome to learn. But what you cannot do is mislead people.

And remember, when you and I were talking the other day, we mentioned that people are rediscovering a lot of the wheel, and that's fine. You can be better communicators, and we need communicators. I often think about how news sites have aggregators, right? We need aggregators in RAI; the field is too fragmented and sometimes getting too broad. But aggregation does not mean expertise. There's a big difference there. So,

Shea Brown (BABL AI) (09:21.708)
Yeah.

Upol Ehsan (09:30.999)
contribution is not the same as noise, or even aggregation. And you know, one of the things I often look at as a sign of expertise is: are you crediting others? If someone credits others, it's a very important sign that they are comfortable being who they are, that they know the etiquette and the norms, and that they're ethical. In fact, I've seen more experts

cite other experts. If I were to say something and you had said something similar, I would cite you, because I'm also leaning on your expertise. Versus if I'm trying to hog all the spotlight and make it seem like the graphic I took for that carousel isn't from a report some big organization released, without citing them; I think that's also intellectual theft. So we need to be careful about that, and especially businesses need to be careful about that. I wanted to ask you, Shea:

some of our viewers asked this question, and I don't know the answer, but I wanted to have a discussion. For someone just about to enter Responsible AI, how do newcomers figure out who to listen to and who to ignore? Is there a heuristic we can share with them, no guarantees, but maybe the heuristic you used when you first entered the space yourself? That's another way of thinking about it.

Shea Brown (BABL AI) (11:00.478)
Yeah, I think so. I mean, what I look for when I'm engaging with people's content, when I'm reading their content online, which is where a lot of this happens, places like Twitter (or X) or LinkedIn, is some sort of contribution above and beyond just the repurposing of material.

Well, there are some ways in which repurposing material is really valuable; there are things I would have missed, and I do that a lot myself. So if I see somebody consistently posting really valuable things, even if it's not necessarily their own work, I start to trust that they're going to find things that are interesting, because they're tracking relevant changes or news in the field. But then, if I see that there's some analysis on top of that,

and this is something we might get into later, about getting into the field, demonstrating that you've done some thinking above and beyond that, that you can comment or analyze and it's not just a reposting, then I think: okay, this person has some insights, they have some experience, and there's more trust there that they're not just reposting something; they've actually read it, they've considered it.

And then, of course, there are other signals, like actual publications or white papers or real original work or original research. And it doesn't have to be academic; you and I both come from academia, but that's not the most important thing to us. It does have to be something original, though. And that's where you really start to trust that they're doing something brand new and contributing to the field. For me, those are the tiers

Upol Ehsan (12:30.871)
Mm-hmm.

Shea Brown (BABL AI) (12:52.382)
of trust, I would say, that I would fall into.

Upol Ehsan (12:57.447)
Actually, how are we doing on time? Do you think we've hit the 10-minute mark yet?

Shea Brown (BABL AI) (13:00.91)
I think we need to move on or we're going to be in trouble.

Upol Ehsan (13:03.731)
Yes, I agree. So that's a good sign for taking the next action, which is: what's something that you can do? If you haven't already, please try to help us work with the algorithm, for what it's worth: hit that like and subscribe button. You have no idea how much it will annoy the wrong people if this series actually gains traction, so just a little click will help us a lot. Awesome. So the next topic is actually the second most common

inbound message that I get: how do I break into this field? How do we get into RAI consulting? What are some of the necessary things? I think, Shea, you are uniquely positioned to speak to this, because you came from astrophysics. I actually have some advantage, in the sense that my training is directly, or at least tangentially, related to the field that I'm in right now. But you had to make a massive pivot. So maybe you're very well situated

to share some insights for people who would consider themselves outsiders, but are trying to enter the field. So what do you have to say?

Shea Brown (BABL AI) (14:10.686)
So I'll try to be short. I think the main thing you want to start doing is gathering knowledge and awareness. If you're trying to break into a new field, the very first thing you need to do is figure out what you don't know, what the landscape is like. And so when I did this, moving from astrophysics, I started reading as much as possible. There were two simultaneous things.

What are people currently talking about, in terms of what's relevant for the field, what the current hot topic is? And then there's this backlog of research and literature that I had to go back and build up. It's the same thing you do when you start a PhD: you enter a brand new field and you have to backfill a lot of that knowledge. And so that's the very first step

for anybody, I think: really figure out what there is to know, and then start filling in what you don't know. And you have to be interacting with people in the field as much as possible, and with what the current issues are, because if you think about the lowest-hanging fruit you can bring a client, it's a broad awareness of what the field is worried about.

And in order to do that, you have to be engaging with what's current right now, and then fill things in later. So I want to ask you: we both have PhDs. In my case, it's in astrophysics, a different field; in your case, it's very narrowly focused on this. Do people need to have a PhD to get into this? Because I get that question a lot.

Upol Ehsan (15:55.435)
Mm-hmm.

Upol Ehsan (15:59.783)
No, absolutely not. The PhD helps, but it's definitely not a necessary condition. In fact, some of the best people I know in this field don't have PhDs. But what they do have is real-world experience, and nothing beats experience, right? The PhD helps, but nothing beats experience. So in my view, you don't need a PhD. The PhD can definitely help, because what the PhD fundamentally helps you do, I think, is become an expert

at converting the unknown unknowns into known unknowns. That's what we become experts at, at some point. And that's why we can have multiple careers, even though our PhD could be in some other thing. Who knows, 20 years from now many of the things I'm working on today might not be relevant, so I might have to make two or three different pivots. So no, I don't think so. There's one more question that people ask:

what should the makeup be? Should I come from a STEM background? From the humanities? From the social sciences? What do you think? I mean, both of us have STEM backgrounds, so we have some bias on this. But I'm curious to hear: is it true? Do people need to be from a STEM background to come to RAI?

Shea Brown (BABL AI) (17:13.866)
Absolutely not. In some respects, there are already a lot of people coming from a STEM background who are currently dominating the AI landscape. So what's really needed are people coming from the outside, with a multidisciplinary perspective. So no, you don't. There is some base level that you do need to know: you have to understand the basics

of the technology, but the real basics. You're not going to be working with TensorFlow or PyTorch to construct some neural network and train it; you don't really need to get to that level. You just need to be able to appreciate it from the outside, appreciate the moving parts, and then come at it from that unique perspective. Because what we really need in Responsible AI, and we've talked about this here, is that there are many blind spots

Upol Ehsan (18:09.097)
Mm-hmm.

Shea Brown (BABL AI) (18:09.646)
that a sort of insular community can have. And so you have to be able to have that outside perspective to fill in those blind spots. And I want to get your take on this too, because I think you balance really well between the technological knowledge and the sociotechnical bit.

And so what's your take on this? You've worked with a number of companies and you've worked with different people on your team. Like how do you see them contributing if they're not technical?

Upol Ehsan (18:50.075)
I think it's a good question, and I agree with you in principle, first of all. Even though the term is Responsible AI, I think the operative part is the responsibility part, not the AI part. There are enough good, smart, talented people working on the AI side; we need more smart, talented people working on the responsibility side. The game is sociotechnical, so the solution cannot be only technical, right? The solution needs to be sociotechnical. So for that to happen,

I think we need more expertise than just the core technology that is involved. And some people get nervous: can I enter RAI without knowing hardcore AI? And I always say: there are enough people who know hardcore AI. You need to be conversant, somewhat fluent in the language, and understand the basics, but you don't need to know how to whip up a transformer, right, and start doing your coding in a terminal. No, your job is actually different. And...

There is this aspect I'm starting to see, one thing that I often try to be mindful of, because my background and my training actually come partly from science and technology studies, and in those disciplines, being critical is a contribution; it's a more critical outlook. What I had to check myself on was how to balance the criticality and make it constructive, such that we're moving towards a solution, right,

or some intervention. That does not mean we are being techno-solutionists; techno-solutionism is not what I'm advocating for. What I'm advocating for is that we cannot simply stop at highlighting the problems. We cannot simply stop at criticizing what others are doing. We need to come up with some interventions, some attempts to address some of the problems. Because there are enough critiques: there are

history of technology people, there are philosophy of technology people, there are STS people; their entire job is to find problems in technology, which is great. But in RAI, I feel we have to err towards action, towards mindful action, and not just stop at "oh, we identified harms." This is one of the challenges with a lot of RAI frameworks today, right? If you look at many RAI frameworks, they stop at harm identification. They're very good at identifying harm.

Upol Ehsan (21:16.895)
What they're really poor at is what to do with it. In fact, this is one of the reasons we wrote a recent paper called Seamful Explainable AI. The core contribution of that paper is a design process, and in this design process, what we tried very hard to do is help practitioners not just come up with envisioned harms, but locate where in the AI's life cycle they sit.

But more importantly, and this came out of work done with real practitioners, they said: we need to know what to do with it; we can't just stop here. So that part is very important in my view, in terms of harmonizing these two traditions. If you're coming from a sociology background, please give us all the richness that you bring to the field, but at the same time, be mindful of the community you are coming to, and

try to be respectful of the traditions in the new community, so that they are more harmonized. There's a term I learned recently that kind of captures it: cohesive diversity. There is diversity of opinions, but it's cohesive. I think it was the Berkman Klein Center at Harvard that has this kind of ethos: we're going to be diverse, but not diverse to the point of being fractious and fighting with each other; we're going to be cohesive about it.

I think responsible AI needs cohesive diversity.

Shea Brown (BABL AI) (22:48.358)
Yeah, exactly. And one piece of advice I'd maybe leave people with is: don't get sucked into imposter syndrome. Don't try to conform with your cohort; really know your strengths, if you're not a STEM person and you're coming from the social sciences or philosophy or some other direction.

That's OK, and we need that. Don't feel like you're an imposter amongst a bunch of data scientists, because that's what we need in order to get the kind of diversity that is going to push the field forward. So, I think we're starting to run out of time for this topic. This is difficult. This is very difficult.

Upol Ehsan (23:35.239)
Yes, yes. Don't you have newfound respect for all these podcasters who do it in such a nice, regimented way? This whole thing has made me respect them, because you put two researchers in a room and we can riff on this for hours. But I think that's good. So let's move on to the next topic. The next one is actually about something a little bit more timely, which is the EU AI

Shea Brown (BABL AI) (23:45.823)
Yeah.

Upol Ehsan (24:03.851)
Act: the draft has been released, and I'm very curious, because I'm actually not an expert in this; you are closer to the action than I am. What do you think is new? I couldn't find a lot that was new, but it seems like there is something new. Or, in other words, how do you think it discourages irresponsible AI? I'm curious to hear your take on that.

Shea Brown (BABL AI) (24:25.202)
Yeah, so you're right, there's not much new. The Act has been around for a while; there have been different versions, the European Parliament had theirs, the European Commission had theirs, and there's all sorts of information out there. There was an agreement, and then there was this draft, the finalized, consolidated version that's being created. It got leaked online, and everybody got to see it.

Now, there's nothing really new in that draft. But what is, I think, exciting and relevant for the conversation we just had is what this means: there is going to be a field for people to go into. Because the requirements of the EU AI Act are such that there has to be Responsible AI happening in every company that will deploy or use a high-risk

AI system. And the way in which I think it discourages irresponsible AI is that normally, irresponsible AI happens in the dark. It happens in the places where there's no visibility: on some Jira board somewhere where nobody's paying attention, or in some room where some decision gets made and nobody's

reflecting or checking in on it. And what the Act is going to do is shine some light on that darkness. It's going to make people think: okay, someone's going to be looking at this, and thinking about the decisions that I'm making as a developer or a product manager or anybody associated with these sorts of systems. And so I think that's the main thing: it sheds some light on that darkness.

And it also means, to the point we just made, that there need to be people who have this sort of multidisciplinary perspective, who understand the interface and the sociotechnical nature, not just the technical nature. And so I think that's going to be pretty important. For you, one thing just occurred to me: a lot of what's in the Act is going to be really relevant for people in your field,

Shea Brown (BABL AI) (26:43.414)
because this sociotechnical nature is exactly what you study: this interface between the system and the people. And so I think explainable AI and human-computer interaction folks who really dig deep into that are needed, because there are a lot of open questions that we have not answered, and the Act is not answering them for you. But what they're doing is forcing people to document those decisions about:

Upol Ehsan (26:48.983)
Mm-hmm.

Upol Ehsan (27:02.804)
Mm-hmm.

Shea Brown (BABL AI) (27:11.95)
how do I talk to somebody about what the system is doing? So, what are your thoughts on this? This is a huge field that's opening up, and a lot of people are going to need your help.

Upol Ehsan (27:15.511)
Mm-hmm.

Upol Ehsan (27:24.427)
That's a good question. One thing I'm often afraid of is that Responsible AI becomes regulatory AI. I think RAI should be RAI with the R standing for responsible, not regulatory. Because there is this sense, right, that you could do it too much: put in too many lawyers and too many cybersecurity people and too many guardrail people, to the point where

the balance between responsibility and innovation gets stifled. And that is actually where you get a lot of the irresponsible, knee-jerk reaction, right? Because the whole move-fast-and-break-things crowd goes: oh, we feel stifled, we need space, so we're just going to do our stuff and you guys can deal with what happens in the wake of it. That is how I think we get divided and bifurcated into two subcommunities

that are always infighting. What I hope comes through this Act is that we understand that the playground for Responsible AI has a very concrete aspect of innovation in it, where we get to innovate, where we get to be brave, where we get to take bold steps, while making sure people don't get hurt, harms are mitigated, et cetera, et cetera.

That's my initial gut reaction. Whenever I see these acts, there is a part of me that goes: oh my God, is it going too far on one side? Because you could see that, and I think a lot of people in our community are concerned about that, rightfully so. Coming back to the topic of human-computer interaction and explainable AI, therein lies another issue: because I live in these areas, I can say there is not a lot of cross-pollination going on right now. And we needed to start it yesterday.

We can't leave it until it's too late. So I think there is a need for early work. This is why I've been working with, or at least thinking of working with, NIST, and I know you are involved in the NIST working groups as well. Having this cross-pollination early on in the development of the space allows us to mature together. Because it could be that RAI matures and there's a really nice field to be in; I agree with you a hundred percent.

Upol Ehsan (29:51.743)
But then what the field turns into might not be very welcoming to people like me, right? Or might not be very appealing. Because if it's all filled with regulations and 60 pages of legal stuff, I frankly don't want to deal with it; I'd rather have a legal expert deal with it. The moment they give me these laws, I'm like, no, no, I am not that type of person. I will read it, I'll understand it, but I don't have any expertise in it.

Shea Brown (BABL AI) (29:58.018)
Yeah.

Upol Ehsan (30:20.927)
Tell me what in this law you need my input on, and I can give it to you. So I think both sides need to be conversant in each other's worlds, but also we need to start collaborating and cross-pollinating very early on. But yeah, sorry, go ahead.

Shea Brown (BABL AI) (30:36.306)
Yeah, I like that idea. No, I think it's good; I totally agree. I like the nitty-gritty details, and I also like to try to connect those details to real harms or some real actionable things. So I'll be sure to come to you when I've got things I want your input on, because I'll find that little bridge in the regulation, and I'll know

that in order to actually make sure this is going to happen, the way someone understands or interacts with the user interface, for instance, has to be a particular way. And people like you are the ones who are going to tell us how that actually works.

Upol Ehsan (31:23.339)
That's a good point, and it made me think of something that relates to the other two points we talked about, right? For instance, you are doing auditing with clients; that's very on-the-ground, the devil is in the details and getting those right. When I consult, and this is just to give viewers an idea of the diversity of this field, I think auditing happens on one end of the spectrum, and the infrastructure building of how to be responsible happens on a different end of the spectrum,

and the two talk to each other, right? But for me, when I'm going to clients and helping them, they are really at the formative stages. They're not at the stage where they need an auditor to come in to help them figure out whether they've done it right. They're at the stage of: okay, what do I even do? I have no idea. How do I take some of the processes that I have and translate them into an auditable thing? Because a lot of the time, many of the processes are not even auditable.

And that brings me to a very interesting point you mentioned: we all have our little places in the mosaic of Responsible AI, right? And I feel like if we all do our roles well, the mosaic will be a very beautiful picture. Even though they're little fragments, you zoom out and you get a really harmonized picture out of it. I don't know what those things are called, you know, where you make a big picture out of little tiny pictures, but that's the metaphor I'm going for.

Shea Brown (BABL AI) (32:49.175)
Yep.

Upol Ehsan (32:52.899)
Or you could do it very disharmoniously, and then you zoom out and it's a mush, and you just see the fragments.

Shea Brown (BABL AI) (33:01.126)
Yeah, yeah. I think what's exciting is there's so much work to do. And unfortunately, I think we're running out of time to talk about it. So yeah, I'm going to let you close, if you've got any closing thoughts.

Upol Ehsan (33:09.159)
Yes, good. So, thanks for watching.

Upol Ehsan (33:15.479)
Yeah. Nothing other than: if viewers have listened to this and they like it, please send us questions. Thank you for sending all the questions you already have; it helped us make something relevant to you. We can talk about any topic for any given length, and we are trying to constrain ourselves to a limit. But thank you so much for listening, and take care of yourselves. And if you can, take care of others.

Shea Brown (BABL AI) (33:45.358)
Thank you.

