The AI Fundamentalists

Responsible AI: Does it help or hurt innovation? With Anthony Habayeb

May 06, 2024 | Season 1, Episode 18
Dr. Andrew Clark & Sid Mangalik

Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI.

Show notes

Prologue: Why responsible AI? Why now? (00:00:00)

  • Deviating from our normal topics about modeling best practices
  • Context about where regulation plays a role in industries besides big tech
  • Can we learn from other industries about the role of "responsibility" in products?

Special guest, Anthony Habayeb (00:02:59)

  • Introductions and start of the discussion
  • Of all the companies you could build around AI, why governance?

Is responsible AI the right phrase? (00:11:20)

  • Should we even call good modeling and business practices "responsible AI"?
  • Is having responsible AI a "want to have" or a "need to have"?

Importance of AI regulation and responsibility (00:14:49)

  • People in the AI and regulation worlds have started pushing back on Responsible AI.
  • Do regulations impede freedom?
  • Discussing the big picture of responsibility and governance: Explainability, repeatability, records, and audit

What about bias and fairness? (00:22:40)

  • You can have fair models that operate with bias
  • Bias in practice identifies inequities that models have learned
  • Fairness means correcting for societal biases to level the playing field so that safer business and modeling practices can prevail.

Responsible deployment and business management (00:35:10)

  • Discussion about what organizations get right about responsible AI
  • And what organizations can get completely wrong if they aren't careful.

Embracing responsible AI practices (00:41:15)

  • Getting your teams, companies, and individuals involved in the movement towards building AI responsibly

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
Speaker 1:

Before we begin this episode about responsible AI, on behalf of myself, Andrew, and Sid, we wanted to thank you for your continued support for the AI Fundamentalists, whether you've been a long-time listener or are fairly new to our content. We hope that you've been able to use our content as a sanity check for safe and performant modeling practices as you navigate timelines and hype around implementing AI. While that is our guiding mission for you, there are certain topics that you also ask us to talk about that abstract from that mission. A few weeks ago, we dedicated an episode to exploring consciousness. This episode is going to explore another popular topic request, which is responsible AI, and the time and context are right to explore this term as the hype receives a backlash. There are many questions about the term responsible AI, in air quotes, such that it turns the phrase into: who is responsible for AI? Or is it simply responsible tech? Why just AI? In the United States, we've had the additional context of watching two stories of regulation and responsibility over the past few months. One is the story of Boeing, a company in a regulated industry, going through an investigation and follow-up before one of their products, if you will, can fly again. Contrast that with the recent congressional hearings with Meta, a company in the largely unregulated tech industry, in front of a thousand-plus parents whose children committed suicide after interactions on social media, the question being what algorithmic influence played a part in their emotional state leading up to these suicides. It's sobering when you think about it. We use this contrast as a backdrop for who is responsible in responsible AI. Between our perspectives and those of our guest, our hope is to provide you with questions and arguments to use in your innovative work. We also wouldn't be us if we didn't tie it back to the practices from previous episodes. Thanks again for your listenership and support.

Speaker 1:

Now on with the show. The AI Fundamentalists, a podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses. Here are your hosts, Andrew Clark and Sid Mangalik. Hello everyone, welcome to today's special episode of the AI Fundamentalists. This is the second episode we've done like this, where we take a step away from modeling best practices and really get into some broader topics, today's topic being responsible AI. To explore this topic further, beyond the models, we have a special guest with us: Anthony Habayeb. He is the founding CEO of Monitaur, an AI governance and ML assurance company. He's also a visionary and industry commentator on a mission to unlock the potential of intelligent systems to improve people's lives, the mission that guided him in starting the company. Anthony, welcome.

Speaker 2:

Good to be here.

Speaker 1:

And I should have said earlier, you and Andrew co-founded the company on this mission: that there is a safe and responsible way of developing models that are fair and that mitigate bias and risk. Talk to us a little about that.

Speaker 2:

Five years ago, in 2019, we started this journey because I'm a bit of an optimist. I think AI is going to make our lives better. In particular, healthcare was pretty inspiring to me. I think there's such an undersupply of healthcare available, in contrast to the number of people that are coming up all over the world in varying degrees of developed markets, and I was like, man, AI's ability to, like, triage patients and treat diseases and cure, you know, novel healthcare challenges.

Speaker 2:

Sign me up for that. What could I do? And I went down a journey that led to this, I felt, pretty damn boring thing: let's build governance and auditability of this stuff, because if we do that, then maybe we'll actually use this technology in places where it really matters. And you know, here we are today, and I think we're living at a time when the thing that Andrew and I thought about five years ago is actually a conversation that a lot of people are having, as you said in your intro. So I feel pretty lucky to have a chance to do something meaningful at this moment in history.

Speaker 1:

Yeah, Andrew, was that the same feeling for you coming into it?

Speaker 3:

Yeah, definitely. For me, coming from an audit background, there are a lot of things that auditors had in place, controls across lots of parts of the economy and anything financially related, that are just kind of taken for granted: segregation of duties, validations, independent review, and things like that. And then moving over into the machine learning and modeling world and realizing that that's not a commonplace thing, it definitely was kind of like, okay, well, now that these models are making more consequential decisions, we definitely have to start putting those same guardrails in place. So that's really how I slid into this as well with Anthony and Sid.

Speaker 1:

I think before we started this episode we were talking about a few of the key experts and schools of thought in AI modeling, in particular LLMs. Can you talk a little bit more about what that school of thought was? Because you come from an NLP background, so some of this next-word prediction technology behind LLMs is not new to you.

Speaker 4:

Yeah, I mean, this space is a rather old space. Anything we're talking about now is basically a product of a successful hype train, successful marketing, and a successful feeling that this is a moment, building up this moment around what NLP can do and what it can't do. And I think people are seeing a lot of potential and a lot of excitement to do things with this technology as it exists now, in the easiest ways to use it.

Speaker 2:

You know, Sid, it's funny because I've been thinking about that a lot lately. I think every week or every month some new AI governance or responsible AI company pops up, to your comment about the hype train. And I think it's also sort of an interesting time now in particular, where I get a little bit worried about who's really going to do the work versus just saying they're doing the work, right? Which wasn't a thing that I worried about in the beginning, because the folks that we were talking to were those earlier people seeing, like, hey, if I just want to put good AI in the market, the responsible thing to do is build good systems, right? And I think that's interesting, just hearing you talk about the hype train, and sometimes it's challenging to figure out who's serious about solving this problem versus, you know, just talking about it.

Speaker 3:

I fully agree. I think that's a big problem, and it's really the genesis for this podcast and for talking about responsible AI. It's gone the same way as the buzzwords around, you know, ChatGPT and LLMs, where everybody got excited about how they'd have to do less and not think. It's gone that same way of, oh, now that people are concerned, let me put something in place, let me just buy some widget that's going to magically make all my stuff fair and get the regulators and the public off my back, and I don't have to actually do anything, I don't have to actually change my processes or actually think about how to responsibly deploy things. And then, of course, you have the minority that has always existed. You know, we always talk about NASA and the Apollo program here, but that kind of systems engineering, control theory, that deep engineering, best practices stuff, there has always been a small contingent of people doing things the hard way, because they were hard and because they wanted to do them correctly. That group has stayed consistent the whole time. They've been doing things responsibly since before AI.

Speaker 3:

AI, whatever you call AI or not, has been around for a long, long time. This is not new; it's just that the hype train is sliding into this responsible AI, governance, or whatever we want to call it now. And that's concerning as well, because people want to just buy something and claim that they're doing things responsibly and not actually do things responsibly. And doing AI responsibly is a very hard thing to do, and no matter what, there's no magic tooling for it.

Speaker 1:

And something interesting you said in there, Andrew, was that they want to buy something and get it going right away, just as they would buy, I don't know, in the marketing world you would buy an email platform. Sure, hook up your DNS, start sending emails, put in a few template items, you're done. And I think the thing that I've been noticing is how eager people are to tell me the rubber-stamp checkboxes that I need to do, just like they would buy any other software, to say that, you know, we understand the risks, like you would accept an end-user license agreement or a privacy agreement and go on your merry way. And I think we're finding that this is a newer technology that not a lot of people have worked with or integrated into a lot of their systems. It's not that easy.

Speaker 2:

I think responsible is just the wrong word, really. Yeah, listen, explainability was a word I had issues with for a long time when a market was developing around explainability, because I thought that it simplified an idea into a technical thing, when it's more important for all these stakeholders to understand what's going on and who did what work and to have some alignment, right? And so in the beginning I was talking about, what's the difference between explainability and understanding? And let's not say that explainability is a solution when it's literally a library that you can use technically; that's not the answer. And I think, similarly, responsible AI is dangerous in some ways because the phrase itself suggests people are irresponsible, right? Like, if you think about it, wait, are we being irresponsible now, and we need to be responsible? Are people not using good practices? Are people trying to do bad things? So you know, it's one of the things I'm pretty proud of at our company, that we spent a lot of energy around this every day and still do, really trying not to lean on the phrase.

Speaker 2:

I know that the website says AI governance, right, and there's a marketing reason to do that. It makes sense, right? There's a reason to build some SEO and things that can catch responsible AI. But the substance of the things that you do to really, like, cause good modeling systems: it's multiple stakeholders with specific jobs and responsibilities, it's certain ways that you build modeling systems for robustness, it's ways that you train your models and evaluate your data, right? And I do think that there's benefit in having something that catches all of that easily, calling it something, and, all right, fine, we'll call it responsible AI. But I think it's almost a misnomer, or an unfair characterization of AI in general, to lead with saying we need responsible AI at the expense of assuming that the ways we've been doing things are, you know, always irresponsible.

Speaker 3:

I think that's great, and I loved how you brought in the explainability part as well. Your modeling system being able to explain what happened to an end user requires a lot: understanding what comes in at the top and what comes out at the bottom. Sure, you can put a library in the middle, but we're obsessing too much over that. A good analogy, I think, for this responsible AI, which is also just a random buzzword that makes no sense, I agree, is in running. So I'm a runner, I like to talk about running, so apologies for this, but I think a lot of people will be able to understand this example. Nike and other running companies keep coming out with these super shoes. They're going to make you so much faster, and, oh my, they're going to just revolutionize everything. And it's like, does it add that much? Maybe someone hit a new world record; oh, by the way, that was two seconds faster than it was previously. But they make it such hype, like this is such a game changer, when the shoes aren't the biggest thing blocking you. Now the random person puts on those shoes and goes out and runs a 35-minute 5K. I've got news for you: you'd have run 35 minutes and one second the other way, versus the world record, which was a sub-13-minute 5K way before super shoes. So the difference there is the hard yards: doing the correct processes, doing things the hard way, actually learning how to do something properly. It's the same way here; we're essentially just coaching how to build models properly. The checklists are the same.

Speaker 3:

How do you get faster at running? It's very proven. There's no magic bullet on how to get faster at running; there are very proven processes. Same with this. Everybody's trying to overly complicate it, as if by just now saying responsible AI and tweaking this, it's somehow magically different. It's like, but you guys haven't gotten off the couch yet. You've never run a mile at all, and you're just using ChatGPT. You just bought it, the free version, or not even bought it, you're using the free version online, and you want to now pop something on top to make it responsible, but you haven't done any of the baseline steps to get there.

Speaker 4:

Yeah. So I mean, we're kind of in a space now where responsible has become such a catch-all term for this idea of what is essentially good modeling, right? And good is extremely nebulous. It's not descriptive; it doesn't describe the problem, it doesn't describe the steps. And now we're actually seeing almost a pushback against it, right, because it's such an umbrella term. So we have people saying things of the flavor: responsible AI is going to hurt innovation, it's going to slow down AI. I guess I have my stance on this, but I'm interested in hearing Anthony's and Andrew's stance: do regulations slow down modeling, or do they make modeling better?

Speaker 2:

Yeah, Sid, I think it's a great question, and I would say it's a really great segue from maybe hating on the marketing phrase of responsible, but also acknowledging that it exists for a reason, and I respect that reason, which is to bring people together around an understanding that there is a better way to build systems, right? And I think regulations in that context can be a positive change agent. I've even observed, you know, in my time at Monitaur, that you and Andrew have seen a lot of things. You have more experience than the average person being tasked with: build some model and deploy it, and, by the way, you're a data scientist, so learn to be a software architect and a full-stack engineer, right, and build pipelines and monitor your models by yourself. That's not what an average person can do, nor should they have to, right, because objectivity and distributed responsibilities for good AI are really important. And so I'm encouraged by seeing, maybe, certain regulation more than hearing people talk about responsible AI, because when you look at some of the standards or constructs from NIST, or you look at some of the things that the EU AI Act says, or even, you know, within insurance in the United States, you look at what insurance regulators are asking for, which is: hey, companies, demonstrate to us that you have built a full life-cycle risk management approach to your AI strategies, right? And within that, they're asking for some very, I think, reasonable things that I know you sort of agree with, right? Demonstrate that you're aware that testing is important and how you test. Demonstrate that you do know where your data comes from and how you validate it. Demonstrate that you did a best-fit evaluation of whether you have the right model. Show us that you're watching the model to make sure it's performing as expected.

Speaker 2:

No good modeler would argue with those things. However, good modelers might be given those responsibilities without the enablement or the resources or the support to get it done. So what regulation can actually do, if done correctly, is cause a company to say, well, we have to do these things now, and the modeler can raise their hand and say, yeah, we do, and I'm not an expert at this, I do need some help, we do need some assistance to solve these problems. And the good news is that critical things that cause performant and accurate and robust systems get implemented, and maybe regulation was the driver of that, and, yes, it's responsible to do those things. So, yeah, I don't think regulation is an inhibitor. I actually, right now, see it as an opportunity for just better software. And I think the challenge for me, the thing that I am watching and interested in, you know, just as a pro-innovation person:

Speaker 2:

I do think that AI regulation is going backwards to also regulate things that are not AI, right? The way that regulations phrase AI is not what you and Andrew would consider AI, right? They're sort of asking you to apply certain new governance and testing on things that make decisions about our lives. I'm also pro that.

Speaker 2:

Like, yeah, it might not be an AI system that's using my financial data and lifestyle data to decide whether or not I should get a loan, right, but that's an important decision, right? And so why wouldn't I, as a consumer, like to know that a company that makes that decision has some good quality control in place? I can argue that decision could be as impactful in my life as, does the seatbelt in my car work? Somebody tested that thing. So I think, just as a consumer, yeah, you know what, it's going to be extra work, it's going to add cost into the ecosystem for sure. But if we're moving to a world where most of our life is driven by software and automation, well then, let's have that future world have some sort of seatbelts around it. I mean, that seems okay to me. That's a cost worth incurring for the benefit that we can realize.

Speaker 3:

Fully agree. Of course there's always a risk of bad regulations, but I'm not really seeing that yet, though of course there's always that risk. Generally, those are fantastic points. It seems like regulation mostly prevents the increasing race to the bottom of just cutting costs, cutting costs, cutting costs, just having people hack stuff together. Yeah, the move-fast-and-break-things approach will kind of go away a little bit.

Speaker 3:

But for the quality of systems, it helps raise the bar and makes sure there isn't that race to the bottom. People have time to do it. Like Anthony said, it allows people to raise their hand, hey, I need more resources here, and actually get them. And the people that are top performing, it's not going to affect them that much. It's the people that aren't performing; it just raises that bar a little bit, while providing them the resources to do so. So I don't really see a problem with it.

Speaker 3:

It's the same, and I don't want to keep using the Apollo examples, but you had a very big constraint: you can't keep killing astronauts, so fix that. But that made them able to do things that you wouldn't think possible and helped them optimize. Same with seatbelts in your car, or, hey, Tesla, for your self-driving cars, don't kill me, that kind of stuff. So it makes you have to have better controls in place, which makes the system better, but it means everybody has to play on that same field, versus one company saying, I want to be the best, but I'm going to get priced out of the market because everybody else is trying to cut corners. So it helps to pull that bar up.

Speaker 2:

Andrew, of course, at Monitaur, regulations don't hurt us, right? They're good for Monitaur, so we might seem biased in that opinion in some ways. But it's a real, genuine thing: you and I have both been in the room with people who want to do certain things and they don't have the time and resources to do it. There are real, well-intentioned, responsible builders unable to explain to their organization why they need certain things. And for you and me, we felt frustrated by their inability to get the support they need, right? And I never want something good to have to be caused by regulation.

Speaker 2:

I'm generally a very free-market thinker, and you and I have talked about that a bunch, right? But this is one of those examples where the right thing is struggling to help people understand it's needed, and this could enable them to get what they need. And I think that's actually a really important thing for a lot of executives to realize: your team doesn't know how to tell you why they need this thing that they need. So, good news, somebody else is helping you realize that it needs to be done.

Speaker 4:

Yeah, I think that's totally the mindset, right? When we have these regulations in place, it's the first time that engineers and data scientists can go to leadership and say, hey, we need to do this the right way the first time. And furthermore, does this hurt innovation? I would say flat-out no, because if we want this to exist in the world, if we want people to use these technologies, we can't have it that every three months someone gets hurt really badly, or there's some major financial collapse, or someone catches some negative downfall from using these technologies, and then it's like, oh well, I guess no one should use this, we don't trust this. Almost in a brand loyalty and trust sense, having these seatbelts around our AI systems allows us to keep building more and more and build these integrations with the confidence that they're actually going to help us.

Speaker 1:

Yeah, and I think this is all great. I love the regulation conversation. I feel like we could stay here forever, but I know that we have a few topics where we're going to come back to it. I do want to talk, because Anthony was starting to introduce it earlier, about some technical facets of responsibility. In debating the term, there are some things that model builders, risk owners, and business owners really just want to know about their products so they're not going to get caught in a situation where the product can't exist anymore; let's go for the most severe case. So, in the facets of responsibility, Andrew, what do you think are the most important in the modeling world?

Speaker 3:

Yeah, I mean, this is the simple stuff that nobody likes to do, which is really fully understanding your data and making sure it's balanced; like a lot of the previous episodes, we've talked about what these areas are. You know, make sure your model is repeatable and you can re-perform it, and, we don't want to get into the explainability thing we talked about earlier, but just make sure you understand what your model is doing. I'm less concerned about the explainability, quote unquote, than about understanding what's going on. You know, having an audit trail, having records of it, making sure, simply, that someone else has independently validated your system, that kind of thing is definitely huge.
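To make the repeatability and audit-trail points concrete, here is a minimal Python sketch of the idea: fix a seed, hash the training data, record the fitted coefficients and metrics, and check that a second run re-performs identically. The function names and record format are hypothetical illustrations, not any specific tool's API.

```python
# Minimal, hypothetical sketch of the repeatability and audit-trail ideas above.
# Names and structure are illustrative only, not any particular product's API.
import hashlib
import json

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


def train_and_record(X, y, seed=42):
    """Train deterministically and return the model plus an audit record."""
    model = LogisticRegression(random_state=seed, max_iter=1000).fit(X, y)
    record = {
        "data_sha256": hashlib.sha256(np.ascontiguousarray(X).tobytes()).hexdigest(),
        "seed": seed,
        "coefficients": model.coef_.round(6).tolist(),
        "train_accuracy": round(accuracy_score(y, model.predict(X)), 6),
    }
    return model, record


# Re-performance check: two runs on the same data and seed should produce
# identical records; if they don't, the system isn't re-performable as claimed.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

_, run_1 = train_and_record(X, y)
_, run_2 = train_and_record(X, y)
assert run_1 == run_2, "model run is not re-performable from recorded inputs"
print(json.dumps(run_1, indent=2))
```

Keeping a record like this alongside each training run is one simple way an independent reviewer could later re-perform the work, which is the point Andrew is making here.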

Speaker 1:

What other things should we explore? Another common topic that comes up when we're talking about responsibility is bias, sometimes in the phrasing of who is responsible for detecting and mitigating bias.

Speaker 2:

You know, I think Andrew calls out some of these more technical considerations of a modeling system, right? Robustness, accuracy, data quality, anti-fragility; there are certain more technical things here, right? But actually, let's give a nod to responsible AI, maybe for just a second, because one of the things that it generally does as a conversation is it includes the ethics and the roles and responsibilities of distributed stakeholders in the discussion, right? And that's similar, Andrew, to what we were riffing on with explainability earlier. Explainability was just living in a technologist conversation, and then it became commonplace and everyone was saying our models need to be explainable. In some ways, what that means became less important than everyone recognizing models needed to be explainable. And I think it's similar as you think about responsible AI and what exactly that means.

Speaker 2:

Yes, there are many definitions of that, but one of the nice things about having the phrase available in the market is it's also causing people to say this isn't just a technology problem. This is something like: hey, there need to be model builders, there need to be objective reviewers, there need to be cross-functional people who can contribute towards projects. You should be thinking about the ethics and the impacts of your applications and systems. Do you have the right talent? So I think that non-technical approachability of responsible AI, just like explainability at some point, became a common phrase and brought more people into the topic. You know, I think those are some of the key elements, or facets, as you said, Susan, of responsible AI: the conversation includes who does what jobs in those responsibilities, and you can't really cause a good system without also understanding what people and processes need to exist.

Speaker 3:

I love that, Anthony, and I fully agree with that.

Speaker 3:

I think that's a great observation, that this conversation helps, as well as giving people funding and the ability to look outside their siloed org.

Speaker 3:

Because, as part of that, the next conversation is that everybody's always going bias, bias, bias. But if you built your model responsibly from the get-go and really understood what use case you were solving and how you are representing the data, just making sure your data is representative of the use case, and assuming you're not trying to actively do something nefarious, that's just part and parcel of doing the job the right way anyway. So if you have the resources to do the objective reviews and the responsible practices, getting multiple stakeholders involved, then don't focus on the bias part. Focus on getting the right people in the room and the right budget. If you're actively trying to do the right thing, the bias part will fall out, because you've been doing good data management processes, you're doing review processes and things. Don't focus on trying to make your model not biased. Focus on the broader thing, and you will have a non-biased model.

Speaker 2:

Although you can have a biased, responsible model, you know that, right? Models are trying to discriminate and make decisions. So the idea is that you can apply good practices and the model still has a bias; the story that you might tell is that my model does have a bias in the following way, and this is why it's appropriate, or this is how we built the model, these were the objectives.

Speaker 2:

So even that conversation is interesting to me as we think about responsible AI, and maybe we don't always say it's only responsible if your model is unbiased. Well, what do you mean by biased? Are we talking technical bias? Are we talking societal bias? Part of responsible AI should be that you connect the definition of bias, and the way you manage it, to your corporate policies and your corporate strategies and approaches, right? So why don't we talk about how you can have a responsible AI system and the model is still biased? That's totally a possible outcome.

Speaker 1:

No, and all of that makes me think that the terms bias and fairness get really screwed up.

Speaker 3:

They really do, specifically because bias itself means so many different things. There's statistical bias, which means nothing wrong whatsoever; it's just describing a statistical phenomenon. So yeah, people are focusing too much on bias as a thing. Bias, as Anthony just mentioned, can be a lot of different things, and not all bias is bad. Even all discrimination is not bad, because risk discrimination is what actuarial work and insurance are based on. So it's very much: are we being equitable and fair? That's more the question, versus everybody talking bias, bias, bias; that's not a descriptive term whatsoever.

Speaker 4:

And even fairness is rather abstract and vague, right? You know, we think of fairness as basically correcting for biases. If there's some intrinsic societal bias that's in these models, that's in the world, the fairness piece is correcting that, and that's, you know, another level of responsibility. Do we want to lump that in? Is responsible AI making a repeatable model, or is it also the fairness of a model? Right, this is the umbrella problem. We see responsibility not as something fixed, like you accomplished it, but rather as a continuum, which can go all the way to: let's fix the world around us and maybe change these underlying biases.
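To make the bias-versus-fairness distinction concrete, here is a small, hypothetical Python sketch of one narrow statistical reading of these terms: measuring a gap in positive-decision rates between two groups. The group labels, threshold, and simulated inequity are assumptions for illustration only; as the discussion notes, a fairness review would then decide what tolerance is acceptable and whether and how to correct it.

```python
# Minimal, hypothetical sketch: one narrow, statistical reading of "bias" as a
# gap in positive-decision rates between two groups. Group labels, threshold,
# and the simulated inequity are illustrative assumptions, not from the episode.
import numpy as np


def demographic_parity_gap(decisions, group):
    """Absolute difference in positive-decision rates between groups 0 and 1."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())


rng = np.random.default_rng(1)
scores = rng.uniform(size=1_000)            # hypothetical model scores
group = rng.integers(0, 2, size=1_000)      # hypothetical protected attribute
scores[group == 1] *= 0.9                   # simulate a learned inequity

decisions = (scores > 0.5).astype(int)      # the model's yes/no decisions
gap = demographic_parity_gap(decisions, group)
print(f"demographic parity gap: {gap:.3f}")
# A fairness review would decide what tolerance is acceptable here and whether,
# and how, to correct for it; the system can still be governed responsibly either way.
```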

Speaker 2:

Yeah, you know, Sid and Andrew, I'll play the I'm-not-the-techie-like-you-guys card here, right? I think one of the things that Monitaur does, which has really just been a function of what the market needs, is recognize that people aren't ready to solve bias or solve fairness or solve robustness and sensitivity until they've first even defined for their own company what responsible AI and governance are, right? And so we could talk all we want about explainer libraries and monitoring large language models for hallucinations and all of these things that get a lot of talk in the market, but the reality is that our entire ecosystem of people who should be buying solutions to build or establish AI governance, they need to start at step one: what do fairness and bias even mean to our company? What people do we need? What is our stance on this? I hate hearing companies saying you can't use large language models, period.

Speaker 2:

To me, that's just a result of not taking the time to carve out, well, how do we use AI, right, and building some rules on that. And granted, I know it's hard, and it's part of the reason we're having success; we literally help some companies take that first step, here's what you need to do, right? But I think that's just a really interesting thing to ground ourselves in.

Speaker 2:

We could spend all of our time talking about these more technical concepts and some of these downstream ways to manage things that are, you know, the antithesis of responsible AI, or bias, right? But before you even think about that stuff, you as a company, the people building AI systems, they just have to sit down and put down on paper: what's good, right? What do we feel is good? And then, with that, if I handed something like that to you and Andrew, you're like, okay, these are the guardrails I have, these are the parameters I get to operate within, I've got this, right? But, man, how hard it must be to be a technologist trying to build AI in a company that hasn't given you any structure. There's any number of opportunities, and you have zero guidelines around the parameters that you get to operate within. I don't know, that can't be really fulfilling. It's got to be pretty hard.

Speaker 1:

Yeah, and Anthony, these are all great points, because they're building up to probably the hottest questions: what do organizations get right or wrong about responsible AI? Let's start with what they get right.

Speaker 4:

I mean, what they get right, very simply, is: oh well, we want to do it. They understand there's an incentive to do this. You know, do we have a better-looking AI organization? Do we have some social responsibility? Do we want to put good back into the world? I think what organizations get right is usually understanding that responsible AI is a net benefit for them and for their customers. Understanding what responsible AI is? Maybe not so much.

Speaker 2:

Shout out to Monitaur customers getting it right. No, I mean really, Sid, I think the first step is knowing you need to do it, right? I think that's important, but then, as we like to say at Monitaur, go beyond good intentions, right? The next element is enabling budget and people to define what governance at our company means, and then giving them the time and the space to operationalize those things. So if we go the next step, let's assume at some point the best people are all saying, I need to do it, and, you know, Susan, you're in marketing, more people are saying, I need to do it, right? But I think the next key step of doing it right is not jumping ahead to solve things until you've first established a clear governance program with your policies and your controls and your responsibilities, and everyone understands what good looks like, right?

Speaker 2:

I think some of the companies that are struggling, that I interact with sometimes, are, first, those that don't have a framework to think about how to define these things.

Speaker 2:

So they have committees with lots of people with opinions and no experience in how to go from a meeting to actually putting things on paper that then become operationalized.

Speaker 2:

And that's really frustrating, to be in a situation where a lot of well-intentioned people don't have any rubric or framework to try to cause good governance, right? And then I think the other cohort of folks that are struggling a bit is those where a single department gets to drive what governance will be. So if just the IT team is doing this, they're going to take more of an MLOps infrastructure approach: I'm logging, I'm monitoring, everything's fine, right, leave me alone. But they haven't thought about what good or bad looks like, they haven't thought about ethical or societal implications. In the same way, if you let just legal and risk and compliance do this, they don't know how to communicate what is required of technology, so they're going to build policies at a level of abstraction that's not actually executable for the technology team, right? So, you know, I think the best ones have the disparate teams come together in a thoughtful way, and they focus on building alignment first before they start operationalizing their strategies.

Speaker 1:

Yeah. Sid or Andrew, anything to add to that? Because I have one more thing, one more direction I want to take that in before we get to what could just absolutely go wrong.

Speaker 3:

Yeah, just one way that I've heard someone describe competence before that I think is kind of useful: there are really four stages. There's unconscious incompetence, there's conscious incompetence, and then there's conscious competence and unconscious competence, right? So it's kind of a continuum. There are a lot of companies at the unconscious incompetence stage, and you want to be moving to the, drat, I don't know what I don't know, let me hire a coach stage. Okay, please hire Monitaur to be basically your coach to help you get there and move to the next step, building that motion like Anthony talked about in legal or whatever, in your compliance organization. So it's about moving along that continuum, and as long as you're moving in the right direction, that's progress. You know, it's like, if you're not treading water or swimming forward, you're sinking, that kind of thing.

Speaker 1:

So as long as companies just start moving down that path, it's helpful. Okay, when you guys were talking, you made me think of a situation. There's also a world out there, in the greater efforts of responsible AI, where you've got companies that have been working with some type of modeling, machine learning, AI, statistical models, whole modeling systems, versus those who saw Gen AI last year and said, oh, we've got to get on board, and this is their first foray. Is there any advice that any of you have, in that paradigm, on where their focus should be?

Speaker 3:

I mean, it's really the pyramid where people like to skip steps. It's the same, you know, back to the running analogy: oh, I want to run the Boston Marathon, so let me go out tomorrow, having never run before in my life, and start running the Boston Marathon and expect to have good results. No, you're going to get a bunch of shin splints and you're going to fall on your face, right? It's that same thing with companies that don't understand what data they have and don't have good processes in place.

Speaker 3:

They don't even have a solid modeling team that has done the fundamentals and knows how to do that kind of thing, and then they're jumping to the top. You still want to ask: how are we going to responsibly deploy this, even if it starts with the policies? What data am I going to be using? What's my use case? How do I manage it? Where is this acceptable? All those sorts of things, even if you don't want to have whole modeling teams and stuff, just thinking about how you want to approach this. It's the same as when cloud computing became a thing, when Dropbox became a thing: how do companies want to manage this stuff? So start there, versus just, you know, jumping to the top and thinking they're going to use this to transform everything without understanding the basics.

Speaker 4:

Yeah, it really boils down to taking big problems and chunking them down into smaller, reasonable problems you can realistically tackle, right? So if you're coming at this for the first time, make sure that the LLM you're using is going to solve the problem you need to solve. Make sure you're okay with the downfalls that come with it, make sure you're okay with hallucinations, and if all that looks good, great. If you're thinking about doing responsible AI, don't think of it like, how do we save the world? Think of it as, you know, reasonable subtasks. Do something as easy as reading the EU AI Act. Watch a YouTube video if you need to, or read some nice articles about it, and see what these auditors out there are saying, and find the pieces, and define what's reasonable and what you can do, and start building towards that from the beginning rather than coming in later and trying to patch it.

Speaker 2:

You know what? I don't even think it's about AI, Susan. I don't think it's about OpenAI. I don't think it's about large language models. I don't think it's about linear regression. I think it's always about: what's the thing that you're building, what is the impact of the thing that you're building, and what is the opportunity or the risk of that application? The 101 here is: are you building a cute little chatbot to help internal discovery of information? Have fun, right? Are you going to build a system that's going to diagnose a patient with having diabetes or not? Be a little bit more serious, right? I think sometimes we're letting the product type or the technology type dictate what needs to be done, and I get that, but I think it's actually more valuable for a lot of companies to not get caught up in what kind of software it is. Oh, it's AI, it's not AI; hold on a second.

Speaker 2:

Everything in business can start more simply with: what's the project we're about to do? Is this material, right? Is this a higher-risk thing? Is this going to require a lot of money? Where is this going to be used? What impact is it going to have?

Speaker 2:

Pretty basic things that a non-technologist can ask and understand. And if your signals go off with, ooh, this could really hurt our company, oh, this could really have an impact on our end users, this could really impact our financials, well then, it's your job to just figure out how to mitigate those risks, right? This isn't even an AI problem. This is a good corporate management or good business management thing. What are you doing? How much opportunity, how much risk is there? And in the context of that, what do I need to do to maximize the likelihood of achieving the success and minimize the risk? I don't know, it sounds so simple, but sometimes that's the answer, right? It could be so simple to just think in those terms.

Speaker 1:

It is true; approaching it as a whole business problem is absolutely key. Still, because we leave no thought behind on the AI Fundamentalists: what about other organizations? What can they get completely wrong?

Speaker 3:

Pretty much the inverse of what Anthony said. You need to look at this problem very holistically. It is a holistic business decision, and it's very hard to do, but very simple; everybody makes it super complex. Companies also fall into the trap, no matter if it's AI or anything else: if it sounds too good to be true, it is, and that's a common problem we're having now. And then, specifically, we have the issues with all the protesting about regulations and things. I think Anthony did a fantastic job outlining what regulation really does. Of course, there are outliers where there are bad regulations, but everything we're seeing is tracking really well; at least in our opinion, the AI regulations look really well thought out.

Speaker 3:

They level the playing field and give everybody the resources to build stuff properly. So I mean, if you're protesting so much about how horrible these regulations are going to be and how they're going to destroy your business and stuff, to me it really boils down to two things. You're either doing something you don't want to get caught doing, which, okay, that's why you need regulations, right? Or, you know, your stuff might not be that great, you aren't building with quality, and you're concerned that if there's a higher bar placed, you won't have the strategic moat you think you have; OpenAI, potentially, that kind of thing, right? So if you're protesting regulations so hard when they're actually leveling the playing field, those are the two things that pop into my head. Companies shouldn't be super concerned about this, because they're now getting the resources to build better models and are going to have industry practices that level everybody up.

Speaker 4:

Yeah, I think we're thankfully at a time where the regulations that are out there and that are coming down the pipeline are actually really well aligned with what the experts want. What they're describing, you know, recording, explainability, explaining bias if not mitigating bias, these are all pretty no-brainer, obvious things that we all need to do. And, you know, pushback against this kind of stinks of: I dislike regulation more than I care about good modeling.

Speaker 3:

And that's what's crazy, because I would a hundred percent wager there are several companies out there that are doing things really solidly. They look at the EU AI Act and go, yeah, that's not much different; okay, I need to now map to the EU AI Act, but if I'm really modeling and building really good systems, same with the NAIC stuff, I'm just kind of shrugging it off like, okay, cool, now I've got to report it to somebody. There shouldn't be this huge delta of, oh my goodness, I have to do this thing. If there is, it's like: so you didn't have any policies in place, you had no idea what you were doing, no change management, you weren't monitoring it in production, you have no idea what's happening to your system. Okay, then maybe you're also leaving a lot of money on the table, because, as we've talked about on this podcast, you build better models, it makes your company more money, you save more money, they're more accurate. These are literally best practices that will make your models better.

Speaker 2:

And, Andrew, they're out there. I was just at a conference. The chief risk officer of a huge global company, one of the biggest, very clearly said, I'm not scared of the EU AI Act, we're ready. So let's not hate on the market. There are a lot of people doing a lot right, and we're not saying we're hating on them, right? There are a lot of people that are doing good things, and I really do think there's such a common excitement about the potential and opportunity. There are a lot of companies doing great things.

Speaker 2:

And my hope, Susan, when you're saying what are companies getting wrong: I don't think the companies that are doing good things are telling enough people that they're doing it, right? And I think that's a huge opportunity. We need more people that are doing the right things to raise their hand and say, hey, here's what I'm doing, it's solvable, here are the benefits we got beyond compliance, because I don't think that this is a proprietary thing, right? And so I respect companies' concerns to keep privileged, confidential information close to their vest, totally. But we all benefit if everybody realizes that there are better ways to build models and deploy modeling systems, and I am excited to hear people like this chief risk officer stand up in big rooms and say, I'm not scared of this thing. Because, just like Andrew said, that person then went on to say: we log all our systems, we inventory every one of our models, we have objective reviewers of all our high-risk systems, we monitor our models for performance, we test them for bias and fairness. And that's the crux of a lot of this, right? So, I mean, I feel so fortunate and lucky to have the opportunity to be building a company that does something that matters, and it's pretty cool to know that we could fill a podcast just talking about responsible AI, and that there are a lot of people who are going to listen to this, because there are a lot of other people that care too. And, I don't know, that's a responsibility, to play on the word, to be an advocate for responsible AI, and it's pretty cool to be able to think about these things becoming common conversations in every company right now. And I think everyone will figure it out eventually. It's not easy, but we'll get there, and we are.

Speaker 2:

Oh, that was the other thought I meant to give. This is not like we get to an end state, by the way; I think that's maybe a mistake that I'd love to call out really quick. You don't do something and it's done, right? This is continuous, this is going to evolve, this is going to be omnipresent, like a culture of quality control and good SDLC, software development life cycle, controls. They don't go away; they're always there. In the same spirit, we're always going to have good model development life cycle controls, right? So that was one last thing that I was thinking about that I wanted to get in here: this doesn't end when you buy something or you build something. This keeps going, and that's good, that's a good thing, and it's going to be commonplace. We're going to have quality control for software, just like we do for every other huge thing that impacts our lives, and that's an okay thing.

Speaker 4:

Totally, and, you know, that's what we're doing, right? It's best practices. And best practices aren't static, right? They change every year, they change every month. What's best is basically what people out there are doing, what we can agree on is right and probably the best way to do it. And so building this knowledge base and continually evolving with the space is the way to go, and that's how we end up in a situation where we're not getting bogged down by regulations, we're not scared of them, we can embrace them fully and openly, because we are doing best practices, and, at least for the time being, that's what we're seeing is being asked of us.

Speaker 1:

Well, before we close this out, I want to go back to our guest, anthony. What did you think of your first AI fundamentalist podcast?

Speaker 2:

I have imposter syndrome, like, I'm here with some people that I respect, and it was a great podcast. Actually, when I'm out there in the market meeting with customers, there are a lot of data scientists from our customers and from prospects that are like, hey, you guys talk about some things that people don't normally talk about; you're really talking about the substance of our job and the work that we do and how we think about systems. So thanks for letting me interlope on a pretty substantive podcast to talk about something that is a little bit more generalized. Thanks for having me, guys.

Speaker 1:

Yeah, anytime, we enjoyed it. So, for all the listeners: please enjoy this episode, and if you have any comments or questions, visit our homepage. The forum is open. Until next time.
