The Product Experience

Navigating AI bias: Insights and strategies - John Haggerty (Founder, The PM Insider)

Mind the Product

Discover the intricacies of bias in generative AI and its impact on product management with John Haggerty, Founder of The PM Insider. In this week's episode, John unveils the layered complexities of AI innovation, navigating the ethical realm of the AI revolution reshaping businesses.

Featured Links: Follow John on LinkedIn | The PM Insider | IBM AI Fairness 360 | Google's 'What If' AI Tool | Aequitas Bias and Fairness Toolkit | Deon Ethics Checklist | Kaggle | Perplexity AI | Midjourney

Our Hosts
Lily Smith
enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She’s currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She’s worked on a diverse range of products – leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath.

Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury’s. He participated in Silicon Valley Product Group’s Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of the communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He’s the author of What Do We Do Now? A Product Manager’s Guide to Strategy in the Time of COVID-19. A recovering music journalist and editor, Randy also launched Amazon’s music stores in the US & UK.

Speaker 1:

Hey Lily, I saw a great cartoon in the New Yorker the other week. One guy was leaning over his screen and said to the other, "Can you go through all the old pitch decks and replace the word crypto with AI?"

Speaker 2:

Oh yeah, there's a lot of hype in this space, but there are also some actual, living, breathing, thoughtful people who work with AI too, and we booked one of them this week for our chat.

Speaker 1:

Uh, are you sure you didn't get catfished by ChatGPT? Who's out there that actually fits that intro? And how did we find them?

Speaker 2:

That is a good story, actually. John Haggerty has the skills. He was the VP of product management at Highwayai before leaving to set up his own shop at The PM Insider, where he's both a coach and working on some interesting ideas of his own. He was recommended to us by a former guest, Tammy Rees.

Speaker 1:

You know that's one of the best ways for us to source guests. So here's a request for you. Yes, you, we know you're listening. If you have any suggestions of great people for us to chat to, please get in touch.

Speaker 2:

You can do it right now, or after you listen to or watch our chat with John. He gets into some great practical tips for how to deal with the issues related to bias when developing with AI and LLMs. The Product Experience Podcast is brought to you by Mind the Product, part of the Pendo family. Every week we talk to inspiring product people from around the globe.

Speaker 1:

Visit mindtheproduct.com to catch up on past episodes and discover free resources to help you with your product practice. Learn about Mind the Product's conferences and their great training opportunities.

Speaker 2:

Create a free account to get product inspiration delivered weekly to your inbox. Mind the Product supports over 200 ProductTank meetups from New York to Barcelona. There's probably one near you.

Speaker 1:

John, thank you so much for joining us on the podcast today. How are you doing?

Speaker 3:

I'm doing great. Thank you, Randy, thank you Lily. I'm super excited to be here today.

Speaker 1:

Let's just do a quick introduction if you don't mind. How did you first get into the wonderful world of product, and what are you doing these days?

Speaker 3:

Oh, how did I get into product? Well, I'm from that generation of product managers where product found me versus me going to look for product. Way, way back in the day, I was working at a broker-dealer in Minneapolis, and this was post-9/11; the USA PATRIOT Act had come out. We had to know your customer, and we had all these corporate customers where no documentation existed at all. So I'm like, hey, I think I have an idea of how to solve that, and quick, threw together some macros and used some fun Microsoft tools to throw together a solution that actually created documentation for all of them, so we could prove that we knew who the customer was and fulfill the legal requirement. And that was my first product experience.

Speaker 3:

And from there, I helped them actually create a mark-to-market tool, stringing together macros in Excel connected to my Bloomberg terminal to get real-time market information on our positions for risk analysis. If I knew then what I know now about product, you know, it'd be in a completely different space. But from there, just a number of other roles: I ended up over at Wells Fargo, helping them create their first in-house contract management solution for Sarbanes-Oxley, and just all these weird roles that kind of developed. And then finally, about, oh, I don't know, 12 years ago, I actually ended up with the product manager title, and, you know, the rest is kind of history.

Speaker 1:

And so, what are you doing these days?

Speaker 3:

Well, these days, I mean, I'm in an interesting spot right now, kind of in between things, but really digging into the AI world. Most recently I was at Highwayai, a company at the intersection of PropTech, MarTech and FinTech. We did some awesome stuff there, both traditionally and within the AI space, in how we were bringing data together. Right now, besides looking for that next opportunity with a company, really trying to focus on early-stage AI companies, post-seed to Series B, I've also got a couple of things in my back pocket that I'm playing around with, most of them in the AI space. Specifically, I'm digging into how we look at the outcomes and the evolution of non-deterministic, generative AI products from a bias and heuristics point of view, what's involved in them, and whether we can understand their evolution and track it longitudinally. There's some big words in there.

Speaker 1:

I love it. Yes, just to make sure everyone's on the same page, this is something that a friend of mine was talking about. In his business, he's banned the use of the word AI unless someone can explain what they actually mean by it. Do you mean an LLM? Do you mean generative AI? Do you mean RPA? Which thing are you actually talking about? So let's start with definitions, if you don't mind. What is the difference? What are the things that people are actually using these days? What's getting all the hype?

Speaker 3:

Oh my God, I love that question, and I love asking people what AI is and having them explain it to me. So for me, I have this image in my mind, and the picture I like to paint is: AI is the universe of making machines smarter. Within that universe, you have different galaxies. One of those galaxies is machine learning. Machine learning is where we use algorithms and self-determination from that bit of code to learn and understand what is happening with the data. It becomes smarter and can do more the more data it has and the more experiences it has; it learns, it evolves.

Speaker 3:

Now within that galaxy, we have different solar systems, which could include deep learning, where you have RNNs and CNNs. You have generative AI with LLMs. They can be in different solar systems, they can be planets, however you want to look at it, but it becomes this hierarchy of different levels of detail. For me, where I like to focus is really in that generative AI space, leveraging the LLM technologies that are part of machine learning, part of deep learning within those neural networks.

Speaker 2:

So everyone is talking about or it seems like everyone is talking about generative AI these days and experimenting with it. What are the kind of issues with using it? What are the risks with using anything that's generative AI?

Speaker 3:

Oh, there are a lot of risks. First and foremost, with generative AI you can't always trust what you get for a result, which is one of the most important things. I hope we all have had experiences with a hallucination within AI. I know I have, both professionally and personally. It's trust but verify: you can get great results, but you need to check them.

Speaker 3:

The one thing with generative AI is it wants to answer your question. It wants to do what you have asked, whether it's right or wrong. It wants to give you what it thinks you want. That is hugely important.

Speaker 3:

The other side of it is that it is based on the information that it's fed: whatever written history, whatever visual history, whatever we have that's given to it. That's the basis of its understanding and its knowledge of how it's going to answer back. So anything that's been fed into it, historical data, historical information, historical trends, historical art, whatever it is, that's what's coming back out. One thing we've learned from history is that when we have societal biases, when we have the heuristics within human nature, those things get perpetuated in what we produce. And if that's what's being fed into these models, it still exists there, and we have to understand that. We have to recognize it, we have to look for it. The only way we can mitigate it is through augmenting with underrepresented data that counteracts it, proportionately adjusting the models, putting in human moderation to test for it and make the adjustments, or through how we engineer our prompts and what we put in to get those answers back.
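To make one of those mitigations concrete, here is a minimal sketch, in plain pandas with made-up column names, of proportionally re-weighting training examples so an under-represented group isn't drowned out by the majority group; it's an illustration of the idea, not a drop-in fix.

import pandas as pd

# Hypothetical training data: 'group' is a protected attribute,
# 'label' is the outcome a downstream model would learn to predict.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B"],   # group B is under-represented
    "label": [1, 0, 1, 0, 1],
})

# Give each row a weight inversely proportional to its group's share,
# so each group contributes equally to training overall.
group_counts = df["group"].value_counts()
df["sample_weight"] = df["group"].map(len(df) / (len(group_counts) * group_counts))

print(df)
# Rows from group B get a higher weight; most scikit-learn-style estimators
# accept these via fit(X, y, sample_weight=df["sample_weight"]).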

Speaker 1:

I hosted a panel the other week, and my buddy Tom Dolan quoted someone else, I'm forgetting the name, where they compared generative AI, or actually ML in general, to having infinite interns or infinite 10-year-olds. And it's great, they can do a lot, but you know it's never going to be perfect. So, from what you're saying, how do we even begin to test this? How do we know and track the biases that are inherent in the models that we're using, in the applications that we're building?

Speaker 3:

Well, first and foremost, you need to define what you are looking for. What is fair? What is the bias that could come out of this, whether it's a generative AI product or a traditional SaaS product? One of the things I absolutely love is doing that pre-mortem: looking at what could be the worst-case scenario here. What would fail? How do we know if we're going down that path? How can we determine it?

Speaker 3:

So you need to bring in not just the business metrics, but the health metrics, the behavioral metrics, and be able to create tests around that. Those can be done through fairness testing. They can be done using Google's What-If Tool to test different inputs and look at the outputs you get. There are some things out there; I'm going to butcher this name, I think it's Aequitas, an open-source toolkit that's available. You can do some bias testing with that. I know Deon has got a really good checklist for data scientists to use to bring in fairness and bias considerations when they're laying down that fundamental framework. Personally, I like using IBM's AI Fairness 360. It has a good tool set for basic metrics around fairness testing of your algorithm.
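As a rough illustration of the kind of metric John mentions, here is a minimal sketch using IBM's open-source AI Fairness 360; the column names and the choice of privileged and unprivileged groups are hypothetical, and in practice you would run this against your real training data or model outputs.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical outcomes: 'label' 1 = favourable decision, 'group' is a protected
# attribute encoded as 1 (privileged) / 0 (unprivileged).
df = pd.DataFrame({
    "group": [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# Disparate impact is the ratio of favourable-outcome rates (ideal is close to 1.0);
# statistical parity difference is the gap between those rates (ideal is close to 0.0).
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())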

Speaker 3:

Those are some things to build out there, but really it comes down to that question before you begin: what are the risks here? What are the biases? How are you going to be able to test for and understand when you are starting to drift and move in a direction that isn't good, that isn't the way you want to go? You want to put in ways to catch that it was moving toward sharing racist, sexist, antisemitic tweets before that started. But even going down that path, is there a way to look at that?

Speaker 3:

One of the things I'm a big proponent of, one of the things I'm trying to work on, is how to test longitudinally for this: being able to test over time, to look at and see when you are starting to move from a green score to yellow before you hit red, and determine, OK, do we need to pull this back now? Do we need to bring in more data for those underrepresented groups? Do we need to synthetically create data here? Do we need to adjust hyperparameters? Do we need to make changes to the model? What is it that's going to bring us back to staying in green when we drift out of that space?
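Here is a minimal sketch of what that green/yellow/red tracking could look like, assuming you already log a fairness score for each release or evaluation run (say, the disparate impact ratio from the kind of check above); the thresholds are purely illustrative.

# Hypothetical fairness scores logged over time for one model, where ~1.0 is ideal.
# The thresholds below are made up for illustration, not recommendations.
GREEN_MIN, YELLOW_MIN = 0.9, 0.8

def status(score: float) -> str:
    """Map a fairness score to a traffic-light status."""
    if score >= GREEN_MIN:
        return "green"
    if score >= YELLOW_MIN:
        return "yellow"  # investigate: drift has started
    return "red"         # intervene: rebalance data, adjust hyperparameters, retrain

history = [0.97, 0.95, 0.91, 0.88, 0.84, 0.79]  # one score per week

for week, score in enumerate(history, start=1):
    print(f"week {week}: score={score:.2f} -> {status(score)}")

# In practice you would alert as soon as consecutive runs leave green,
# rather than waiting until the score is already red.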

Speaker 2:

It's interesting, isn't it? Because it does feel like, ultimately, in order to be safe and make sure that everything is done well, you kind of need a human at the end of it.

Speaker 2:

But then, actually, as I was saying it, I was like, well, hang on a minute, because I'm pretty sure I probably have some internal biases as well, so even that's not going to be the perfect solution.

Speaker 3:

No, actually, we all do. But having human-in-the-middle, human-in-the-loop safeguards is extremely important with all AI: having those key decision points, whether it's moderation or whatever it is, to be able to detect and then make a decision. But, Lily, to your point there, we all have biases, we all have assumptions we've built in. One example is the US healthcare algorithm that was used to determine the level of care that's needed. As a proxy, they used healthcare spend in the US as the metric for the level of care that was needed. In theory, yeah, great.

Speaker 3:

However, within Black communities, the spend was significantly lower due to socioeconomic factors and due to systemic barriers that exist for those communities in getting healthcare. So when they were judged to be in less urgent need of care, or of in-depth care, from a racial point of view, it wasn't because they didn't need the care. It was because the societal biases that exist around their access to healthcare were being perpetuated by the system. So it comes back again to: what's the problem? What could be the problem here? What could be the worst thing that could happen? Having that pre-mortem and checking your biases. One of the things I like to do with generative AI is feed in an idea and ask it to point out my biases to me, to use it as a mirror to reflect back at me. Yes, I know they're not unbiased. Yes, I know they're prone to error, but it does serve as a starting point to get me past the blank page of looking at myself.

Speaker 1:

It's interesting. One of the things that we do as product people is, you know, we look at the lifecycle of our product, and at a certain point, in many cases, especially when you're doing things like B2B or some B2C tools, you'll get to a certain level of maturity and you know how the algorithm works. You know what to expect when something comes in. You've tweaked it enough that you know you're getting the right answers out. But with this, with generative AI, it's a black box inside, right? You don't know exactly what's going to happen. Is there a way to say we have a mature AI product, we don't have to keep doing it? Or what's the lifecycle? I know we're early on, but where do you see it?

Speaker 3:

Well, I love this question. The definition of done doesn't really exist with these products. It's like the definition of when you are done parenting, because they're evolutionary in nature. It will just continue on. There are different roles and responsibilities that you're going to need to take on. What you do need to look at, though, is when you can pull back on the updates and the fine-tuning of the model. That's going to need to exist forever, as long as the product lives, but when do you pull back on it?

Speaker 3:

As long as you have your continuous monitoring in place, you can look at and understand the integration points and know that those are working. You're looking at the inputs and the outputs. You can get some sense of the tone and an understanding of those outputs being at least correctly formatted and fitting with where you want to go from a system point of view. Yes, there are things you can pull back there. But dealing with drift of the data, drift of the actual usage of the end user impacting the system over time, like I said earlier, it's just going to continue on. You need to be able to keep your eye on it. You need to understand when you will need to step in to help the system get back into a good state, a green state, and having defined ahead of time what those points are is the most valuable step I think you can take.

Speaker 2:

With the ChatGPT side of things, anyway, for me, one of the things that's a little bit scary is that it feels so believable. It's like your friend who is just so confident they have all the answers, and you're like, OK, you talk with such authority, I kind of believe you. I remember back in the day when I worked in a recommendation-engine type of business, we had to explicitly say these recommendations are built upon your personal taste, the things that you've been watching, and kind of manage the customers' expectations about what they were seeing and why they were seeing it. And I feel like that's a bit harder with something like ChatGPT, because it feels so believable and real.

Speaker 3:

Absolutely, and, as I said earlier, it wants to answer your question. It wants to help. That's its purpose, that's why it exists. So what you ask it is extremely important in understanding the results you're going to get. I was at a meetup, a conference, last week and was talking with a former Google engineer who had just recently left, and we were talking about prompt engineering. He talked about how asking the system to summarize is a big no-no, that word itself: it will create information in order to summarize. Instead, asking it to extract from the text, from the information that's provided, is a much better way to get the assets, the understanding, the information pulled out of the text, a site, whatever data you provided to the system. But that right there is just understanding the nuances of the language we are using. The other important thing that I would say is understanding the use case and knowing what you're looking for, what you're wanting, what your goal is going in, from a prompt engineering point of view. Are you leveraging a zero-shot or a few-shot prompt style? Are you using chain-of-thought prompting? That's my preferred method, because I want to systematically lay out how I want it to work, versus just giving it the information and letting it go forward.

Speaker 3:

You know, there are a number of things that we could get into about giving the system time to think. What's really interesting with something like ChatGPT is, once it's used a word and started typing out that sentence, it's locked in. It's not going to go back and rewrite when it gets to the end and say, oh, actually, I think this is a better way to say it. No, it's locked in, and so that's the answer you're getting. That whole thing changes when you give the system time to think and lay out its thought process, and then go and give you the information; it's so much better for getting cleaner results that are more useful, in my mind, from where I come from. But it is that desire, and I'm personifying the systems here, they don't have feelings or anything like that, but its goal is to answer your question, to give you what it understands you are asking for, and it's going to do everything it can to do that.
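To illustrate the prompting style John describes, here is a minimal, hypothetical sketch using the OpenAI Python client; any chat-style LLM API would work, and the model name and prompt wording are placeholders. It asks the model to extract rather than summarize, and to lay out its reasoning before giving the final answer.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

document = "...pasted support ticket, article, or product spec..."

prompt = (
    "Extract the key facts, named products, and customer complaints from the text "
    "below. Do not add information that is not in the text.\n\n"
    "First, think step by step and list the evidence you found, quoting the text. "
    "Then give your final answer as a bulleted list under the heading 'Extracted'.\n\n"
    f"Text:\n{document}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,        # reduces, but does not eliminate, run-to-run variation
)

print(response.choices[0].message.content)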

Speaker 1:

This episode is brought to you by Pendo, the only all-in-one product experience platform.

Speaker 2:

Do you find yourself bouncing around multiple tools to uncover what's happening inside your product?

Speaker 1:

In one simple platform, Pendo makes it easy to both answer critical questions about how users engage with your product and take action.

Speaker 2:

First, Pendo is built around product analytics, enabling you to deeply understand user behavior so you can make strategic optimizations.

Speaker 1:

Next, Pendo lets you deploy in-app guides that lead users through the actions that matter most.

Speaker 2:

Then Pendo integrates user feedback so you can capture and analyze how people feel and what people want.

Speaker 1:

And a new thing in Pendo: session replays, a very cool way to experience your users' actual experiences.

Speaker 2:

There's a good reason over 10,000 companies use it today.

Speaker 1:

Visit pendo.io/podcast to create your free Pendo account today and try it yourself.

Speaker 2:

Want to take your product-led know-how a step further? Check out Pendo and Mind the Product's lineup of free certification courses, led by product experts and designed to help you grow and advance in your career.

Speaker 1:

Learn more today at pendo.io/podcast. It's an interesting one, John. So you were telling me about a menu you tried to create using this and the problems you had. But, you know, essentially the systems hallucinate. If they can't answer, they make things up. Why is it so hard to get a computer, or to get a generative AI, to say I don't know?

Speaker 3:

Because that's what it's intended to do. It is intended to answer your question. It will get you an answer. Hell or high water, it will get you an answer. That is the ultimate goal. So the ability to say 'I don't know'...

Speaker 1:

Is this like an OKR problem? The goal was to get an answer, not to get the right answer or an accurate answer.

Speaker 3:

I believe it is an issue with how the ultimate job was framed. The user's goal is to get an answer, and so that's what's been set as success: that we answered the question.

Speaker 2:

And you mentioned earlier a few different ways of testing the generative AI tools that you're working on, to make sure that they're not being biased and that they're giving accurate information and things like that. You breezed past a few different tools that I'd never heard of before. It would be great to just dig into maybe a couple of your favorites: how they work and, yeah, how you use them.

Speaker 3:

Yeah, sure. So, first and foremost, the one I absolutely love right now is IBM's AI Fairness 360, an open-source toolkit that provides metrics, explainers and algorithms to check for and mitigate biases. Great tool out there. Google's What-If Tool allows you to probe different ML models and visualize the impact of different inputs on the predictions you get.

Speaker 3:

I know with generative AI you can put in the same input and get different outputs, but this is different inputs, looking at how they will change things. It can help with surface-level bias detection. I believe it's called Aequitas. It's European-based. It's an open-source bias audit toolkit that helps evaluate different models against several bias and fairness metrics. Their GitHub is awesome, with lots of information on it. And then Deon is another one.

Speaker 3:

This is more informational: an ethics checklist for data scientists, for their workflow, prompting teams on what to think about and what to consider when going through that risk assessment, that bias risk assessment. My recommendation is to use that at the very beginning of the process, in that pre-mortem, to understand what's going on. Those are the four I would mention right off the top.

Speaker 3:

For me, the other thing that I would recommend is, if you're using generative AI for analyzing information, think about what biases could exist from a behavioral psychology point of view, look at what tests have been created for those specific biases that are used on humans today, and why not create a test for your GPT? With that test, if you're looking at whether this product is a good fit, or whether we want to go through with this and continue on, run ChatGPT through some sunk-cost fallacy assessment, see if it exists, and look at the answers you get back. Don't just ask it once. Maybe ask 10 or 15 questions, three or four different ways, and look at the overall answers and try to get an understanding of how that shows up in there. So those are things, just being proactive about it.
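Here is a rough sketch of that kind of repeated probing. It assumes a hypothetical ask_llm() wrapper around whatever chat model you use (a canned stub stands in for it here so the sketch runs on its own), and the framings and scoring are illustrative only.

import collections
from typing import Callable

def bias_probe(ask_llm: Callable[[str], str], framings: list[str], runs: int = 5) -> dict:
    """Ask the same decision question several ways, several times, and tally the answers."""
    votes = {framing: collections.Counter() for framing in framings}
    for framing in framings:
        for _ in range(runs):  # outputs vary run to run, so ask repeatedly
            answer = ask_llm(framing).lower()
            votes[framing]["continue" if "continue" in answer else "stop"] += 1
    return votes

# The same sunk-cost decision framed different ways.
framings = [
    "We've spent $2m over two years on this feature and adoption is flat. Should we keep investing?",
    "A feature has flat adoption. Ignoring any past spend, is further investment justified?",
    "Would you advise a brand-new team, with no history on this feature, to build it given flat demand?",
]

def stub(prompt: str) -> str:
    """Stand-in for a real model call; wire this to your LLM of choice in practice."""
    return "stop" if "ignoring" in prompt.lower() or "brand-new" in prompt.lower() else "continue"

for framing, counts in bias_probe(stub, framings).items():
    print(dict(counts), "<-", framing[:55])

# If framings that emphasise past spend get far more 'continue' answers than the
# neutral ones, the model is echoing the sunk-cost fallacy back at you.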

Speaker 2:

Nice. What about... so, normally, when we're talking about testing software, we're talking about unit tests and things like that. So is there going to be, like, the unit-test version of how we test our AI tools?

Speaker 3:

No, though I'm hoping. Actually, my primary side project right now is hopefully going down that route of creating a test suite like that. I don't see it existing today. There's no test-driven development, there's no behavior-driven development that can really work, because those are traditionally aligned with predictable, reliable outputs from defined scenarios going in. That doesn't exist with generative AI, because you can put in the same inputs and get different outputs.

Speaker 3:

What can be tested that way is those integration layers: making sure that, systematically, things are working the way they should, that you understand the touch points of the system and that they're working. What can work, and where we need human intervention, is looking at it from a prompt engineering point of view: putting in the inputs and being able to test the results to make sure that they match, that they align, that they work in the way that's expected. I believe as we do more of this testing, as we get further down the path, we are going to see an increased understanding around those outputs and their accuracy.
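A minimal sketch of the kind of check that can be automated today: rather than asserting an exact output, it asserts properties every output must satisfy (valid JSON, required keys, no empty fields) across repeated runs. The generate_summary() wrapper is hypothetical; a canned response stands in for a real model call so the sketch is self-contained.

import json

def generate_summary(ticket_text: str) -> str:
    """Hypothetical wrapper that asks your LLM for a JSON summary of a support ticket."""
    # Canned response so the sketch runs standalone; replace with a real model call.
    return '{"theme": "billing", "sentiment": "negative", "needs_follow_up": true}'

REQUIRED_KEYS = {"theme", "sentiment", "needs_follow_up"}

def check_output(raw: str) -> list[str]:
    """Return a list of problems with one model output (an empty list means it passed)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    problems = []
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if any(value in ("", None) for value in data.values()):
        problems.append("empty field in output")
    return problems

# Same input, several runs: wording may differ, but every run must satisfy the contract.
for run in range(5):
    issues = check_output(generate_summary("Customer was double-charged in March."))
    print(f"run {run}: {'ok' if not issues else issues}")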

Speaker 3:

The lack of, or the ability to remove, the hallucination, and maybe even getting to the point where they'll say, no, sorry, I don't have an answer for you, that's a really good question, no one has an answer to that yet. The other thing I find interesting, going to the flip side, is actually using generative AI for your testing at the integration level: having it create the test-driven development tests for you, those unit tests, and really being able to look for those edge cases and those bugs better, because it's just exploratory. But from a true automated test point of view for the outcomes, it doesn't exist yet. What I'm hoping we can do is almost recreate the ability to test it the way a behavioral psychologist would test an individual, to understand where the system is getting its answers, and open up those black boxes to understand the decision-making processes involved inside those LLMs.

Speaker 1:

John, you bring up an interesting point about how we use this technology as part of the product development process itself. I've had friends who have used it and tested it in various ways. Someone I know just went to ChatGPT and asked it to write all the user stories for an epic, and it did a great job. But in that case, the epic was how do we integrate with a third-party API that's well documented, so there's a lot of information there. It wasn't perfect, but it was a great first start that saved them hours and hours of time. On the other side, asking it to generate user stories or user acceptance criteria or something like that for something that's more of an unknown, I'd be really worried about all those hallucinations and things like that. So what's the right way? How are you using it? What would you recommend as an approach? Where are the limits?

Speaker 3:

What I recommend right now, from the coaching I'm doing and helping other PMs to just embrace it, is to first start playing with it, just to get an understanding of it. Look at qualitative research and quantitative research and run experiments there. On the qualitative side, whether you're diving deep into your user feedback or your support tickets, pull out the themes. It can help you remove your own bias that may exist in those. It can also help you dig through tens of thousands of tickets and look for underlying patterns that you may not be able to detect. That's one good way to get exposed to it. The other side, I would say, is quantitative research. Look at your first couple of weeks of usage data and compare that with your three-month or six-month or nine-month retention, and look for those keys to success for that onboarding flow. Maybe it's a combination of three, four, five events that need to flow together. For me, that would take days or weeks of doing complex maths to get to that answer, where this can run through it in mere minutes and go much further and look much deeper for those underlying correlations, either positive or negative, that may exist there. Those are two I recommend to anyone, just to do the exploration. If you can't use your own data, like you're not allowed to, go out to Kaggle or some of the other data sources, pull down generic data and just play with it as a product manager and really just get exposed to it and start thinking that way. What I would say on the research or evaluation side of things, one I like, and I credit my former team for, is market and competitor research analysis. Great one there.
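For the quantitative side, here is a rough, hypothetical sketch of the kind of analysis John describes, using pandas with invented event names: correlating which first-two-week onboarding events a user completed with whether they were still retained at three months. In practice you might let a generative AI draft or refine this kind of script against your own schema, then sanity-check the output yourself.

import pandas as pd

# Hypothetical per-user data: which onboarding events each user completed in
# their first two weeks, plus a 3-month retention flag.
users = pd.DataFrame({
    "created_project":   [1, 1, 0, 1, 0, 1],
    "invited_teammate":  [1, 0, 0, 1, 0, 1],
    "connected_data":    [1, 1, 0, 0, 0, 1],
    "retained_3_months": [1, 1, 0, 1, 0, 1],
})

# Correlation of each onboarding event with 3-month retention.
correlations = (
    users.drop(columns="retained_3_months")
         .corrwith(users["retained_3_months"])
         .sort_values(ascending=False)
)
print(correlations)
# High positive values suggest candidate "keys to success" for the onboarding flow;
# they are correlations, not causes, so validate the promising ones with an experiment.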

Speaker 3:

Pulling together information, you can feed it in there. You can feed in the format you want. You can lay out how you want to do the analysis and then have it start pulling in that information. One big shortcoming is usually pricing, because it's not always public, so that's where you might need to augment the information and add additional insights. But I think it's a great way to start pulling in that publicly available information. You can look at comparing tone and what they're trying to express in their brand; it can give you an understanding. Especially if they have good alt text on their sites, you can get an understanding of where they're going with those images, to pull things together. The other one I like to do is, instead of writing the user stories, have it point out what's missing from the user stories. Where are my blind spots? What did I not include here? What are the questions that are going to be asked? What are the things that I need to extrapolate, to flesh out more? It's a great way to start as well.

Speaker 2:

Yeah, I love those use cases. I'm curious to know: you're obviously dabbling a lot with the AI tools that have been created. What's the one that you're really excited about right now, the one that you've been playing with a lot?

Speaker 3:

Oh, most recently I've been using Claude 3 Opus via Perplexity. That's kind of my go-to. What I like about it is I feel like I get the best results from Claude 3 Opus. I like Perplexity for the Pro model there, because it helps me with my prompt engineering by asking me follow-up questions after I put in my initial prompt. It's trying to verify its understanding of the question before beginning to answer it. So that's kind of my go-to on the text side. Image-wise, I'm a Midjourney fan. I just like playing with it. My oldest daughter is a photographer, so she has fed me all kinds of information to put into my prompts about camera settings and lighting settings and different things like that that help me create really good lifelike images. So it's kind of fun to be able to bring her knowledge into what I'm doing creatively from an image point of view.

Speaker 2:

Nice. And in the future... I feel like, at the moment, product management is the hot thing. It's the job that lots of people want, or at least lots of people want to be able to say they work in this area. But with the AI revolution, and it definitely feels like a revolution to me, what are the jobs that people are going to really want to do that are related to AI?

Speaker 3:

Well, one, I love the fact you called it a revolution. I just posted a poll on my LinkedIn this past week: is it a revolution, an evolution, a disruption or an invasion? Which of those four ways do you view what's going on with AI today?

Speaker 3:

Just that understanding of your own worldview paints how you're going to think about everything else around AI. So, one, I love that you called it that. To answer your question about what the most important things are: one, AI, I don't think it's going anywhere. It's not a fad that's passing. It's where we're going, either in a revolutionary or evolutionary aspect, from my point of view, fitting my worldview of where AI is going. From a product manager's perspective, there are two sides of the coin.

Speaker 3:

One is: how do I use AI to do my job better? I think about AI and where we're going today by going back to the 1950s and 1960s with the US space program. We had human calculators doing the calculations. Then we had machines that were able to do those calculations much quicker, much faster, more accurately, all those things. AI is going to evolve to the point where it's going to replace or augment the roles in which humans are performing like machines, where they're doing repetitive tasks that sit within this decision tree, decision forest.

Speaker 3:

But understanding context, understanding the empathy and emotional side of product management, is so important; the curiosity, the learning agility, that's not going to go away. That's the basis of product management. That's always going to be there, so I'm not worried about product management roles being diminished by this. What it is, though, is how to embrace it, how to use it, and how it can become another tool in your kit to move you further along.

Speaker 3:

The other side of that coin is product managing an AI product, which we talked about earlier. It gets much more complicated and much more extreme. When I talk about it, I say everything that Marty Cagan teaches about iterative development, everything that Teresa Torres teaches about continuous discovery, just goes to the nth degree with AI products, because you have to constantly iterate on what you're trying to produce, as well as keep up with the system evolving and learning, just machine learning, and look at those constant feedback loops throughout the entire process of developing it. Yeah, it's just extreme, extreme agile development there.

Speaker 2:

Amazing, John. It's been so great talking to you today. Thank you so much for joining us.

Speaker 3:

Thank you all for having me. This was wonderful.

Speaker 1:

Thanks, John.

Speaker 2:

The Product Experience hosts are me, Lily Smith, host by night and chief product officer by day.

Speaker 1:

And me Randy Silver also host by night, and I spend my days working with product and leadership teams, helping their teams to do amazing work.

Speaker 2:

Louron Pratt is our producer and Luke Smith is our editor.

Speaker 1:

And our theme music is from product community legend Arne Kittler's band Pow. Thanks to them for letting us use their track.