Money Isn't Everything

Antidotes to AI Hype

May 30, 2024 Mary Wisniewski / Eric Siegel Season 1 Episode 4

Eric Siegel, founder of Machine Learning Week conferences and former Columbia University professor, has focused his attention on cutting through the AI exuberance in his newer book, “The AI Playbook.” Eric talks to Mary about what corporations get wrong about one of the buzziest technologies and offers practical steps to take instead. Eric also talks to her about the tension between fraud and customer experience and what went into making his music video on predictive analytics (yes, that happened).

 

Show notes:

The AI Playbook: Mastering the Rare Art of Machine Learning Deployment

Generative AI Applications Summit


Mary (00:00.136)
Eric, welcome to Money Isn't Everything. Thanks so much for joining the show today. And we've got news coming from you, or I guess it's old news at this point, but you've come out with a book, The AI Playbook. And I'm super excited to have you talk about it because this show is for people dreaming up new ideas, experimenting with bank accounts in different ways, which of course will involve predictive analytics and has for a long time. But your book sort of takes away the starry eyes a bit and says, hey, there's a lot you need to do to actually make this happen. So let's start with: Eric, what made you want to do this book?

Eric Siegel (00:57.391)
Well, you know, the way you just phrased it, it's kind of like, we've got to get a little sober. There's a lot of fanaticism and elation, intoxication around AI hype. Sure. This is my second book. The first book's about how machine learning works, the value proposition, how it delivers value. This one's how to capitalize on it. So how to make it actually work for your organization, not just the core technology, not just the number crunching, but what gets learned from data and...

Data is a recording of history. It's a long list of prior events. It's experience from which to learn. And the most valuable actionable thing you can learn from data is to predict, because predictions directly inform the action taken for each individual case or individual consumer or corporate client, where to drill for oil, which satellite to investigate as potentially running out of battery. All large-scale operations consist of many, many decisions, and the holy grail for driving decisions more effectively,

for improving large-scale operations, is prediction, per-case prediction. That's what you get from data. That's what you learn to do: to predict. But as sound as that number crunching might be, that doesn't necessarily mean it'll be valuable to your company unless the company acts on it, unless you actually change the operations and thereby improve them by way of integrating and acting on all those individual per-case predictions. So for example, predict who's gonna be

a good debtor, then approve their application for a loan. Which transaction is most likely to be fraudulent? Block or audit that transaction. Who's going to buy or cancel? Market accordingly to that individual customer. So each prediction directly informs the action, which looks great on paper. But it turns out that the majority of these machine learning enterprise projects actually fail to get that last mile to actual liftoff, to deployment, to integrating what's been learned into operations. So that's what the new book, The AI Playbook, is about.

Mary (02:59.752)
Well, one of the things I loved about it was you peppered in a bunch of quotes throughout the book to set the stage. And in one of them you were quoting another professor. And the quote was, I wrote this down, so it said: asking, do we have an AI strategy, makes as much sense as asking, do we have an Excel strategy? Which I think really sets the tone, because I feel like there's a big disconnect there. And I'm wondering, you know, for an audience of

not only FinTech entrepreneurs but also the innovators at banks and credit unions, where do you start to sort through all the chaos that's been happening right now in terms of the hype? How do you think about this? What's a good starting point? If there is a good starting point.

Eric Siegel (03:42.671)
No, the antidote to hype is super straightforward. It's to focus on value, a concrete value proposition, the actionable deployment, the way you're going to actively improve an operation. Whether we're talking about predictive AI or predictive analytics, those types of enterprise predictive applications that I've mentioned in my little monologue up front here, or we're talking about generative AI, which is still built on the same core technology,

Mary (03:46.792)
Mm-hmm.

Eric Siegel (04:08.591)
machine learning, but obviously generating new content items like graphics or writing or video. Either way, how exactly are you going to use this? To what value-driven end? What's the actual operation that's going to be improved in a measurable way? The antidote to hype is to focus on the value, the concrete value proposition. So.

The hype kind of says, hey, this is a panacea. This thing is, it's self-evident that it's valuable. Of course, the word intelligence is a big problem, and AI is always going to be plagued by the word intelligence because that's the name of it. Right. But intelligence is a subjective word that doesn't define anything concrete, any particular value proposition, any particular kind of technology. It's very subjective.

Mary (04:53.544)
Mm-hmm.

Eric Siegel (05:02.479)
And if nothing else, it's a word that describes something very particular to humans, and something very ambitious and hard to try to get a machine to do. As seemingly human-like as generative AI might be, which is amazing and unprecedented, that doesn't necessarily mean it's as valuable as everyone seems to be presuming. So it's a little bit of a cold shower, right? But so let me put it this way. I mean, if you're kind of feeling fearful in some way, even if just as far as FOMO,

Mary (05:23.304)
Yeah.

Eric Siegel (05:31.631)
or job security, and often it's a combination, or you're feeling that intoxication of the hype, this thing is just incredible, what's coming, then you're almost deifying the technology, which intrinsically the word intelligence actually does, because once a computer actually becomes human-level capable in general, then it can conduct AI research and improve itself ad infinitum. So there's always that elephant in the room underlying talks, conversations about AI: are we really talking about something that's approaching general human-level capabilities? That is to say, well, they call it artificial general intelligence, but I like to call it artificial humans. I say, no, I don't think that we're actively headed there. I don't mean that it's theoretically impossible many centuries in the future. But we need to kind of come down to earth and be like, look, this is where we are now. There are improvements coming, but what's the actual concrete value proposition? So that kind of fear and/or elation, the antidote is to focus on a concrete value proposition.

Mary (06:39.624)
Okay. There's two things I want to get into from what you just said, but let's start with fraud, because certainly fraud has been soaring in financial services, even, I think, check fraud in particular. And I know in your book you mentioned FICO as, you know, one of the tools that this industry uses to help, you know, get in front of fraud. But let's talk about, I mean, well, we can talk about whatever you want to talk about, but we can use FICO as an example. Like, how is fraud getting so out of control, Eric? And how do we use this technology to maybe get a little bit better with it?

Eric Siegel (07:09.071)
Yeah. Okay, great. So, you know, the book covers several case studies. One is FICO for fraud detection. FICO is very well known, obviously, for its credit scores. So it kind of identifies strong debtors by day and fights crime by night. Because the thing that's lesser known, but is at least as big a business for FICO, is they have the leading payment card fraud detection model, which is used for every single transaction of two-thirds of the world's payment cards and 90% of those in the US and the UK. Fraud detection is a well-established value proposition of using machine learning, whether it's for payment card transactions, checks, or any other kind of transaction. So it's very prevalent. Fraud is obviously prevalent, and we're constantly fighting it with these models that say, hey, look at this transaction and everything you know about the transaction. What is the chance that this is not to be authorized, right? That it's a wolf in sheep's clothing, essentially. If so, either block and/or audit that transaction. So that value proposition is really clear, really well established. And it's a really important part of the white hats versus the black hats, basically. This is an ongoing battle between criminals and the above-board world of financial transactions. And it's absolutely critical, but it's not a magic crystal ball, right? You're not going to be able to know for sure, with 100% confidence, which transaction is fraudulent and which one isn't. In fact, fraud is...

The good news is it's relatively infrequent compared to the number of legitimate transactions. In the case, for example, of payment cards, it may be one in a thousand transactions that are fraudulent. Now, that still adds up to a lot of fraud, and it's very, very costly. But how do you get the computer to automatically, on the fly, in real time, determine which of these transactions are likely enough to be fraud that we want to potentially inconvenience a legitimate transaction and the cardholder, the end consumer, whoever's trying to conduct that transaction? That's the numbers game that you're playing. FICO, for payment cards, plays as well as anybody. They have that leading fraud detection model. And I cover in the book, as an example, how they pull the data together from across thousands of banks. Any bank that wants to use this best-of-class fraud detection model, to be the customer and use FICO's model, they also have to contribute to the data. So FICO gets to learn from transactions and data that's accumulated across a whole bunch of organizations, a bunch of banks. Whereas with most machine learning projects, you just use the data in-house. You might augment it with external data, but in terms of transactions and how people respond to marketing or which transaction turns out to be fraudulent, this kind of thing.

Usually it's from your own customer base or prospects, how they interact with your products, and what the outcomes are. Right? There's always some behavior or outcome you're trying to predict for any given machine learning project. So FICO is a great flagship example. Another example I cover in the book is UPS, plus a couple of prominent dot-coms and one from my own client base, one of which is a failure. I kind of lead off the book, after the UPS story, which is an amazing one, with my own story, which is a big failure, and then I go back later with a different success story. But I think the failures are just as important, because they're at least as prevalent. There may be a really good number of successes, and those case studies and success stories are the bread and butter of the conference series I've been running since 2009, which is called Machine Learning Week. But there are probably a lot more failures. So that track record stands to be improved greatly, and it should be.
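The real-time scoring Eric describes (a model assigns each transaction a fraud probability, and a threshold decides the action) can be sketched roughly as follows. This is a toy illustration, not FICO's actual system; the scoring heuristics, field names, and thresholds are invented for the example.

```python
# Toy sketch of real-time fraud scoring: a model assigns each transaction
# a fraud probability, and thresholds decide whether to block, audit, or
# authorize it. The "model" below is a hand-written stand-in, not a real one.

def fraud_probability(transaction: dict) -> float:
    """Hypothetical stand-in for a trained model's score, from 0.0 to 1.0."""
    score = 0.0
    if transaction["amount"] > 5000:                            # unusually large
        score += 0.4
    if transaction["country"] != transaction["home_country"]:   # foreign use
        score += 0.3
    if transaction["hour"] in range(2, 5):                      # small-hours activity
        score += 0.2
    return min(score, 1.0)

def decide(transaction: dict, threshold: float = 0.5) -> str:
    """Block, audit, or authorize based on the predicted fraud probability."""
    p = fraud_probability(transaction)
    if p >= threshold:
        return "block"
    elif p >= threshold / 2:
        return "audit"
    return "authorize"

tx = {"amount": 8000, "country": "RO", "home_country": "US", "hour": 3}
print(decide(tx))  # high score -> "block"
```

The point of the sketch is the shape of the decision, not the scoring rules: every transaction gets a probability, and where you set the threshold is exactly the "numbers game" between stopping fraud and inconveniencing legitimate cardholders.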

Mary (11:24.584)
Yeah, well, it definitely should be. And I wanted to get back to, you know, that pendulum of inconveniencing the consumer by shutting down a transaction when it's not fraud, versus, like, getting in front of fraud.

You know, startups are usually criticized for bringing on lots of fraud when they are growing their user base, if you will. But like, if you were heading a company, how does one work that pendulum? Like, do you really keep it tight at first? How do you think about that, or how would you think about that?

Eric Siegel (11:57.039)
As far as that balance between false positives and false negatives, well, I'm so glad you asked. So this comes down to evaluating how well a predictive model works. And the evaluation piece is really core to where things break down. The breakdown, the reason things don't get to successful deployment, in general has to do with a disconnect between biz and tech, between the technical data scientists, the number crunchers who are operating machine learning,

Mary (12:10.6)
Mm-hmm.

Eric Siegel (12:26.191)
preparing the data and operating machine learning software, and their client, the stakeholder, the person who's running, or in some way in charge of, the operations meant to be improved with these predictions. For example, whether transactions should be conducted at all when requested. So of course, the big question is, how good is the machine learning, right? How well does it predict? And that question is almost never answered in a reasonable way. That's the disconnect.

Mary (12:46.856)
Right. Why, why is that?

Eric Siegel (12:56.015)
Because the data scientists are all trained to focus on technical metrics like precision, recall, even accuracy; these are only technical metrics. These are numbers that tell you simply the pure predictive performance of the model in comparison to a baseline, the relative performance over a baseline like random guessing. So the fact is, they do predict better than guessing. That's generally potentially quite valuable. That's the good news.

Mary (12:59.88)
Mm-hmm.

Eric Siegel (13:22.063)
But that doesn't directly translate to the actual business value. So for any particular use case like fraud detection, what's the business value? The business value is going to be, well, look, every time we allow a fraudulent transaction to go through, we lose, right? Usually the bank, for example, in the case of payment cards, is responsible for the full price of that fraudulent transaction. But there's also a cost with the other mistake, which is when you inconvenience the cardholder with what's called a false positive. You say, hey, this is fraudulent, let's interrupt the transaction, and then you find out later it was actually legitimate. Now, that's generally less costly, but how costly are they?

Well, in the case of payment cards, they're approximately a hundred dollars versus $500. That tends to be the industry norm, but you don't necessarily want to just work with those averages. It depends on the particular transaction. It depends on the region. It depends on a whole bunch of factors. But one way or another, you need to incorporate those pragmatic business factors to translate the model's performance into the potential business win: in the case of fraud detection, the overall savings in comparison, for example, to not doing any fraud detection, based on how you use the model to balance between how aggressive you are stopping fraud versus how lenient you are to avoid inconveniencing cardholders.

The way you turn those knobs depends on those pragmatic factors. So I'll tell you now, I've actually co-founded an early-stage startup, we're almost a year in, and we have a product to do that: to take the performance of the model and actually display it in terms of those business metrics, KPIs like savings and profit, rather than only the standard technical metrics.

So what happens right now, regularly, systematically, repeatedly, routinely, this is almost always what happens, is the data scientist says, hey, stakeholder, the model works really great. We've got an area under the receiver operating characteristic curve of 0.887. Isn't that amazing? Right? And it is amazing, because this thing predicts a lot better than guessing, which is cool. It's really, really, really cool. Computers can learn from data to predict a lot better than guessing, in a way that's quite possibly very valuable. That is to say, they've learned from some historical examples and drawn generalizations that hold in general, that hold in new circumstances that have never before been seen.

So in that sense, it's literally learned something about the world, not just about this particular set of examples. And that's called induction versus deduction, if you want to put it in abstract terms. That's really cool. But the fact that it predicts better than guessing, and that you can put it in terms of this measurement or that measurement, only goes so far in actually speaking in business language, right? So it's lost in translation over and over again.
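The translation Eric describes, from raw model performance to business terms, can be sketched as a threshold sweep using the rough per-error costs he mentions (about $100 per false positive versus $500 per missed fraud). The confusion counts per threshold below are invented for illustration; in practice they would come from evaluating the model on held-out data.

```python
# Toy sketch of turning model performance into a business KPI (savings),
# using the approximate industry costs mentioned in the conversation.
# The counts at each threshold are made-up illustrative numbers.

FALSE_POSITIVE_COST = 100   # legitimate transaction wrongly flagged
MISSED_FRAUD_COST = 500     # fraudulent transaction allowed through

# Hypothetical evaluation results: (threshold, false_positives, missed_frauds)
candidates = [
    (0.3, 900, 40),    # aggressive: many inconvenienced cardholders
    (0.5, 300, 120),
    (0.7, 80, 300),    # lenient: lots of fraud slips through
]

# Baseline for comparison: no fraud detection at all, so all 500
# (hypothetical) frauds in the evaluation set go through.
baseline = 500 * MISSED_FRAUD_COST

for threshold, fps, misses in candidates:
    cost = fps * FALSE_POSITIVE_COST + misses * MISSED_FRAUD_COST
    savings = baseline - cost
    print(f"threshold={threshold}: cost=${cost:,}, savings vs no model=${savings:,}")
```

With these invented numbers the middle threshold wins: the sweep expresses model quality directly as dollars saved, the kind of KPI a stakeholder can act on, rather than as an AUC of 0.887.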

Mary (16:23.88)
One of the things you wrote about was the importance of working backwards. I think you even quoted something about, you know, thinking about the movie script's ending before the beginning, and that's something I can relate to because I write.

Eric Siegel (16:38.319)
Yeah, I quoted a scriptwriter who wrote about writing. It's like, everything's backwards planning. Like in choreography, you've got to figure out where you're going to end up on stage. And in writing Thelma and Louise, you've got to think about the last act before you start writing the first act. So yeah. And the same thing is true with any enterprise project. How's it going to actually be deployed? It's really kind of obvious. And most people know, hey, you have to have a particular use case.

The reason that gets lost with machine learning and AI, probably more egregiously than you would expect, is because everyone's so enamored with the core technology. We're fetishizing it. Like I said, it's so cool to learn from data. It's also so cool to start thinking, hey, what would it mean for a computer to be intelligent? It's cool philosophically. Everyone loves the sci-fi. So next thing we know, we're incorrectly presuming that this amazing core rocket science is valuable. It's not intrinsically valuable, only if you use it, only depending on how you deploy it. It's almost like we're more excited about the rocket science than the launch of the rocket.

Mary (17:46.92)
I know, I mean, I just feel like I relate to this and things I've experienced at different corporations. But I mean, you're pointing to how it is a really cool technology, and certainly you've been in this world for years. I would call it a passion career; it seems that way from an outside perspective. So you've seen it change. But one of the things we do on the show is: that's what you said, but this is what you wrote. I'm direct quoting your book. And I picked this one, and it was:

"Most people think data is boring. The word data is a deal killer at cocktail parties. I know this from personal experience, I have the data.

But data isn't just an arcane bunch of ones and zeros. It's a recording of history, a list of prior events. It encodes the collective experience of an organization from which it is possible to learn analytically how to predict." It is a great quote. Eric, congratulations on writing that.

Eric Siegel (18:36.399)
That's a great quote. I love it. 

That's actually from my first book, on predictive analytics. By the way, is it? No, no, maybe you're right. You're right, you're right. See, I'm confusing my own books. It's all good. I stand corrected. You're right about that.

Mary (18:46.056)
Bodies of work blur and blend. Well, let's unpack that, because one of the things, of course, you're known for is the music video that you put out on predictive analytics, which, I don't know, what year was that, Eric?

Eric Siegel (19:09.711)
That was 2016. Yeah. It changed my life forever. My handle is DrData, and I'm still waiting for people to start calling me DrData. The video went, by today's standards, I'd say, slightly viral. It's three and a half minutes long, at predictthis.org, and it's the best ever genuinely educational music rap video about predictive analytics, which is another word for these

predictive enterprise use cases of machine learning. The problem with the video is that it's a very campy rendition of what it means to be a super ultra nerd and what this does to your social life. This is a big problem because it may distract you from actually listening to the lyrics, which are genuinely educational. Everything you ever wanted to know about predictive analytics, but were afraid to ask.

Mary (19:41.736)
Mm-hmm.

Mary (20:08.008)
Well, OK, I want to unpack that a bit too, but I have to imagine from when you did that to now, I mean, now it's like the cool technology, right? Do you feel that or do you still feel like you're going to shut down a party, like literally, if you're like, let's talk about data? Or do you think people would be like, yeah, let's talk about data?

Eric Siegel (20:25.967)
You know, I still haven't really figured it out. I started programming when I was 10, so that's like the late seventies, and I was ostracized. I think a lot of people think the technology is cool, but there's still a level. And this is a relevant question, the social question, because it speaks directly to that disconnect with data scientists. I could get in a room...

I love it. Like, most data scientists are the best kind of nerd. They're fun nerds, they're funny, they're poignant. That's why I love running the conference series. And we could speak for hours about area under the receiver operating characteristic curve. In fact, just yesterday I listened to a whole podcast episode about it, debating how valuable it is. And by the end of the podcast episode, they were saying, well, don't forget about the business metrics.

Mary (21:22.632)
There's that, unfortunately.

Eric Siegel (21:24.815)
And it's like, you do lose the forest for the trees. And that's sort of the definition of nerd. So it's not just about being socially different. It's about being pragmatically irrelevant, too, at a certain point. Like, the real rocket scientist very well may be more excited about the cool science itself than the actual launch of the rocket.

They might be like, look, we could launch the rocket next week or next century. I don't really care. This is just fun technology, right? But to the rest of the world, it's like, come on, if we don't launch the rocket, we're never going to get to Mars, right? So.

Mary (22:04.296)
Right. No, there's always that "I want to razzle dazzle," which again, you achieved in the music video.

Eric Siegel (22:16.847)
Well, we tried. We had the disco, people yo-yoing and playing with a Rubik's cube while they're dancing in the disco. That's my favorite image.

Mary (22:26.33)
How long did that take to make?

Eric Siegel (22:16.847)
It was a two-day shoot. My friends in LA directed it and edited it and stuff.

Mary (22:26.33)
Okay, well, I like it. So still, you'd have a fan base here. Well, okay, so you mentioned you don't feel like it's going to take over the whole human. And, you know, one thing that's been happening in financial services is they're loving evermore the chatbot deployment, but it's also, you know, bringing on a fear of, am I going to lose my job as a result of this thing? But, you know, what's bigger than that? Like, let's unpack this fear of human versus robot.

And I mean, you really deliver a dose of reality in saying that you don't expect the takeover anytime soon. You know, why is that? I mean, I guess we're pointing to how people are always wanting that fantasy, that big storyline, but, yeah.

Eric Siegel (23:18.639)
Yeah, we love it and we fear it at the same time. I mean, there's a reason the Terminator movies are basically zombie movies, right? In a good way. They may be my favorite zombie movies, and the thing is kind of coming at you unstoppably, maybe sometimes slowly. So it's a love-hate thing. You love to hate it, you love to fear it, because the fear, it's criti-hype. I didn't coin that word.

You're saying this stuff is too dangerous. And that's just another part of the narrative that's saying it's so valuable. Hey, look, if this thing has the potential to cause human extinction, it can probably also do a pretty good job of targeting my marketing. Right? So, you know, you can't get away from that narrative if you believe that it's actually going to become generally human-level. So it's one thing for it to be better than a human at task A, B, C, or D.

It's a very different thing to achieve what is generally called artificial general intelligence, where it's basically capable of anything a human could do, including running a Fortune 500 company, right? Anything you might want to ask a virtual assistant to do in normal human language. And I believe there's no evidence, despite how seemingly human-like and amazing the capabilities that have emerged, especially in more recent years, I do not believe that they represent concrete steps towards that audacious goal of an artificial human. So look, these things are tools. They're under our control. They're potentially very valuable depending on how you use them. But at the same time, because it's so seemingly human-like, it makes for an incredible demo.

I don't mean it's only a demo. I don't mean it's not valuable; it is valuable. Last week I published an article in Forbes about how there are some studies that actually measure the concrete enterprise value, and they're improving marketing. They're speeding up marketing creatives by 30%, not a threefold improvement, 30%, stuff like that.

Many of these projects, of course, aren't measuring at all. Sometimes they measure it and it turns out, unexpectedly, not to be helping at all, but that doesn't mean they can't get it to work better. These are valuable improvements, but that's a far cry from just sort of hiring a computer and installing it the same way you would onboard a human employee, unleashing it to operate autonomously.

And it ultimately comes down to autonomy. To what degree does the thing operate without human intervention? In general, not at all. In fact, the more long-established, older but not old-school predictive applications I've been talking about are more potentially autonomous. For example, fraud detection is going to automatically, on the fly, decide whether to authorize a transaction. Generative AI output has got to always be proofread. You don't know what it's going to do.

It does a really good job of being seemingly human-like, and sometimes of being correct, because it's predicting one word at a time. So there comes prediction again; that's the same underlying core technology. It's not literally a word, it's a token, but it's on that level of detail. But those core language models are not designed or trained to meet higher-order human goals, like being correct. Right? So that's a whole unresolved, open research area. It's not just a product design issue. So it's not autonomous. You have to proofread everything it writes.
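The one-token-at-a-time generation Eric describes can be sketched with a toy next-token table. Real language models learn probabilities over enormous vocabularies; the table and greedy selection here are a deliberately tiny, invented illustration of the mechanism, not of any actual model.

```python
# Toy sketch of next-token generation: look up the likeliest continuation,
# append it, repeat until a stop token. Real LLMs learn these probabilities
# with neural networks; this hand-written table is purely illustrative.

next_token_probs = {
    ("the",): {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def generate(prompt: tuple) -> list:
    """Greedily extend the prompt one token at a time until <end>."""
    tokens = list(prompt)
    while True:
        probs = next_token_probs.get(tuple(tokens), {"<end>": 1.0})
        best = max(probs, key=probs.get)  # greedy: take the likeliest token
        if best == "<end>":
            return tokens
        tokens.append(best)

print(generate(("the",)))  # -> ['the', 'cat', 'sat']
```

Note the point Eric is making: the loop picks the most plausible continuation, not the most correct one, which is why the output always needs proofreading.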

Mary (27:09.512)
Eric, that's a relief. So we're gonna end the conversation on the relief, but I do have one more question for you. But before that, people can get your book on Amazon. You have a conference coming up, yes?

Eric Siegel (27:21.871)
Yeah. Well, Machine Learning Week, depending on when this drops, we're the first week in June in Phoenix, and then, I think it's in the fall, in Germany. The conference is machinelearningweek.com, and the new sister conference is the Generative AI Applications Summit. My book is at bizML. So the business practice, paradigm, framework, playbook that's espoused in my book is called bizML, the business practice for running machine learning projects. So bizML.com is the book.

Mary (27:54.088)
Wonderful. And for the last question, Dr. Data, what's the photo on your lock screen of your iPhone or your whatever phone you have?

Eric Siegel (28:05.103)
I actually updated it pretty recently, which is saying a lot because you only do it like twice a year or something. It's a picture of my wife and our older toddler. So I've got two toddlers, and the toddler's got chocolate ice cream all over his face, which is such a cliche, but I think these things are really cliche for good reason. Yeah.

Mary (28:30.664)
For a very good reason. Well, Eric, thanks so much for being on the show. It's been a pleasure speaking with you today. 

Eric Siegel (28:36.431)
Yeah, likewise. Thank you, Mary.

