What's New In Data
A podcast by Striim (pronounced 'Stream') that covers the latest trends and news in data, cloud computing, data streaming, and analytics.
Harnessing the Power of MLOps for Business Transformation with Andy McMahon
Discover the transformative power of Machine Learning Operations (MLOps) as we sit down with Andy McMahon, the head of MLOps at NatWest Group and author of "Machine Learning Engineering with Python." Andy's transition from the world of theoretical physics to the cutting edge of MLOps has positioned him as a leading voice in the field. This episode promises to shed light on the sometimes-blurry lines between MLOps, data engineering, and data science, illustrating the crucial role of operationalizing machine learning models to make a tangible impact on business infrastructure.
Our conversation with Andy McMahon dives into the concept of 'value left on the table' and how MLOps ensures machine learning models are not just innovative concepts but are also deployed to drive real-world solutions. He emphasizes the importance of initiating MLOps practices early, to manage models and data effectively, steering organizations toward successful operational transformation. Moreover, Andy shares his expertise on evaluating the fit of machine learning for various business challenges, guiding our audience through the landscape of informed decision-making in the world of data and AI.
Looking ahead, Andy offers a peek into the future of MLOps and the integration of advanced technologies like large language models into everyday operations. He stresses the fundamental skills necessary to thrive in the evolving AI landscape, such as software engineering and system design. Additionally, we discuss the collaboration between NatWest Group and AWS, highlighting the pioneering machine learning initiatives detailed in a four-part blog series. This episode is a wellspring of insights for anyone with an interest in leveraging machine learning, from the banking industry to broader business applications, making it indispensable listening for forward-thinking professionals and enthusiasts alike.
What's New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. Each episode features industry practitioners discussing the latest trends, common patterns in real-world data work, and analytics success stories.
Hi everyone, thank you for tuning into today's episode of What's New in Data. We have some really exciting topics here, and I'm excited about our guest as well. We have Andy McMahon, author of Machine Learning Engineering with Python and head of MLOps at NatWest Group. Andy, how are you doing today?

I'm great, John. I'm really excited to be here. I love the podcast, so it's a real kick for me coming on and talking to you today. Glad to be here.

Yeah, the pleasure's all mine. I had the pleasure of being pointed to your book by Abi Aryan. She recommended it, and I was able to get started with it over the Christmas holiday. I just feel like I found a treasure trove of amazing insights on getting started with machine learning engineering: the concepts, the principles, and then diving into the weeds of what it really means to launch a production machine learning engineering pipeline within a company. And it's so accessible, especially for those who know Python. It's a very readable book for software engineers and data engineers. So I'm really excited to talk to you about it today. But with all that being said, first, it'd be great if you told the listeners a bit about yourself, Andy.

Yeah, and thanks for the kind words on the book, John. So my background way back was in physics. I'm from Scotland, as you can probably tell from my accent, and I studied theoretical physics here at the University of Glasgow. I then went to London for a few years and did a master's and PhD at Imperial College, a really good science-based university there. And I came to the realisation, partway through my PhD, that I did not want to be an academic physicist, and that I would absolutely hate it, after thinking for so many years that it was the dream. This was around the time of the Harvard Business Review article calling data scientist the sexiest job of the 21st century, so it was a really good time to pivot into data. I went on GitHub, started teaching myself machine learning, and started learning how to apply some of the skills I'd learned in physics to the wider world of data. Then I came back home and got some of my initial jobs. My first job was at a small startup back home, where I was the first data scientist surrounded by 13 or so dyed-in-the-wool, hardcore software engineers. I remember thinking I was going to play with algorithms and be the next brainiac working on neural networks, et cetera, and they were just asking: where's the API? How are you deploying this? So I had to learn very quickly what engineering meant, and that really set the trajectory for my career. I've done a few different roles since and ended up as head of MLOps at NatWest Group, where I own the MLOps strategy, run our MLOps Center of Excellence, and handle lots of other bits and pieces. But yeah, that's me in a nutshell.

Amazing. And again, super excited to have you on this episode to dive into MLOps. But maybe first, for the listeners, would you mind defining MLOps, beyond the long-form version, machine learning operations?

Yeah. So the key word there is operations.
Machine learning and data science are terms that have been in the mainstream for a few years now. But after a big rush to hire lots of PhDs into organizations, get the first data lake spun up, and get data in front of these data scientists, it became very clear that running a machine learning algorithm on data was only part of the solution that businesses and organizations were really looking for. The question then was: how do you take that proof of concept that someone can come up with in a Jupyter notebook or a Python script, and actually turn it into a software product or service that can drive value and impact in a variety of different ways? That's really the question of operationalizing this, and MLOps was pretty much born from it.

It borrows heavily from things that already exist. In the software engineering world, we've had development operations, or DevOps, for many years. The idea there was to break down the silos between a development team that would build an app and the operations team that would run it day to day, monitor it, and fix it if it was broken; the idea was to marry those two worlds. MLOps is really just the natural extension of that into the world of machine learning. So it encompasses a lot of the techniques from DevOps, like continuous integration and continuous deployment, testing, platforms, and understanding the route to live of a software application, but it also has very ML-specific pieces. How do you take your training methods and turn them into a pipeline? How do you take your inference function or idea and make it an API, make it a solution? And then how do you monitor these things? It's a bit different from traditional software monitoring, where you're maybe monitoring latency. Now you have to think about data drift, concept drift, covariate drift, all of these terms that basically mean: is my machine learning model doing what I expect as part of this software solution? So MLOps is a holistic set of practices, techniques, and tools that come together to make that a reality. And the key thing for me, which some people are surprised to hear me say, is that I often don't think it's a technology problem. Technology is a big piece of it, but it's really a process problem, a pattern problem, and a people problem: how do you organize things to get something from idea into production? That's, for me, what MLOps means.

Absolutely. And how is MLOps different from data engineering and data science?

Really good question. Actually, because this question comes up so much, it's one of the first things I answer in the book you mentioned; I go through the different roles and what they do. For me, expanding out your question a bit, the key roles in any modern data team are these. There's a data engineer, like you said, whose job is really to build robust, scalable pipelines to take data from a source, from location A, and send it to location B, and in the process they often have to transform it. We've all heard the term ETL, extract, transform, load; you now also get extract, load, transform, and variants thereof. But fundamentally, you're taking data from A to B and transforming and molding it into something that's useful for downstream use.
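To make that pattern concrete, here is a minimal sketch of an extract-transform-load job in Python. The source table, column names, and target path are illustrative assumptions, not taken from any specific pipeline discussed here:

```python
# A minimal ETL sketch: pull data from location A, reshape it, and land it
# at location B for downstream use. Names here are purely illustrative.
import pandas as pd
from sqlalchemy import create_engine

def run_etl(source_uri: str, target_path: str) -> None:
    engine = create_engine(source_uri)

    # Extract: pull raw records from the source database (location A).
    raw = pd.read_sql("SELECT * FROM transactions", engine)

    # Transform: clean and mold the data into a useful downstream shape.
    raw["created_at"] = pd.to_datetime(raw["created_at"])
    raw["amount"] = raw["amount"].astype(float)
    daily = (
        raw.dropna(subset=["customer_id"])
           .groupby(["customer_id", pd.Grouper(key="created_at", freq="D")])
           .agg(total_spend=("amount", "sum"), n_txns=("amount", "count"))
           .reset_index()
    )

    # Load: write the curated table to location B (e.g., blob storage).
    daily.to_parquet(target_path, index=False)
```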
A data scientist is a person whose focus is more on using the data the data engineer has produced, for them or for the application they're building, in order to produce a machine learning model or some insights. They're trying to use that data, feed it through different algorithms and analysis techniques, and come up with something that can drive impact or value in order to answer a business or organizational question. You then get into the ML engineer. An ML engineer, for me, is the person who can take what the data scientist has come up with and help translate it into a pipeline that can then become software that actually runs, is orchestrated, is monitored, is tested, and has good control flow built around it. And then you get this newer term, the MLOps engineer, which is sometimes interchangeable with ML engineer, but to me often means the people building the platforms that help the ML engineers move faster. Those are the main roles I see. There are lots of other roles around delivery, scrum master, product owner, et cetera, which are equally important, but those are the main technical ones. And what I think is really powerful is when you have those people all working together as part of a cross-functional team. It definitely used to be that a group of data scientists would build a model and hand it over to someone else, the ML engineer say, who would then go build and shape it. I think it's ideal if you have a data scientist, a data engineer, and an ML engineer all working together from the start, because then they can start thinking idea-to-production very early on, and that's where it gets really powerful. But those are the main roles I see in any modern data project.

Yeah, and I love how you dived into how it's different from, say, a data scientist or data analyst doing ad hoc work, some forensic analysis on data. In MLOps, you're really operationalizing this and making it, like you said, software: a continuous pipeline. And when I say continuous, it can be event-driven or batch-based; you cover how to deploy with both of those patterns in your book. It's definitely a powerful concept, and it can be an absolute game changer for organizations. I'd love to get your perspective on why MLOps is important.

So this is something I evangelize on a lot; I go to bed thinking about this all the time. Why is it so important? The reason, I think, is that there is so much value left on the table when someone has a good proof of concept and it doesn't go further than that. If you have Jupyter notebooks or Google Colab notebooks lying around in your organization or your team, some of them have amazing solutions just waiting to be unlocked, but they're very much at the beginning of the journey. They've not gone through the full life cycle and really come to fruition. And if they're buried in those notebooks, sitting on those drives, sitting on your laptop, they're not out there in the world making customers' lives better. They're not out there catching fraud. They're not out there
generally making shopping experiences, or your day-to-day experiences, easier and more fun. And I think that's a real shame. So the reason MLOps is so important is because it takes these amazing capabilities we now have to model data, analyze it, forecast, and predict, and builds them into real solutions that are just part of our daily lives now. And ML is literally everywhere now. We're such a technology-driven society, and so many of the applications and solutions we use have machine learning baked into them. You use Google Maps to find where you're going; that's got ML baked into it. You use search; you use ChatGPT, et cetera, and we can get on to that later. It's just such an important part of our modern way of life. So if you don't have MLOps, you're going to leave a lot of that value locked away as latent potential. It's about unlocking that potential and making things real, which really excites me, because I'm very much driven by that idea of impact, of outcomes. How do you really make a difference in the world through your technology? I think MLOps is such an important part of that now.

Absolutely. And the other thing with MLOps is that it can be transformational both organizationally and from a technical perspective. However, not all companies are ready to embark on MLOps, whether it's staffing or their data ecosystem not being in place. When should companies start investing in MLOps and machine learning engineers so they can get to the type of practice you're talking about?

Yeah, the way I've often phrased this before is that there are stages of evolution on the journey you're mentioning. If you don't have any data, you can't do data science, so often it's about getting that data together, curating it, and storing it somewhere. Initially that could be just a SQL database; it could then become blob storage on AWS S3 or whatever your tool of choice is; it then becomes more sophisticated things, data lakes, data lakehouses, et cetera. That's the first port of call on the data maturity curve, if you like. Then you start using it to drive value, and I think if you're using data to drive value and you're not doing MLOps, you're just doing MLOps badly. That's quite an aggressive take, and it might rub people up the wrong way, but I think it's an important one. Because if you're using that data to drive insights, even in an ad hoc way, you have to be asking yourself: how are you managing those models? How are you managing the lineage of the data? How are you understanding how things are changing through time, if your data is being updated through time? All of those are MLOps questions, and if you're not answering them, MLOps is still happening, it's just not happening very well. So I would say you have to invest in at least that way of thinking as early as possible. Now, not everyone is a huge financial organization like NatWest, one of the biggest banks in the UK. We have 19 million customers, we have 500 data scientists and engineers; we're at a big scale.
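One of those MLOps questions, how things are changing through time, can be made concrete very cheaply. Here is a minimal sketch of a data drift check, assuming numeric feature arrays; the significance threshold is an illustrative choice, not a recommendation from the episode:

```python
# Compare a feature's training distribution against live data with a
# two-sample Kolmogorov-Smirnov test; a small p-value suggests drift.
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_values: np.ndarray, live_values: np.ndarray,
            p_threshold: float = 0.01) -> bool:
    """Return True if the live feature distribution looks drifted."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold

# Usage: run this per feature on a schedule and alert when it fires.
rng = np.random.default_rng(42)
print(drifted(rng.normal(0, 1, 5000), rng.normal(0.5, 1, 5000)))  # True
```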
If you're a startup and you're two people, you can't go and hire 10 ML engineers, but you can start thinking about those processes and techniques very early on. Even if you're still in the notebooking stage, doing something like spinning up an MLflow tracking server, an open source tool that doesn't cost you anything to run locally, and then doing experiment tracking and tagging your models with different stages of development and some metadata (a sketch follows below) means you're already starting to do MLOps, and that's zero cost up front. But I guarantee it will help, because it introduces hygiene to your practice; it introduces consistency and standardization. Then as you grow, and maybe your demands become more intense or you want to scale a lot more, you can start investing in more infrastructure and more ML engineers, or people with that particular focus. But if you're doing anything with data, you should start thinking about the practices very early on. I would say that, right, it's in my job title, so I'm a real evangelist for it, but I do think it will pay benefits to anyone in that space.
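The zero-cost starting point Andy describes might look something like this minimal sketch; the experiment name, model, and tag values are illustrative, and by default MLflow logs to a local ./mlruns directory, so nothing needs to be hosted:

```python
# Local, zero-cost experiment tracking with MLflow: log parameters, a
# metric, a development-stage tag, and the model artifact itself.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=3, noise=0.1, random_state=0)

mlflow.set_experiment("churn-poc")  # hypothetical experiment name
with mlflow.start_run():
    model = LinearRegression().fit(X, y)
    mlflow.log_param("model_type", "linear_regression")
    mlflow.log_metric("r2", model.score(X, y))
    mlflow.set_tag("stage", "experimental")  # hand-rolled stage metadata
    mlflow.sklearn.log_model(model, "model")
```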
Absolutely. You know, I like to think of this in terms of the phrase "earned secrets". It's a term that Ben Horowitz, the well-known technology investor, uses for people who graciously share insights from years and years of experience. And there's one chapter in your book specifically about real-world ML engineering, where you have this really nice table summarizing the requirements and whether machine learning is appropriate for them. That's just one of those things I find very practical; it helps organizations, and individual data engineers and data scientists, understand the class of problem they're solving with machine learning and whether it applies to the challenges at their company. So again, I really recommend your book as a great resource, not only on the engineering side but also for the practical, real-world applications. Thank you for sharing your, quote unquote, earned secrets so graciously.

I really like that. The other one, obviously, I think it was Jeff Bezos or someone, is that you pay the plumber with 15 years' experience the pricey quote for the 10 seconds of work because he knows how to do it in 10 seconds. I think what you've highlighted there is something I was really passionate about putting into the book. I'm not a theoretical guy by nature, even though I did theoretical physics at university. Interestingly, I really like practical ways of thinking through, applying, and solving the problem, and that's how I think through these processes and problems. Sometimes the best outcome is not to do machine learning, and people are sometimes surprised when I say that too. Sometimes it's a SQL stored procedure, or a heuristic, or a lookup table, something like that. And you come to that conclusion still using your machine learning knowledge, because you're able to diagnose the business or organizational problem, work it through, and really understand whether it makes sense. You use the Toyota five whys: why are we doing this? Why does it make sense? Why would I do this? And sometimes you conclude that, no, it's not quite that. Or there's what I call the minimum viable model. There's the minimum viable product in software, but sometimes it's about the minimum viable model: I don't need a massive deep neural network to solve this basic regression problem; I can just do a linear regression or a polynomial regression. Having that understanding is really powerful for organizations, because a lot of people get a hammer and see a nail everywhere, and it's important to understand there are other tools and ways of doing this. So I'm glad you highlighted that, because putting those quite practical examples into the book was something I was really passionate about.

Absolutely. And we've already spoken a bit about your chapter on machine learning engineering in the real world. I won't give too many spoilers, but I'd love to hear from you generally: when should you be doing machine learning?

Yeah. So I think machine learning is ideal when those initial heuristics are maybe not cutting it anymore, or they're just not fit for purpose. The scenarios where that's the case are maybe where the rules you would have to write down are far too complex. There's this classic diagram a lot of your listeners will have seen: traditional software is you have some data, you write some rules, and you get an output, whereas machine learning is you put in data, you learn the rules, and you output a model that embodies the rules. Once writing down those rules would become too complex, you're probably in the realm of machine learning, or at least statistical modeling. But when you think you're there, you have to ask yourself: do I have enough data for this? Am I going to keep getting fresh data? Because machine learning models are like plants, or Tamagotchis; you can't just let them rot on the vine. You have to keep feeding them with data, keep updating them, keep retraining them. So you have to ask whether you'll have the throughput to justify that, and you boil it down to whether the minimum viable model in this case is actually within the machine learning realm. There are so many different examples. Say you want to forecast a time series because you want to understand how your revenue will shape up through time; there's no real way to write down a rule for that. The very simplest forecast is that the previous point is the next point. But if you do that and you find the accuracy is terrible, then you're probably in the realm of: let's try something more sophisticated.
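That "minimum viable model" test is easy to run before reaching for anything fancier. Here is a small sketch of scoring the naive forecast Andy mentions, where the next point is predicted to equal the previous one; the revenue numbers are made up for illustration:

```python
# Score the naive (persistence) forecast: next point = previous point.
import numpy as np

def naive_forecast_mae(series: np.ndarray) -> float:
    """Mean absolute error of predicting each point with the one before it."""
    predictions = series[:-1]   # yesterday's value...
    actuals = series[1:]        # ...used as today's forecast
    return float(np.mean(np.abs(actuals - predictions)))

# Usage: if this baseline error is already acceptable, you may not need ML;
# if it is terrible, that argues for something more sophisticated.
revenue = np.array([100.0, 104.0, 103.0, 110.0, 250.0, 115.0])
print(naive_forecast_mae(revenue))
```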
Similarly, if you want to classify something and you can't just do a simple SQL query (is income above this, is income below that, therefore classify this occupation as high value or low value, et cetera), if you can't just write that down, you're probably in the realm of a tree-based model or something else. So it's very much case by case, but where you need to be honest with yourself is that machine learning might not be the solution. Or it might be the solution, but you don't have the infrastructure and the data in place. And also, be very clear on the business requirements. I put that into those examples you mentioned, John, in the book: there has to be a clear case from the requirements side. It can't just be "let's use ML because it's cool". I think this is the danger we're back in with the whole gen AI wave, right? Everyone's racing to deploy a large language model because it's a large language model, and it's very reminiscent of a few years ago, when everyone wanted to deploy their first ML model because it was an ML model. But as long as you're very clear that, no, actually, this will drive value, this will drive impact in the way I want it to, then it justifies going the machine learning route.

And that ties in well to maturing machine learning ops in a large organization: proving that value. It would be great if we could dive into the details of what it takes to get to the point where you've matured MLOps to where, yes, you've operationalized it, you've turned it into software, it's cross-functional, but it's also deriving value for your organization.

Yeah. So in every organization I've been in, from the small through to the quite large, it's always about what I call bootstrapping, in the widest, organizational sense. You have to prove out that first case. You have to do that first iteration: take the notebook, translate it into a pipeline or pipelines, orchestrate it, run it, get the value out, and then calculate the value. You have to be brutally honest. There's no point going out into your organization or to your investors saying this is the best model ever, it's amazing, it's going to generate 10 million in value, just because. You have to be very honest about the metric you're measuring and just prove out that value. Once you've done that, it's a case of building on it. Your first iteration might be very simple. It may be very simple scaffolding; you may only use open source tooling; it may run on your laptop pulling data from a database somewhere, or an open dataset, just to prove out the concept. But once you do that, the key is: what's the next iteration? Maybe it's a bit more hydration of your data lake, making sure you enrich that data, trying a few different models. Maybe it's then about introducing more testing: when do I introduce unit testing and integration testing? When do I start thinking about the orchestration layer: am I going to use Airflow, or a cloud-hosted solution? Am I going to use, as you mentioned before, event-based triggering?
So am I using a pub/sub model? And you just keep building and building. When do I do CI/CD and bring in continuous integration and continuous deployment? What happens is you bootstrap it that way, and every time you're justifying your existence. I think that's really important. If you're going to ask for more investment, as happens in large organizations, if you have to say "give me budget to do X", you have to really prove there's value from it. Just having that value question in your mind is key, because it means that as you scale, and you ask for more resource and want more cool tools and more people, you've shown it's worth the investment. One of my favorite things to do is try to understand the ROI, the return on investment, of a machine learning project, because when you see it, it can be really, really high, and that's exciting to see, but it also justifies future investment in that bootstrapping and scaling. And the last point I'll mention on this, really, is that you shouldn't view it as something to be afraid of. It's very easy to be scared by a lot of new concepts and topics, and it's maybe quite scary to think you're going from your first model through to serving 19 million customers daily or hourly or per second, but it is an iterative process. The key is to embrace it and always understand that it is driving more value and it is important. If you do those things, you're generally fine; you won't stray too far from the path. That's at the highest level; we could go into technical details, but if you keep those strategies in mind, you'll be in a good place.

Excellent. And you brushed over this a little, but I'd love to get into the technical detail, just because I like nerding out over this specific topic. How do you know whether you want a batch-based ML pipeline or an event-driven streaming pipeline?

So the first question you ask is: what data am I using for this solution, and how does it come in? Many, many processes are just batch by nature. They're pulls from an existing database: there's a stored procedure, it runs a big query with some filters, extracts some data, and dumps it somewhere. Or it's something like an end-of-day business process where stuff gets pushed out in a big batch. So in the first instance it's often driven by the data frequency. The next layer on top of that might be the statistics you require for your approach. You might have things coming in event by event, but you want to run a certain type of statistical model, or a clustering algorithm, that requires big statistics, so for the use case, grouping that up to daily or a higher-level aggregation actually makes sense. That's an intermediate case where it's not purely the data frequency, but the frequency required by your modeling approach.
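The batch shape Andy describes, a scheduled job that pulls a filtered extract, scores it, and dumps the output somewhere, might look like this minimal sketch. The query, paths, and model file are illustrative assumptions, and scheduling (cron, Airflow, or similar) is assumed to live outside this script:

```python
# An end-of-day batch scoring job: query, score, dump for downstream use.
import joblib
import pandas as pd
from sqlalchemy import create_engine

def run_nightly_scoring(db_uri: str, model_path: str, out_path: str) -> None:
    engine = create_engine(db_uri)
    # Extract the day's batch with a filtered query (illustrative SQL).
    batch = pd.read_sql(
        "SELECT customer_id, feature_a, feature_b "
        "FROM features WHERE as_of_date = CURRENT_DATE",
        engine,
    )
    # Score with a model trained earlier by the training pipeline.
    model = joblib.load(model_path)
    batch["score"] = model.predict(batch[["feature_a", "feature_b"]])
    # Dump predictions somewhere for dashboards or downstream processes.
    batch.to_parquet(out_path, index=False)
```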
When you want to go to streaming and event-driven, I think it relates to that as well. Can the modeling approach you're talking about work on a point-by-point basis? Many ML algorithms can, but as I mentioned, some, clustering for example, need the statistics to cluster on; you need many data points. Once you've stored your clustering model you can score point by point, but maybe you actually do need to see that distribution. If you can get the data in an event-based format, a pub/sub model, for example using Kafka topics, so you're subscribing to a Kafka topic and getting the data dripping in as and when it arrives, that's really good, because you're introducing asynchronous workflows. You're starting to introduce these lighter pieces of functionality and moving into microservice architectures, which have been proven out to be quite valuable. Although there was that recent piece of work, I think last year, I can't remember who it was, where they basically said they'd taken out their microservice architecture, replaced it with a giant monolith, and seen massive savings. But in a lot of cases, microservices still rule. So it's a mix of those questions. And when you get to that point, it's really powerful, because, especially if you're on the cloud, you start leveraging really scalable tooling: Lambda functions if you're on AWS, and I think on Azure it's Azure Functions, et cetera. You can start writing these very small pieces of your solution that each have one very clear job, which is a really good embodiment of some of the software engineering principles I write about in the book. It's only doing one job, it's very clear what it's doing, and it's completely decomposable into just that piece. That's quite powerful, because it means it's easier to maintain. If you have a Lambda function, or a little function, or a REST API, and it's got one job, which is "get this data vector in, run an inference on it, and spit out the answer", it's just going to be much easier to maintain. Whereas a big monolith was traditionally quite hard to maintain, although there are ways around that.
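That "one job" function might look like the following sketch. It's shaped like an AWS Lambda handler, but the event format, model file, and loading approach are assumptions for illustration, not a prescribed pattern from the episode:

```python
# One job: take a data vector in, run an inference, spit the answer out.
import json
import joblib

# Loaded once at module import so warm invocations reuse the model.
model = joblib.load("model.joblib")  # hypothetical pre-trained artifact

def handler(event, context):
    # Assumed event shape: {"vector": [0.1, 2.3, ...]}
    vector = event["vector"]
    prediction = model.predict([vector])[0]
    return {"statusCode": 200,
            "body": json.dumps({"prediction": float(prediction)})}
```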
And then the final question is really: what's the downstream consumption model? You might have a requirement, say something to do with transactions, classifying retail transactions, or taking a banking transaction and checking it for fraud, where you probably want it to be event-driven, because latency is a major requirement. You want to get down to sub-second, millisecond latency, which means you by nature have to be event-driven and very fast, because there's no use me finding out tomorrow that my bank card has been stolen; I want to find out pretty quickly, right? So it's driven by that requirement as well, and you can often play around with those other levers I mentioned and try to make it work. But for some use cases it's to populate a dashboard: a decision maker wants a dashboard where they can drill down into different data, and fed into that are machine learning inferences, some classifications, some forecasts, dotted around the data. But they only look at it once a day, in the morning, to make some decisions. So it's driven by that user requirement too. If you think about all of those and square that circle, you can decide which route to go down.

Absolutely. And your book also has some great explanations showing the differences between the two patterns. One thing from my own experience: Striim is a data stream processing product, and when we helped a customer deploy a support vector machine to detect abnormal network flows, the other thing we noticed was the nature of the incoming data: whether it's event-driven, incremental, transactional, for instance, even when you don't have hard downstream latency requirements per se. Because, like you said, most of the data will end up in a dashboard where someone might look at it periodically, maybe once a day or once an hour, or only if something goes wrong, but you want to maintain the integrity of the data in its event-driven fashion. That's one of the other things I've seen come into play, where you have to work with the real-world operators who give you that kind of context. They were telling me: hey, it's not that we need a millisecond live-feed dashboard; it's more that our data is coming in incrementally changing, and if we were to batch it, it would break that pattern and the true state of the upstream data. So there are all these fun factors that come into play with these pipelines, whether it's data engineering or machine learning ops. And now we're getting into AI, with generative AI pipelines, and every cloud service provider is rolling out AI tooling products. You know companies are going to start investing there and having their data engineers and MLOps engineers take a look. So I'd love to get your perspective on how generative AI relates to the MLOps you've been talking about.

Yeah. So we're definitely in a phase of evolution across the entire industry; you'll be feeling that in your company, and we're feeling it, as is the average Joe in the street. I found it really interesting that when ChatGPT came out, it was the first time members of my family started to understand what I did as a job. I was like, you see that thing you're playing with? That's it, really. Oh, that's weird. Everyone's in learning mode and everything's changing.
So when it comes to the ops question, what's important is that we don't treat it as such a new paradigm that we throw everything out. I think it's additive, in the same way that MLOps adds to DevOps: you wouldn't throw away DevOps to do MLOps; it's baked in there, with extra bits on top. That's what's starting to happen with the whole large language model question. In fact, the more you work with it, the more you recognize it's just machine learning. It's a different scale, it has different capabilities, and there are unique questions: how do I monitor and evaluate this? What do I do about these embedding vectors? I have a different stack. And on the point you mentioned about latency, there are different tools and techniques you need to make the model actually perform quickly, and new infrastructure, because you need GPUs. So there are uniquely different questions, but I do think it's additive. The fundamental principles remain: how are you managing and versioning the model, or how you interact with it? The additional layer now is prompts. Rather than just the features that feed into the model, what are these natural language prompts, and which ones work really well? Am I creating a prompt bank, a database of my best prompts? There's this whole prompt engineering skill developing, but to me, you're still just developing features to feed into a model, so you should track them and understand which ones give the right performance. Then you have a model that spits out an answer: how do you determine whether the answer is what you want? You have to develop the right metrics. So although it's new, you still have metrics. We have ROUGE, we have BLEU, we have lots of metrics from the natural language processing world we can apply and adapt to our different use cases, sentiment analysis, et cetera. We can start applying those and say: right, that's a metric. If I've got a metric, I can track it. If I can track it, that's monitoring. So I'm still doing monitoring.
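The prompt-bank idea can be sketched very simply: treat prompts like features, log each one alongside the response and an evaluation score, and keep the best performers. The schema and scoring are illustrative assumptions; in practice the score might come from ROUGE, BLEU, sentiment, or a task-specific metric:

```python
# A toy prompt bank: log prompt templates with responses and scores,
# then query the top performers for reuse.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("prompt_bank.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS prompts
       (ts TEXT, template TEXT, response TEXT, score REAL)"""
)

def log_prompt(template: str, response: str, score: float) -> None:
    conn.execute(
        "INSERT INTO prompts VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), template, response, score),
    )
    conn.commit()

def best_prompts(n: int = 5) -> list[tuple]:
    """Top-scoring templates on average: the reusable 'prompt bank'."""
    return conn.execute(
        "SELECT template, AVG(score) AS avg_score FROM prompts "
        "GROUP BY template ORDER BY avg_score DESC LIMIT ?", (n,)
    ).fetchall()
```

Once prompts have scores, they're a metric; once they're a metric, they can be tracked and monitored like anything else in the stack, which is exactly the continuity Andy is pointing at.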
Where I think these foundation models are a bit different is that we've now entered a paradigm where you're not always building the model. A lot of the time you're leveraging a vendor-supplied model, and you mentioned all the clouds rolling these out: AWS has Bedrock, and lots of other providers have solutions where you can access these models from different vendors. I think that's quite different. We've entered a new phase of model sovereignty: the model doesn't just belong to the developer; it's out there in the wild, and it's something you're interacting with and paying for. You're paying for the use of the model, and we've never really had that before. That brings an interesting new set of ops challenges. How are we going to catch drift in the APIs we're using, and ensure we can roll back to previous versions of those APIs? How do we manage the risk of the dependency? If OpenAI has an outage, not likely, but say it did, do I have a rollback in place? And does that mean I've invested in building my own model, which is going to be so poor compared to GPT-4 that, you know, is that enough of a rollback? Is it a good enough mechanism? So there are some of these challenges I'm really excited to work out. And I think the emphasis will obviously be less on building your own models and more on how you build systems that can consume these models, but do it in a robust fashion, where you're still monitoring, you're still confident it's solving the business problem, but you maybe have different techniques and ways to monitor it. So that's a few of the things I'm thinking about. It's just a super interesting time, and I think we're definitely seeing an evolution of MLOps.

Yeah. And the evolution in this industry is happening very fast, whether it's evolution or revolution, but I'm 100 percent in agreement with you. Even though the impact of generative AI is transformational and revolutionary, and you can use all those buzzwords for it, the actual technology, in terms of adoption within these enterprises, is incremental. It really is. When you actually look at some of the products rolling out, like you said, you're not going to throw away everything you've been doing and stand up a fresh gen AI architecture. It's going to be, like I said, an incremental add-on to what you have today. Even Postgres and MongoDB and the like are adding vector extensions to their products, alongside the sweet spots for some of the dedicated vector databases, like Pinecone, and Rockset has some great capabilities there as well. It's going to be an exciting time, for sure. I'd love to get your take on your vision for what happens in this space over the next five years.

Yeah, big, big question. It feels like it's been five years since ChatGPT came out, but it's only one and a bit; so much is happening. I think some things will go slower than anticipated. There's a lot of chat right now, the usual hype cycle: AI will take everyone's job, the economy is going to completely transform, et cetera. I view those with a healthy dose of optimism, but skepticism too. Things will change, but like you said, for a lot of organizations and a lot of use cases, things are actually incremental, and that means there's going to be quite a lot of stability. There will be disruption in different areas; we're already seeing it. But it's not going to be chaos. I think we can do this in quite a controlled way, which actually gives me a lot of optimism. I see it as a really exciting new capability, like you said, that you can add on to what you have. I also think we will continue to see this democratization of the ideas behind AI, which is, I think, an inherently good thing.
Before this explosion of interest in generative AI, there was a lot of excitement about deep learning and machine learning, but your average person on the street wasn't talking about it. You could maybe sit down, have a conversation, and get them on board, but that was few and far between. Now almost everyone has exposure: if you have an internet connection, you've probably played with ChatGPT. So people have had this exposure to AI, they're starting to ask questions, and they're educating themselves in a way they didn't have the impetus to before. That's very powerful, because it means we're going to get customers who are more empowered. Even your typical customers, using your mobile apps, your retail customers, will be asking much more informed questions about AI: am I talking to an AI right now? How are you treating my data? Et cetera. So there's a real empowerment of people as this knowledge filters through. In terms of pure technologies, I think the LLM stack itself is going to evolve quite rapidly. I'm already seeing articles saying that RAG, retrieval-augmented generation, is going out the window, and it's only just arrived on the scene. I don't necessarily think that's true, but there is going to be a lot of evolution, a lot of discussion, and a lot of new technologies coming out. The hard thing for teams and professionals to navigate is what's buzzword bingo, what's fluff, what's PowerPoint and vaporware, versus what's actually a concrete solution you need. I think we're going to need vector stores, or vector extensions of data stores. We're still going to need data lakes, data sourcing platforms, and capabilities there. We're still going to need lots of monitoring. But there will be evolution in how all those things stick together. Even things like the adoption of LangChain and LlamaIndex might change in a couple of years; there might be a new competitor. But the fundamental idea of orchestrating workflows with large language models will remain. The real winners in this race will be the ones who can take themselves out of the weeds and abstract some of that away. There's a great diagram from a16z where they map out the LLM stack, and it's great because they say: look, you have your vector store, your prompt playground, your orchestration layer, your call-outs to vendor-supplied models, your call-outs to your own models. Those high-level boxes, I think, are going to remain; what will change is the specific technologies within them. And as long as people are comfortable with that and can build processes and ways of working that adapt to it, they're going to reap the benefits. So those are the main things I see.
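To make the vector-store box in that stack diagram concrete, here is a minimal sketch of nearest-neighbor retrieval by cosine similarity. The embeddings here are random stand-ins; in a real system they would come from whatever embedding model or vendor API you use, and the retrieved chunks would then be stitched into the prompt by the orchestration layer:

```python
# Toy vector retrieval: rank stored document embeddings against a query
# embedding by cosine similarity and return the top-k indices.
import numpy as np

def cosine_top_k(query_vec: np.ndarray, doc_matrix: np.ndarray, k: int = 3):
    """Indices of the k documents most similar to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]

# Usage with pretend 384-dimensional embeddings.
docs = np.random.default_rng(0).normal(size=(100, 384))
query = np.random.default_rng(1).normal(size=384)
print(cosine_top_k(query, docs))
```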
I think agents as well. The final point I'll mention is the whole agent question, which I think is really exciting. Can you build agents that are quite autonomous and can complete quite complex tasks? Something I'm really excited to dive into is how you build an operational process around that. How do you monitor an agent? So I'm looking forward to having that security nightmare land on my desk.

There you go. Well, all in the realm of duty for a head of MLOps, so that's very good. Where should people focus on developing their skills for the future?

Yeah, I think the fundamentals are still going to be key. As I said before, there's a lot of chat along the lines of: we have copilots now, does anyone really need to learn to code? I think the need to learn to code is even more incumbent on people. What will happen is like the story about the introduction of compilers. I don't know if you know this, but people used to write out every single step the code had to do, in machine-level code. Then compilers were introduced, and people thought that was it, there wouldn't be any software engineers anymore, because you can compile down from a higher-level language and you don't need as many people. But it made software cheaper to produce, so there were more software engineers, and I think that's going to happen again. More and more people need to upskill in software engineering. If you're already in the space, you need to be playing with LLMs, generative AI, and foundation models, but you still need to be thinking about those fundamentals. What does good system design look like? What are good architecture principles? What does it mean to test an application, from unit testing to integration testing, regression testing, smoke testing, et cetera? How are you thinking about building something that's robust? How do you use the cloud, and what tools are available? How do you scale? And how do you build processes around these things to make sure they go from idea to production, and then post-production you can monitor and look after them? So, in a nutshell, everything in my book still stands and will stand for a long time; just the fine details will change. I think people might easily get swept up in "I don't need to learn to program, I don't need to learn to do X, Y, Z", but I think you should develop those skills. And more generally, what makes people in technology 10x at what they do is the classic things: developing good communication skills, developing a good understanding of the business, and really having a passion for what you do. If people do those things, they'll be in good stead for the future.
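As one small example of the testing fundamentals Andy lists, here is a unit test for a single pipeline step, runnable with pytest; the function under test is illustrative, not from the book:

```python
# A unit test for one small, well-defined pipeline step.
def scale_features(values: list[float]) -> list[float]:
    """Min-max scale a feature vector into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def test_scale_features_bounds():
    scaled = scale_features([2.0, 4.0, 6.0])
    assert min(scaled) == 0.0 and max(scaled) == 1.0

def test_scale_features_constant_input():
    # The edge case a smoke test alone would likely miss.
    assert scale_features([5.0, 5.0]) == [0.0, 0.0]
```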
Absolutely. And I can't recommend your book enough. Even though I'm not day to day building machine learning pipelines or operationalizing them, I felt like I learned so much, and it is very readable. It's a very well-written book, with great conceptual diagrams and great business context for those who want to think about how to manage machine learning in their organizations. And if you're an end user, a data engineer, or someone who wants to make the jump into a more technical role, I think it's an excellent resource, because it covers so many core principles that aren't, I should say, focused only on machine learning. I think it's a great software engineering book, with a business-facing perspective on how to make machine learning, and honestly any type of data pipeline, successful within a business. So in terms of developing your skills over the next few years and future-proofing your own toolbox, I really do recommend it. Thank you for writing it; I think it's one of those great resources people can rely on, and I really feel like every single chapter is productive and educational. So definitely check out Machine Learning Engineering with Python. It's available at all the popular bookstores, or you can go ahead and Google it. Andy McMahon, head of MLOps at NatWest Group, author of Machine Learning Engineering with Python: where can people follow along with your work?

Yeah, I'm most active on LinkedIn, so if you search Andy McMahon, you'll find me; I'm one of the few on there. I have a Twitter account, @electricweegie. A Weegie is someone from Glasgow, where I'm from, so that's Electric W-E-E-G-I-E, but that's less where I'm active just now, so LinkedIn is the best place. And if you're interested in some of the stuff we've done at NatWest, we've published a series of posts on the AWS machine learning blog. If you type in "AWS machine learning NatWest", you'll see a four-part blog series where we detail the MLOps ecosystem we've built, and you'll see a lot of the thinking that's in my book translated into some real-life applications, from an amazing team we worked with across both NatWest and AWS. But yeah, LinkedIn is the main place to find me.

Excellent. For those of you listening, we'll have links to those in the show notes, both to Andy McMahon's LinkedIn and to his book, which is on Amazon and other bookstores as well. It's super accessible; I was able to go through it on Kindle, and it was nice to read in my spare time. So Andy, thank you so much for joining today. Really educational, as always, and I love the perspectives you bring. And thank you to those of you who listened in and tuned in to today's episode of What's New in Data.

Thank you so much, John. It's been a pleasure. Okay. Bye. Bye.