Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
Leveraging AI
128 | The strategy and use cases used by a company to implement AI successfully with Peter Gostev
Join us for an exclusive, live discussion with Peter Gostev, Director of Data at Moonpig, as he takes you on a deep dive into the real-world steps behind AI implementation at a large-scale e-commerce company. Peter will share his experience navigating AI integration in a 500-person organization, from organizational structure to use case selection.
In this episode of Leveraging AI, Peter will reveal how Moonpig uses AI for automation, providing actionable insights every business leader can use. You’ll leave with the tools and knowledge to replicate Moonpig’s AI-driven success in your own company.
Peter is a leader in the AI space, driving innovative solutions at Moonpig, one of the UK’s largest online greeting card and gifting companies. His approach combines practical AI strategies with a clear focus on business results, making him a must-hear for any AI enthusiast or business leader looking to scale AI in their organization.
---
Join the next open AI course: https://multiplai.ai/ai-course/
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Isar Meitis: Hello and welcome to another live episode of the Leveraging AI podcast. We have a really special episode for you today. Special, number one, because I'm in Florida and there's a hurricane around us. It's still pretty far, but I'm praying to the hurricane gods to keep my power up and running so we can finish this live session. The other reason this episode is special is that it covers maybe the two questions I get asked the most. The first one is: how do I get started with AI? But since a lot of companies have already gotten started, the very close second is: we've gotten started, we have a few use cases, mostly specific individuals doing their own thing, but we don't know how to go from that to a company-wide implementation, a structured process for implementing AI as a company. I'm happy to tell you that's exactly going to be the topic and the focus of our episode today. Our guest today, Peter Gostev, is the head of AI at Moonpig. If you don't know Moonpig, it means you're not in the UK; if you're in the UK, you would know them. Moonpig is a really large company, actually a publicly traded company in the UK, and they sell personalized greeting cards and personalized gifts. They sell them online, but they deliver them in the real world, which means they're a very interesting mix of an online business combined with a real-world business that has inventory and shipping and supply chain and customer service and everything that comes with living in these two universes. That makes Peter the perfect person to have this conversation with. We're going to discuss both strategy, how to think about AI implementation from a company-wide perspective, which from his seat is what he's paid to do, so hopefully he knows what he's doing (no, he does, and you'll see that in a few minutes), and then we'll also dive into a few specific use cases, practical ways they're using AI right now at Moonpig. So I'm really excited to welcome Peter Gostev to the show. Peter, welcome to Leveraging AI.
Peter Gostev: Thanks a lot for inviting me. I'm looking forward to the conversation.
Isar Meitis: Yeah, same here. It's very rare that I get to talk to somebody who's running AI at that size of company. I talk to a lot of AI experts who are either consultants like me or work in smaller businesses. You really have a unique perspective because you're working at a sizable company, and, as I said in the beginning, it's a unique company because you're in both the digital and the real universes. Tell us a little bit, from a 30,000-foot perspective, how did you even get started with the process? What were the things that were done from a strategic perspective at Moonpig to set the ground and enable the stuff we're going to talk about later?
Peter Gostev: Yeah, sure. So I joined the company about nine months ago now, and we'd certainly done things before I joined as well. I think the recognition was that we needed a more concerted effort around it, where, apart from just a general "please go try things and experiment," we needed some center of gravity where those experiments could get going. The hard thing about generative AI specifically is that people don't really have intuition for what works and what doesn't work, and that's why I think a center of gravity is quite useful: we've got a bit more space to experiment, to try things out, and to build out patterns for the kinds of things that work. That was really the idea behind the team. We've got a central team, a fairly small team, where we really experiment, we try things out, and then we partner with other parts of the business to help them deliver. So we've got a mixed mode: in some areas we build new projects directly, to test ideas out and implement them, and in other cases, where there's already a strong engineering team with some problems they couldn't solve, or couldn't easily solve, on their backlog, we partner with them and help them deliver better. So there's a bit of a central unit where we experiment, but I think what's important, and I've seen this in my previous roles in larger organizations, is that because it's so hard to innovate in the existing teams, innovation quite often gets taken outside of those teams. That helps for some time, but then you can't actually deliver anything. So I'm a big fan of maybe experimenting a little bit, but you have to really get it to the teams who will actually build it and own it. That's really the key idea behind the model.
Isar Meitis: Yeah. I want to ask a follow-up question, but first I want to thank all the people who are joining us live, both on LinkedIn and on the Zoom call. I really appreciate all the people who are here. Feel free to introduce yourselves: where you're from, a little bit about your company, your LinkedIn link if you're not on the LinkedIn side (if you're there, obviously people can see who you are). And if you have any questions, please feel free to write them in the chat. I'm monitoring both what's happening on LinkedIn and on Zoom, and if you want to chime in and be a part of the conversation, please go ahead and do that. So my first follow-up question to what you said: in a quick summary, you basically created a center of excellence for AI where people have both the time, because it's not piled on top of a full-time day job doing other stuff, and the resources, meaning some kind of playground and access to licenses, and you're experimenting with different ideas. My first question is: who are the people? Which departments do they come from? How did you pick them? One of the first things I do when I come into businesses for consulting, and one of the things I teach in my courses, is to build an AI committee, which is basically what you have created. Who are the people in the committee, how did you pick them, and what would be your recommendations for how to start a process like this?
Peter Gostev: The team that I have is a very engineering-heavy team. When I look for people to join the team, the way I think about it is that they should be engineers who then learned to do AI. But to be honest, there's definitely more than one way of approaching this problem. I've definitely seen some very strong people who don't have an AI background; they come at it from a research perspective or a product perspective, they just learn how to use it, and then they apply it very well for what they need to do. Then probably the most popular path is to go from a data science or machine learning background and get into generative AI. It's an interesting path, and probably popular because it's the closest in terms of the technology, because it is machine learning, it is AI. But it's interesting that the mode of operating is quite different. What we want to do with generative AI a lot of the time is just use the API and do the easy thing, not overcomplicate it. And what data scientists, data engineers, or rather machine learning engineers, are used to doing is solving very difficult problems: prepare the data, train the model, and so on. Not that I'm against that skill set. I actually think it's really important that we don't take people who are very skilled at doing the hard thing and just make them use the API; we should give those people the tasks that they can do really well. I can also imagine a lot of people moving into the AI engineer role and not being as happy with it, because they actually just build software and use the API. But what's really important is that I think we have not reached any real level of maturity in using the APIs, certainly not at Moonpig yet. We've got a hundred things that we could do, and I imagine it's probably the same at most other organizations. What we should focus on, in a very reductionist sense, is just using the API and finding good ways of using the API.
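To make the "just use the API and keep it simple" point concrete, here is a minimal sketch using the OpenAI Python client. The model name, prompt, and example input are placeholders for illustration, not anything Moonpig actually runs.

```python
# A minimal sketch of the "just use the API" approach described above.
# Model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize(text: str) -> str:
    """Send one piece of text through the API with a simple instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the following text in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(summarize("Customer emailed to say their card arrived a day late but looked great."))
```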
Isar Meitis: Yeah, great answer. I'll add my two cents and then I have a follow-up question. When you said you have a hundred things you could possibly do: my two cents is that even if you don't have technical people, there's a lot you can do without any technical skills. In most of the companies I work with, we just use the chat platforms as-is, or the automation tools built on top of them, custom GPTs or Claude Projects or Gemini Gems or whichever platform they want to use, or that in combination with external automation tools like Zapier and Make. Combined with the different capabilities of the AI, that still allows you to do magical things. And the thing I always recommend to companies is to have people from each department participate in the committee, because then you get the inputs of what everybody needs, as well as different kinds of brainstorming, because it's different kinds of people: finance people think differently than marketers, who think differently than customer support people, so the brainstorming is a little better. And the thing I think is just as critical is that you get champions in each and every one of the departments to actually use the things the committee puts together. So beyond the techies, and if you have technical people in your company, yes, you want a couple of them in there because they can do a lot more than you can without those skills, it's not a necessity, and adding people on top of that is actually very helpful. Now to my follow-up question; that was a very long comment, I apologize. You said a hundred things, and yes, there's probably a very long list. So I have two follow-up questions. One: how did you come up with the list of things you can do? And two: what was the process to prioritize which things you actually want to do?
Peter Gostev: Actually, maybe just to build on your comment: in terms of the opportunities, we break them up into three buckets. One is a little bit like what you said, in terms of using the tools. There we've got everyone from HR to comms to CRM using it; we built custom GPTs in the areas where it makes sense, and we still experiment with new ideas and train people up and so on. So that's definitely one bucket. The second bucket is opportunities where we need to do some engineering, but they're fairly clear-cut: things we're pretty sure will work, we just need to put some engineering behind them, and then we make them happen. And the third bucket is the more exploratory, innovative bucket, where we're not even sure it's going to work. For us, quite often that's image models, because it's quite a visual business for us at Moonpig, so working with image models is quite interesting but quite tricky as well. And not only that: audio, video, all of these things come out and we want to experiment. So those are the three buckets in terms of how we come up with opportunities. I would say there is a more corporate answer, where we speak to stakeholders, collect ideas, evaluate ideas, and prioritize ideas in a formal way, and that happens a bit. But pragmatically, the most important thing for me is whether it is deliverable. In the first instance, is it a thing that people care about? That's a given. The second part is: is it deliverable? And that has two parts. One is, does it technically work? Sometimes it just wouldn't work, and sometimes the issue is, for example, that we'd need to bring seven data sources together, and you know what, it's just not going to happen, we've got better things to do, so we don't prioritize that one. The other category, which I think is really important, is: does the stakeholder care? Would they actually put time behind this, would they work with you, would they partner with you to resolve issues, provide extra documentation and feedback, and so on? I think that's probably the biggest thing, because we can build the best theoretical prioritization model to calculate the value, calculate the uplift and so on, and then if the stakeholder says, "you know what, I'm busy for the next 18 months, come back later," you can't get past that point, and that's the biggest issue. In practice, apart from all the things we all know we should do, I would say the most important one is: can you actually work with the stakeholder to deliver it?
Isar Meitis: No, I think that's a huge point. It goes back to the people you want involved in the process, right? In the committee, if you have the people who are the stakeholders, or at least representatives who can say, "yes, this is a great idea, but like you said, I'm busy between now and March, so talk to me in April," then it's not a good idea to go down that path, even if on paper the bottom-right corner of the Excel file shows it's going to be a good ROI and so on. So I agree with you a hundred percent. Something that is very obvious from everything you're saying, and that I agree with from my personal experience working with multiple companies, is that at the end of the day the hardest part is the people part. The technology implementation is in most cases doable, sometimes easier, sometimes a little more complex, but it's doable. It's the people part that is usually the hardest. And you touched a little bit earlier on training people on how to build GPTs and how to do that. How do you do that? What kind of training is delivered to employees, at what frequency, by whom, on what platforms, and do you evaluate them on it? What's the overall training process in a company-wide deployment of AI?
Peter Gostev: I'm not going to claim that I've cracked it, for sure, but we are trying. It's certainly not a solved problem, to the point that I think it really is about the people. We did a bit of a survey across our ChatGPT users, and the biggest piece of feedback was "I don't have enough time to try things out, I don't have enough time to dig into it and invest time in it." And I think that will be true, because, maybe if you're listening to this and thinking to yourself, "that's easy, I use it all the time," the big difference compared with probably 90 percent of people is that it's not easy. They don't really know all the details, all the nuances about how it works. Sometimes, when I speak to people who haven't used it before and don't really know much about it, you realize how much tacit knowledge you've built up just by using these tools every day: what kinds of things work, what kinds of things don't work, how much data you can put into it, what kind of data is usable, and so on. So that is basically a hard problem. I don't think there's a perfect answer, but I can tell you what we are doing at Moonpig. We've got nearly 200 users of ChatGPT, which for the business is a lot. I didn't expect that many people to be interested, and we did not make anyone sign up: this was pretty much one Slack message a number of months ago, and it spread across teams by word of mouth. I deliberately did not want to market it too much, because I didn't want people to sign up just because it's cool and then not use it. So that was the first stage. Then we had optional drop-in sessions for people to learn how to use it. Then we also had team sessions: I would pick out teams who had expressed interest, or where for some reason I knew they wanted a bit more help, and in those team sessions we would build a custom GPT together or go through a little bit of training on how that would work. And then we also sometimes have big sessions, with maybe a whole department, and we recently did one for our senior leadership where we also got them to build a custom GPT, which was quite cool. Probably the more senior you get, the harder it is to find time to just sit down and actually do the thing you've been reading about. But I would say the overarching lesson out of all of this is that you must do the thing. You have to try it. The thing to realize is that there is no possible way, without trying, that you could develop the intuition for what works and what doesn't work, that precise intuition. And if you don't do that, then how could you know which use cases are good or not? It's not that someone isn't talented enough or something like that; it's just not humanly possible to emerge with that intuition without doing it. Even now, I'm still struggling to find a way to pass that intuition to people who haven't tried it much, and it's quite difficult, so I'm hoping I'll get better at that. One other way we do it: what I find is that we can have all the training sessions, and then people still go back to their day job and don't necessarily find the time to do the thing for four hours.
So sometimes I'll still pick out a few people where we can do the use case together, and we'll just sit down for half a day and get it over the line, and just that bit more effort makes all the difference. I could probably do that 30 more times and still have more ideas, but ideally what I'd like to do is build more of a network where more and more people do that. We don't quite have that yet, so there's definitely more we can do in that space.
Isar Meitis: I want to summarize some of the critical things you said, because you touched on a lot of really important points. I've been teaching AI courses to businesses, private courses where companies hire me to do the things you just described, since April of last year, and I've taught hundreds of companies. So I want to touch on some of the points that are very critical. The first and most important point is that you have to block time on people's calendars to do this. And when I say block time, I don't mean give them a task on Slack; I mean actually have a meeting where they're going to show up, their department is going to show up, their boss is going to show up, and they're going to be there for an hour, two hours, half a day, two days: condensed time where you focus on learning how to use these AI tools. There is nothing that gets you more benefit than doing that, because, as you said, people have day jobs, and even though people know this is important, want to do it, and know the company is saying strategically we've got to move in that direction, there's always more urgent stuff that happens. So that's the number one, most important thing. Number two is what you said: people have to experiment, not with the concepts, but with the actual use cases they're going to use. The most effective thing, which you said and which I do with companies, is these mini hackathons: here's the use case we're going to work on, let's work on it for an hour, two hours, a day, it doesn't matter, but everybody sits in that room, working as a group or as individuals, and we're going to solve this problem together using AI tools. That does two things. First, it forces people to get their hands dirty. Like you're saying, the concepts are awesome, and you can listen to this podcast, which is fantastic, and follow people on YouTube and LinkedIn and TikTok, wherever you follow people, and that's great because it gives you a lot of ideas, but eventually you have your data, your company, your limitations, your partners, your processes, your licenses. Your unique solution is going to be different than most of the stuff you see from other people. Those other people give you great ideas, starting points, and shortcuts to jump through a few hoops, but you will have to figure it out on your own, and the only way to do that is to actually try it out. If you try it out for five minutes between meetings, it's not going to work, and then it becomes "this is all bullshit" and you're going to leave it, and that's going to be a big loss for you and your company. So you have to do these two things: free up time for somebody who knows to sit together with a bunch of people who don't, after some initial training so the basic concepts are in place (what the tools are, what prompting is, what data you should not put in there; all of that needs to be there), and then dive into the actual hands-on "let's build this thing that solves this problem," and invest the amount of time that's required. Because then people get an understanding of how it works, they understand what doesn't work, and they also have something that actually does work in the end. Maybe it doesn't work perfectly, but it works 70 percent, and they have enough data and knowledge and skill and excitement to go the other 30 percent, and then they're going to start using it. So, great points. I want to move into the practical, tactical side of things, because I think we covered a lot on the strategy side. What are some of the use cases Moonpig is currently using AI for in an impactful way, where it's actually generating great results and people are excited about it?
Peter Gostev: One of the best categories that I would recommend to any business, and it certainly applies to us, is basically looking at your own unstructured data and seeing what you can do with it. What I mean by that is any customer interactions, any customer transcripts, chat transcripts, for example. You could analyze them and ask: was this a good conversation or a bad conversation? What were the entities mentioned in this conversation? Did the agent do a good job of handling it? If the customer had a negative sentiment, what were they talking about, which specific bit did it relate to, was it, for example, a bug on the website? There are many things like that you can extract from the conversation. In most businesses, and certainly the way it worked for us otherwise, maybe the agent raises it with their manager, then the manager raises it with the developer team, and hopefully it gets closed off, but it probably wouldn't happen. Then one of my favorite things: normally you get an NPS score, or whatever feedback method you're using, and maybe 1 percent of your conversations get an NPS score; I don't know what it is for us, but it's low single digits. But now you can put the conversation through LLMs and basically get the model to estimate what the NPS would have been for that conversation. It's not something we've deployed yet, but through light experimentation it seems to work really well; the calibration seems pretty good. So imagine you suddenly go from maybe 1 percent coverage to 100 percent coverage. And by the way, you can also do it fairly instantly, or pragmatically next day: you can have a dashboard, you can look at different correlations about why you got that kind of score, you can dig into it a lot more, maybe people are talking about one specific problem. In terms of what we've deployed, we've done bits of it. The reason we haven't done it instantly is actually data pipelines: we do have some data that we're pulling through, but not all of it, and the data pipelines are complicated. Real things like that make it hard to actually deploy things in real life. So we're definitely going to push on that more, but in terms of high-level experimentation, it works really well. And in terms of...
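As a rough illustration of the conversation-scoring idea Peter describes, here is a minimal sketch in Python using the OpenAI client in JSON mode. The prompt, field names, and model are assumptions for illustration, not Moonpig's actual implementation.

```python
# Sketch: score a customer service transcript with an LLM, including an
# estimated NPS, so coverage can go from ~1% of conversations to all of them.
import json
from openai import OpenAI

client = OpenAI()

SCORING_PROMPT = """You are reviewing a customer service conversation.
Return a JSON object with:
  "estimated_nps": integer 0-10 (how the customer would likely rate us),
  "sentiment": "positive" | "neutral" | "negative",
  "topic": short label for what the conversation was about,
  "agent_handled_well": true or false,
  "possible_site_bug": true or false
"""

def score_conversation(transcript: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model
        response_format={"type": "json_object"},  # JSON mode
        messages=[
            {"role": "system", "content": SCORING_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)
```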
Isar Meitis: I want to pause just for one second, because you touched on a few very important points. One of the maybe most magical things these LLMs can do is qualitative data analysis at scale, which is something that was simply not possible before, because the only way to analyze large amounts of unstructured data, like transcriptions of calls, was to have people, lots and lots of people, go and listen to the calls, take notes, and try to calibrate it so they're all giving the same level of score, which is impossible. If you have lots of data coming in and you have 50 people analyzing it somewhere in India, where the math would somehow make sense from a cost perspective, how do you train those 50 people to give exactly the same score? It's very hard to do. The only way to do this was to hire someone like McKinsey, who has an army of people, for a one-time project; it was impossible to do ongoing. When I was running my travel company, a hundred-million-dollar travel company, we had a call center doing customer service, outbound sales, inbound sales, all of that. And what we had to do was have people whose full-time job was to listen to calls randomly, because they couldn't listen to all the calls, and when something interesting happened, they would pick it up and we'd make it into a training session. That's exactly your 1 percent, right? And now you can do this for one hundred percent of customer communication. It doesn't matter whether the communication is verbal over the phone, on a chat, through email, or through a third-party tool like G2 if you're a software company; it doesn't matter what the data source is. You can take all the communication you have with customers as well as prospects and learn from it at scale. I want to ask an interesting follow-up question, because you said you're already experimenting with this, and I'm experimenting with this, so I want to make it a little more tactical. Even if you haven't deployed it and you've just started playing with it, how are you practically planning to do this? Meaning, are you sending the data to LLMs and then putting it back into some kind of database? Are you using a third-party tool like Intercom or something like that to do some of the work? What's the practicality of what you said?
Peter Gostev: The likely pattern for us is that we'll get data out of the tool we're using for customer service and put it into Snowflake, just via an ETL process. And from there we've got choices. There's Snowflake itself: they've actually got some in-built LLMs now, so you can do the analysis directly there, and we've got access to that as an experimental route. Another option is to pull the data out and put it through LLMs via OpenAI or whatever AI tool you're using. Then there's the question of how exactly we surface that to the end users. One pattern we've got is to have a Slack bot and basically make it visible as passive updates via the Slack bot for now. So it's not going to be a chat-to-your-data kind of use case; you just get updates, and people subscribe to the channel, and so on. Something I want to explore as well is whether we can make it more available to the managers directly, for example in customer service centers, because it would be more powerful if they have access rather than people in some other department. So there are probably a few things we can do there, but we need to sort out a few of those steps in terms of pulling data in in a way that makes sense; things like that are hard. It's a funny thing: AI doesn't really make that part easy. You still need to do the data pipelines and the visualization and the deployment. So one bit of it is a lot easier, or at least went from not possible at all to possible, but other parts are still pretty hard.
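Here is a minimal sketch of the "passive updates in a Slack channel" pattern Peter mentions: take already-scored conversation rows (in practice pulled from a warehouse like Snowflake; here passed in as a plain list), summarize them with an LLM, and post the digest to a channel. The channel name, model, prompt, and row shape are assumptions, not Moonpig's setup.

```python
# Sketch: daily digest of scored support conversations posted to Slack.
import os
from openai import OpenAI
from slack_sdk import WebClient

llm = OpenAI()
slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def post_daily_digest(scored_rows: list[dict]) -> None:
    # Ask the model for a manager-friendly rollup of yesterday's scored rows.
    summary = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "Summarize these scored support conversations for a "
                        "customer service manager: top issues, overall sentiment, "
                        "and anything that looks like a website bug."},
            {"role": "user", "content": str(scored_rows)},
        ],
    ).choices[0].message.content
    # Hypothetical channel name; people subscribe and get passive updates.
    slack.chat_postMessage(channel="#cs-ai-digest", text=summary)
```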
Isar Meitis: Can I add two cents to that? And then I'll let you continue, because I paused you in the middle of a sentence, but this is a really important point for people. Two things I want to add quickly. One, you can test this at small scale on your own, without any developers, very quickly, just using tools like Make or Zapier or n8n that can grab any transcription that shows up on whatever platform you're using to transcribe. And by the way, if you're not recording your interactions with your clients, please start recording them; even if you're not going to use them now, at least six months from now you'll have historical information to work with. So start recording everything. You can literally grab the transcription of the recording from existing tools, whether that's Zoom itself, or Teams, or other call recorders, whatever you're using, run it through an automation tool with a predefined prompt through ChatGPT or Claude or whatever, and get whatever summary you want. And then you can run a different automation that, every time there are 50 of these, goes through those summaries and looks for similarities and insights and so on. You can set that up right now, today, with zero developers, with one day of effort tying everything together. It's not going to be perfect, but it can be an amazing starting point at almost no investment. So there are ways to solve this very quickly as a prototype, to see what kind of benefits you can gain from it. Now let's go back to your process.
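For readers who prefer code over a Make/Zapier/n8n flow, here is a rough Python equivalent of the two-step automation described above: summarize each transcript as it arrives, then every 50 summaries run a roll-up prompt looking for patterns. File paths, prompts, and the model are illustrative assumptions.

```python
# Sketch: per-call summaries plus a roll-up every 50 calls.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def run_prompt(instruction: str, text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

summaries = []
# e.g. transcripts exported from Zoom/Teams/a call recorder into a folder
for path in sorted(Path("transcripts").glob("*.txt")):
    summaries.append(run_prompt(
        "Summarize this call: customer intent, outcome, open issues.",
        path.read_text()))
    if len(summaries) % 50 == 0:  # roll-up step across the last 50 calls
        insights = run_prompt(
            "Find recurring themes and insights across these call summaries.",
            "\n\n".join(summaries[-50:]))
        print(insights)
```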
Peter Gostev: Yeah, and actually, just to add on prototyping: the way I start almost every prototype is in just ChatGPT. It doesn't work for every single thing, but it's pretty much "build a custom GPT and see how well it works." Quite often the normal cycle for a project is: you have a meeting with the stakeholders, maybe you do some prioritization, then you get developer time, maybe you build a prototype, and so on. Quite often my cycle is: I have a meeting with the stakeholder in the morning, and in the afternoon I send them a custom GPT and we just see, "what do you think?" It doesn't mean it instantly gets converted, but at least we're having a real conversation about something, rather than about hypotheticals: maybe this will work, maybe it won't. And then there are other things: the OpenAI Playground is great, so we can use that, and it actually has some extra features, for example structured outputs and JSON mode, which is just a little bit tighter than what you can do in custom GPTs. So prototyping is super easy, and it's always worth doing the prototype before any prioritization sessions and so on. Then in terms of the use cases: the field of just looking at unstructured data is incredibly rich, so we really spend time on that. Unstructured data includes things like customer conversations, and your internal documentation is a rich source too. If you've got the documents written, chat-with-your-documents applications are quite nice, and a custom GPT is easy for that; if you need something bigger and more robust, I think a Slack bot deployment with the documentation behind it works quite well for things like HR policies. Then something we also did is look at all of our product descriptions and images, put them through LLMs, vision models specifically, and improve the way we tag the products for e-commerce. It's something we probably spent, I don't know, 300 on in API costs, maybe 500. And even if the uplift were next to zero, the fact that we can just do that and put in the extra tags without having to go through a big project makes it so much nicer. The nice thing about it as well is that if, halfway through, you decide you're actually going to change your tagging strategy, you want more tags or fewer tags, you can just run it again with a different prompt and test it, and you can swap between different tag sets. That makes it a lot more flexible; you're not locked into some decision you made earlier. We do have different approaches with different brands, for example, so we can also test that, and we certainly see an uplift, depending on where we started from. Something we haven't done much yet, but I'm keen on, is looking at all of the data we've got, such as the customer journeys, looking at the logs of customer journeys and seeing whether we can put those through the LLMs as well and get them classified in a way that's probably very hard to do statistically, but maybe if you reason about it, like you and I looking at the logs, we can say, "oh, you know what, this customer really struggled to find the product," which is maybe statistically hard to say. So that's probably another area, but we haven't done that yet.
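Here is a small sketch of the product-tagging idea: send a product image plus its description through a vision-capable model and get tags back as JSON, so the tag taxonomy can be changed later just by re-running with a different prompt. The tag keys, model, and input format are assumptions, not Moonpig's actual pipeline.

```python
# Sketch: tag a product from its image and description using a vision model
# with JSON mode, so the taxonomy can be swapped by editing the prompt.
import json
from openai import OpenAI

client = OpenAI()

def tag_product(image_url: str, description: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder vision-capable model
        response_format={"type": "json_object"},  # JSON mode, as mentioned in the episode
        messages=[
            {"role": "system",
             "content": "Tag this greeting card product. Return JSON with keys: "
                        '"occasion", "recipient", "style", "humour" (true/false).'},
            {"role": "user", "content": [
                {"type": "text", "text": description},
                {"type": "image_url", "image_url": {"url": image_url}},
            ]},
        ],
    )
    return json.loads(response.choices[0].message.content)
```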
Isar Meitis: I want to pause you for one second, because on that particular topic I've actually done something as a volunteer project. There's a huge issue right now with antisemitism happening on college campuses in the US, and there are several different groups collecting data on it from multiple sources, but they did not know what to do with the data. They had data from actual students submitting it, they had data from the news, they had data the universities themselves collected; multiple sources, all in open formats, with no single form everybody was using and no shared definition of what to look for. And one of the big problems was exactly what you said: how do you categorize it? How do you define the categories you can then go and look for? We literally did just that: we put everything in one CSV file, which has, I don't know, thousands of rows, and we asked the large language models to tell us the best way to categorize it so we could have something actionable. It gave us several different options for how to categorize it; we picked one and finessed it a little bit, but most of it was as it came from the large language models. And then we used that to categorize the data. So even the first step: sometimes you have all that data and, like you're saying, you don't even know what you're looking for. There's so much information there; what can I learn from it that is beneficial for my goal? In my particular case, the goal was to learn about these antisemitic events; in your case it's how people engage with your products, what they're looking for, what they're not finding. It doesn't matter: the AI is really good at finding patterns, so it can find the patterns for you and recommend things aligned with your needs, if you define those needs to the LLM. So even the first step in the process, you can solve, and that, like you said, was almost impossible to do before.
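A minimal sketch of that "let the model propose the categories first" step: sample rows from a CSV of free-text reports and ask an LLM to suggest a categorization scheme before classifying anything. The file name, column name, and model are hypothetical.

```python
# Sketch: ask an LLM to propose a categorization scheme from a sample of rows.
import csv
from openai import OpenAI

client = OpenAI()

with open("reports.csv", newline="", encoding="utf-8") as f:
    rows = [row["description"] for row in csv.DictReader(f)][:200]  # sample, not the whole file

proposal = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system",
         "content": "Here are free-text incident reports. Propose 5-8 mutually exclusive "
                    "categories we could use to classify all of them, with a one-line "
                    "definition of each."},
        {"role": "user", "content": "\n---\n".join(rows)},
    ],
).choices[0].message.content
print(proposal)  # review, tweak, then use the chosen scheme to classify every row
```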
Peter Gostev: Did you have any problems with models not wanting to engage with that kind of language?
Isar Meitis: So, the way I'm doing this, and maybe that's why I didn't have a problem: I'm bringing all the data into a CSV file, I open it in Google Sheets, and I actually have code in Google Sheets that allows me to bring multiple large language models into the sheet itself. It's just running on the APIs in the background, using a tool called OpenRouter. OpenRouter, for those of you who don't know it, is like a bridge to almost any large language model API out there: you make one API call, but you can pick which model you're going to use. So the way I'm using it in Google Sheets, I have four or five different columns with the same prompt running on four or five different models, and then I can see which one works best for that particular use case. And if there's more than one that works well for that use case, I go for the cheaper one. Some of them cost 3 cents per million tokens, some of them cost 70 per million tokens; the spread is pretty big. It's still cheap, any one of them is still cheaper than having a person try to do the same work, but if you can save two orders of magnitude, why not? So that's how I use it. I never had issues with the API not working because it didn't want to deal with that data.
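As a rough sketch of the OpenRouter pattern described here (one endpoint, several models, same prompt, so you can compare quality and cost per row), the OpenAI-compatible client can simply point at OpenRouter's base URL. The model identifiers below are illustrative and change over time.

```python
# Sketch: run the same prompt through several models via OpenRouter and compare.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

MODELS = [
    "openai/gpt-4o-mini",
    "anthropic/claude-3.5-haiku",
    "meta-llama/llama-3.1-8b-instruct",
]  # illustrative identifiers; check current OpenRouter model names

def classify_with_all(prompt: str, text: str) -> dict[str, str]:
    """Return each model's answer so the outputs (and costs) can be compared."""
    results = {}
    for model in MODELS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "system", "content": prompt},
                      {"role": "user", "content": text}],
        )
        results[model] = resp.choices[0].message.content
    return results
```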
Peter Gostev: Okay, interesting. Actually, doing the categorization initially was one of the use cases that worked pretty well for me with the latest OpenAI model, the o1-preview. I had tried to get models to do the categorization initially, and maybe for the kind of use case you're describing, with the antisemitism data, the model can understand it well enough to work out the categories for you. I found that quite often the things we care about are not something a model can guess. For example, we were doing some debugging on the tool that basically helps customer service agents tweak their messages automatically, and we wanted to see what the original message they got was and what the one they sent was, and analyze the difference. The things I care about detecting there are maybe not what the model cares about: I don't really care about formatting, that's more personal preference, but I would care about whether it removed Moonpig branding, for example, because that was a recurring issue with everyone who tested it. I would say the o1-preview actually did a really good job on that. I was using it quite a lot, saying, "here's all the data, go nuts, categorize it and make sure it's MECE," and it did do a good job.
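A small sketch of that debugging analysis: compare the message the tool drafted with the one the agent actually sent, and flag the kinds of edits that matter (such as branding being removed) while ignoring formatting. The prompt, fields, and model are assumptions; the transcript says Peter used o1-preview for this kind of work, while the placeholder below is a generic model call.

```python
# Sketch: classify the difference between a drafted reply and the reply sent.
import json
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = """Compare the drafted reply and the reply the agent actually sent.
Return JSON with:
  "branding_removed": true/false (did the edit strip company branding or sign-off?),
  "meaning_changed": true/false,
  "edit_category": one short label, e.g. "tone", "added detail", "policy correction"
Ignore pure formatting changes."""

def review_edit(drafted: str, sent: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": f"DRAFTED:\n{drafted}\n\nSENT:\n{sent}"},
        ],
    )
    return json.loads(response.choices[0].message.content)
```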
Isar Meitis: Awesome. So let's do a quick recap of use cases. We talked about unstructured data analysis in general, wherever the data comes from; we talked about customer service; we talked about categorization of specific types of data, wherever it comes from, so you can do more with it. What other use cases are you using it for right now?
Peter Gostev: Yeah. So within customer service there are quite a few smaller things we're looking at. A lot of the time, when people look at customer service, they say, "oh, we can build a chatbot that just automates everything," and that's fine; we do that as well. A lot of the questions we get are "where's my order?" kinds of questions, and to be honest, it's not interesting for a human to respond to those; they don't add any value, it's not a fun job, so that part is more or less automated. Then, when we looked at the processes of what agents are actually doing, we found that a lot of the time they would select a template to respond to a specific issue with, but then spend a lot of time changing it: inserting the customer's name, inserting the issue it's about, things like that. It's not fun to do, and it's not really adding any value. So we came up with the idea of a little application on the side where we can take in the context of the conversation so far and embed extra rules behind the scenes, where we're basically saying: if it's a chat channel, format the message the chat way; if it's an email channel, format it the email way. That kind of stuff took a lot of time as well, with no value added; they were just reformatting a message. And the important decision we made there is not to be too ambitious in terms of how much thinking we're going to give to the model. Maybe it's a bit harder for new starters, but the agents know how to solve issues; they can think about whether they should give a refund or not, and if so, how much. There's some discretion: sometimes they go into the system and say, "we probably wouldn't give a refund normally, but this is a really good customer for us, so we should." So we basically decided we're not going to give the model any power like that, and we're not going to rely on it for judgment; judgment stays with the humans. Had we gone the other way, the scope and complexity of the project would probably have been 10x: we'd have had to do so much more work to get it to perform. Even understanding when you're supposed to give a refund is actually quite a hard question, because you need to know what policies we've got for giving refunds, what information we even need before we can give one, and then there's judgment, like, "send us a picture of your gift being destroyed." There are so many things, and this is just for one set of use cases. To be honest, it's so much easier for us to say: you know what, for now, as a first version, let the agents, who are humans who are good at their jobs, do the hard part, and we'll just automate the thing they don't like doing. So far it's only been live for a little bit, but the feedback is really good; it's doing a good job. There's still some careful work we need to do to make sure it works well.
But I really like it, because I think we made something we could actually deliver, that works well, that we don't need to maintain constantly in terms of a knowledge base and so on, and we're still at a point where I think, I hope, we're making the job more fun and more engaging: the agents are not doing the tedious parts of their work, they're actually just helping customers. I personally like that; it's a use case I feel good about.
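To make the design choice concrete (the human decides, the model only words and formats), here is a minimal sketch of such an agent-assist tool. The channel rules, decision text, prompt, and model are assumptions rather than Moonpig's implementation; the key idea is that refund and policy judgment never goes to the model.

```python
# Sketch: draft a reply from the agent's decision, with channel-specific rules
# baked in behind the scenes; the model never decides refunds or policy.
from openai import OpenAI

client = OpenAI()

CHANNEL_RULES = {
    "chat":  "Short, friendly, no subject line, at most three sentences.",
    "email": "Full email with greeting and sign-off, slightly more formal.",
}

def draft_reply(conversation: str, agent_decision: str, channel: str) -> str:
    system = (
        "You help a customer service agent write a reply. "
        "Do NOT decide refunds, compensation, or policy; the agent has already decided: "
        f"{agent_decision}. Only turn that decision into a well-worded message. "
        f"Formatting rules for this channel: {CHANNEL_RULES[channel]}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": conversation},
        ],
    )
    return response.choices[0].message.content

# Usage: the agent decided to resend the order; the model only writes it up.
# print(draft_reply(chat_history, "Resend the card free of charge, no refund.", "chat"))
```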
Isar Meitis: Yeah. Cassie Kozyrkov, who used to be the Chief Decision Scientist at Google, a very smart woman (she's not there anymore, she left and is now doing her own thing), has a great YouTube channel, by the way; anybody listening should look up Cassie Kozyrkov on YouTube. She talks a lot about how to bring really complex, advanced AI concepts down to monkeys like me, so it's a great channel to follow. She has a very famous lecture about thinking versus "thunking," and I actually got to see her talk about this live about a year ago. She says some of our job is thinking, which is what you're saying: the human judgment about whether something makes sense or doesn't. And then there's the thunking part, a word she made up, which is everything other than thinking: how do I now write the answer, how do I format it, which platform do I need to put this in? All of that, nobody likes to do; it's just overhead we have to do, or did so far. And now that can be completely eliminated, or mostly eliminated, or assisted by AI. So the direction you're describing is awesome, because it's a significantly lower investment to develop the solution, you're keeping the employees, versus the Klarna case where they said, "oh, we can let go of 700 customer service employees because the bot now does it," and you're letting people be happier in their jobs, because they're really focusing on helping people and doing the right thing, versus having to deal with crap they don't want to deal with. So I agree with you a hundred percent. It's an awesome solution.
Peter Gostev: And with the Klarna-like approach, we did consider doing a project along those lines, and the thing you realize is that for us to actually implement something like that, the key missing element is the context the model should have. It's basically the data, but when I say data, I don't mean it in the sense that we didn't collect enough data or something like that. What I mean is the LLM-specific documentation that you need to write, in a way that's not just good for humans but written for LLMs. To give you an example, we have a policy, just a little table, which shows when you're supposed to do what: give a refund if it's delayed by more than a certain number of days, that kind of thing. If you were to take that document and give it to the model, it would make no sense to the model. You have to provide so much context about what is actually happening, you need to describe all of your processes, and by the way, you have to do it perfectly, because the model cannot go and speak to its manager live, which is what a human agent can do. So there's no real room for tolerance, and the threshold for actually getting this right is so much work: so much documentation you have to write, perfect maintenance. I don't know what Klarna is doing, but I imagine they probably have a big operation to maintain the documentation and make sure it's up to date, correct, and working well. I think that is a promising direction as well; you can go down that route, but it's not free. You can't just plug in the model and it works. For an operation of that size it might make sense; for us it would be too big of a project, and to be honest, there are so many other ideas I want to explore. I don't really want to spend the next eight months writing documentation for chatbots.
Isar Meitis: You bring up a great point that I actually want to follow up on, which is that it's an ROI game. Can you do this? Probably yes. It's going to cost you X number of dollars and X number of months to develop, and is it worth doing? Even if it is worth doing, let's say it's going to yield a positive ROI within one year, what are you not doing during that time? Because you're not doing other projects and other initiatives, since you're putting all your eggs in that basket. So my question to you is: how do you currently pick those projects? You gave us a general idea, and you said you didn't want to do this one, but you have other ones. What are the things you're looking into right now, as an example? If it's sensitive, you don't have to say, but at least tell us the concepts, because I think it would be very interesting for people to know what somebody in your position, at that size of company, is looking into as far as implementing in the next six to twelve months.
Peter Gostev: Yeah. I'll start with a bit of framing: I want to have a portfolio of things. I want big things that I'm delivering to make sure there is impact, and then I want little things that we can deliver day to day. Part of the little things are things like running training sessions and picking up with specific teams and just helping them. There's a really nice category of use cases where the teams just need a little bit of inspiration, a little bit of a push, and then they can go and explore those use cases themselves. I don't think I can name specific things we're doing, just because they link to other things, but generally it's where a team has some specific problem where they basically can't launch the product, or it makes the user experience so much worse. What I try to do is be in the places where those kinds of problems are discussed, and then I can say, "you know what, if you used LLMs, you could probably do that a lot easier." Quite often people just wouldn't know that; they don't have the right intuition for picking those problems up, and my job is to have that intuition, pick up the problems, and have the intuition to solve them. And then there's a little bit of a barrier: even if you have the intuition, you need to literally see where to click, what the API structure looks like, "oh, actually, you have to define the schema here." There's still a bit of friction there that developers wouldn't necessarily know out of the box, but once they have the ergonomics of using it, they just go and build it. I've had probably three examples like that already where my involvement was, I would say, a couple of days, and then the teams just went and built something. These kinds of projects are my favorite, because I can do a nearly unlimited number of them, and the teams just do their normal job, only a lot better. And then, I mentioned experimentation: we certainly try to think about any new things we could potentially do. Many new models come out, and we explore, build little prototypes, and experiment, and hopefully you'll see some things come out in the next few months from Moonpig. It's basically new customer features, stuff we just couldn't do six months ago that now we can. So those are the kinds of things we're experimenting with as well.
Isar Meitis: That's awesome. I'll piggyback on the geeky side of things, going back to our very first point, which is a great way to bring this full circle, and then we can say thank you. In the committee, or maybe in subcommittees for specific things, you want to have geeks as part of the team, because the geeks of the company, who love playing with these tools and find it exciting, or who, like me, will spend between 10 PM and 2 AM playing with a new model or a new tool to figure it out, are exactly the people who can do the thing Peter just described. If you let them be the champions of these little projects, you will have significantly more successful projects across multiple departments, just because you let people do it, and these people can become your champions within the departments to do the stuff Peter cannot do all on his own, because he's one person. So if you have a few more people in the company who actually enjoy doing this, who have that intuition and understand what can be done, and you give them the freedom and the quote-unquote title of "OK, you are now the AI champion of the finance department, you're allowed to use these tools, you're not allowed to touch this kind of data, now go do whatever you want with it," you will get amazing stuff out of this very quickly, in exactly the kinds of scenarios Peter described. Peter, this was amazing, a fascinating conversation. I personally learned a lot, and I'm sure the people listening learned a lot as well. If people want to follow you, work with you, or learn more about your journey, what are the best ways to do that?
Peter Gostev: Probably the best way is LinkedIn; I think that's the easiest. You can connect with me and message me. I'll try to respond; sometimes I don't check my messages for a little while, but I will get to them at some point. But yeah, this was a great conversation. Thank you so much for inviting me, I really enjoyed it.
Isar Meitis: No, thank you. And I want to thank again all the people who joined us live, here on Zoom and on LinkedIn Live. I know this was very valuable, because it was a phenomenal conversation that is really critical for anybody trying to do this with AI. So again, thank you everybody, thank you, Peter, and have an amazing rest of your day, everyone.