DELIVERED

Navigating AI security and governance in 2024 with Jonathan Dambrot

July 09, 2024 Infinum Season 1 Episode 9

In this episode of Delivered, you can learn how to adopt secure and compliant AI practices and build a thriving business.

We sat down with Jonathan Dambrot, the CEO and co-founder of Cranium AI, a leading AI security and trust platform that spun out of KPMG. Tap into Jonathan’s 20 years of experience in IT, cybersecurity, and third-party risk management to successfully navigate the complexities of the AI landscape.

Key learnings:

  • Learn how to address the biggest AI security threats companies are facing today
  • Discover the steps to successful, secure, and compliant AI implementation
  • Find out the key business decisions that turned Cranium into an AI security leader
  • Understand the role of user feedback in shaping a product’s success and how to get the most out of your product team
  • Get inspired by advice on successful leadership and building a culture of innovation



Thanks for tuning in! If you enjoyed this episode, hit that follow button so you don’t miss our next delivery.

Delivered newsletter
Delivered episodes are recorded live. Subscribe to our newsletter to stay informed about upcoming live events. When you join a live event, you can ask our guests questions and get instant answers, access exclusive content, and participate in giveaways.

About Infinum
Delivered is brought to you by a leading digital product agency, Infinum. We've been consulting, workshopping, and delivering advanced digital solutions since 2005.

Let's stay connected!
LinkedIn
YouTube

Transcript

Well, hello Jonathan, welcome to Delivered.

Welcome. Yeah, thanks for having me. It's great to be here.

It's great to have you. And thank you for joining us. I know you're a super busy man, and the topic we're about to talk about is something that comes up again and again. I'm really excited to get into it, especially hearing your credentials. I know you have a real legacy of knowledge in this space. So I'd love to just ease into this and maybe find out about you first. Who are you? How did you come onto this career path to Cranium AI?

Yeah, so as you mentioned, I'm the CEO and co-founder of Cranium. This has been a wild journey for me, to be honest. So if I look at my background prior to spinning this out, I was a partner at KPMG and focused on a couple of different areas, one of them being third-party security. So, I led the practice for third-party security at KPMG globally, and I used a lot of AI in my practice. And it was really interesting because we were trying to solve problems. I had run another company called Prevalent prior to going to the firm. I sold that to a private equity firm and I was trying to figure out what I wanted to do and KPMG reached out and said, do you want to run third-party? I said, yeah, but I want to do it in a different way.

If it's just about doing the same things that we've done historically, I'm not that interested. And KPMG offered me a unique opportunity to come in as a senior partner and start to work on how we solve third-party risk and security. I was able to work with some tremendous talent at the firm around AI and machine learning, and we had built a set of models, 16 different models, in support of this, and we recognized that basically there really wasn't any security around this. So I started seeing this, and I pitched the firm on creating a platform to support the security of AI. And the firm was just super supportive.

I've always been an entrepreneur; this is now my fourth company. I started in high school. I started a magic company, so I was a magician growing up. Most people don't know that; that's a little-known fact. And that's how I paid for college. I basically manufactured three magic tricks I sent around the world, and I performed every weekend, and that was a lot of fun. Then when I got to college, I decided I wanted to be a chocolatier. My grandfather was in the chocolate business. I said, oh, that sounds awesome. And my mother's like, no, you're not doing that. You're staying in college. And so we had this whole discussion about it. Finally I started Prevalent when I was in graduate school getting my MBA, and built a business focused on how do we change the way things work? And that's been the mantra since then: how do we change the way third-party works? When I got to KPMG, I wanted to continue that mission and had an opportunity to really start looking at machine learning, and recognizing again that there was an opportunity to think about security. I pitched the firm, and they gave me my first $3 million to go start building this business.
And we had an agreement that we'd build something and spin it out, and we became the first spin-out ever from the KPMG Studio, and it was a wild ride. We worked with Infinum on that project as well. It's just been wild to watch it go. And now we have some of the best investors in the world, we've got some of the biggest clients in the world. It's been an absolutely wild ride.

Yeah, I love that story as well. We've had the pleasure of working with you and with KPMG, and I know how much goes into these things just to validate and prove something is worth building, investing in, and generally turning into a product or business. So kudos to you and your team for making that happen, which I think is a commendable feat. And I also love your backstory about the magician and the chocolatier; a true entrepreneur from the early days.

You never met a software guy who had a chocolate business and a magic business before. What do you mean?

I feel like I've missed out on something in my childhood. The best I wanted to be was the Karate Kid, from the movie The Karate Kid. Yeah, that was the thing. My parents were like, you can't be that, Chris. And I was like, but I want to be Danny LaRusso. That's all I want to be.

I think we all wanted to be Danny LaRusso. I have to tell you, that was growing up. We all wanted that and we all have to settle at some point.

Absolutely. Thank you for that. I feel validated and heard all at once now, so thank you for that. So going back to the business, so Cranium AI sounds great and again, well done on that whole raise and growth of that. Let's talk about the idea of the company, how it was born. Where did that come from as a business?

Yeah, so the idea of Cranium was born because inside of my practice I was starting to take a lot of sensitive data from some of the largest companies in the world. We were putting it through those models I was describing. So we were using these AI and ML systems to help support really changing the way third-party risk was happening. And then we were putting 'em into endpoints and into production, and I said, when are we going to secure this stuff? When does the security conversation happen?

And they said, what do you mean? And I said, what do you mean, what do I mean? It was just this conversation I couldn't understand. So finally I started recognizing there's so much innovation happening so quickly, but people aren't necessarily ready to have a conversation around the security of these pipelines. And we were starting to see this go at a pace nobody could believe, and we were solving really big problems. In one case, we had an assessment project where the customer had tens of thousands of vendors, and they were only able to do about 500 manual assessments a year. With these models we had built out, we were able to actually do 50,000 assessments a day. It was just this wild exponential opportunity. And so we said, this is the future. This is where we're going to spend our time, but we now need to focus on security.

So I wrote my business plan and my pitch deck. This was done over a weekend, to be honest, it was probably the worst pitch deck anyone has ever seen in the history of the world. And I pitched it to the firm, and as I said, people have a different view of the Big Four or KPMG. I will tell you my experience was fantastic. I always had a lot of support for these initiatives at the firm. And so the firm said, let's go. Let's get going. We stood this up inside of the KPMG Studio, which at the time I wasn't even aware had been started. We were the first project to go through that, and we really started thinking about who would our design partners be? And I started talking to the clients that I had been serving, again, the largest financial services, largest life sciences companies, largest CPG companies in the world.

And by the way, these still remain our clients today. So we started to identify design partners, and as we were building threat models and thinking about how the software would interact and who our users would be, we learned so much through that process. We learned that people were not ready for this. We were at the very beginning. People in a lot of ways didn't even understand what artificial intelligence or machine learning meant inside their environments. In some cases, they didn't know what was happening with different businesses that were using AI and ML in their practices and in their divisions, and how all of that was happening. So it was eye-opening as we were going through this process, asking questions: who should own this? In a perfect world, what would be something that people would want to know about? And then trying to outline what the threats and risks would look like based on their use of AI and their intended vision for what that would look like. Then obviously GenAI hit, and we started thinking about where the machine learning environment was going, and all of a sudden we had to pivot with these companies who wanted to start using generative AI and those models as well. How that intersection happened was mind-blowing. So as we started to really work with our design partners on how to think about those things, it was really, really intriguing.

And you really made me think of something there. A lot of the client base we work with at Infinum, they don't have to be an AI-centric business now, but obviously they want to be at some point. And what you're saying about consulting design partners meets engineering, that's something we specialize in quite a bit here, to help guide the process of, dare I say, transformation, but more around how you embrace AI in a useful sense, but in a secure way as well. And that's what really made me lean into what Cranium does, because efficiency is always an outcome people want to achieve from AI, but with those outcomes come risks, and security can sometimes be overlooked chasing those low-hanging fruits. AI promises a business a competitive edge or something different from their competitors. I wonder what's changed over the years, though, because like you're saying, the maturity of what AI and ML are is now more common parlance. But from your experience, how have you seen the security part of it really, say, exponentially go up with the growth of AI?

I think there's more awareness. What generative AI did for a lot of people is it prioritized AI in a real-world sense. Right now, I could use ChatGPT; I could actually see the power of this on my own laptop. I could go to a website and do a lot of things that fundamentally I just never could do before as an individual. It was like an iPhone moment. Right now, I think you're actually seeing even more machine learning and generative AI coming into these environments at a pace which I find astounding. I don't think we've seen this level of transformation from a technology perspective, even as we've all lived through the birth of the internet. Well, maybe not all of us, but many of us lived through the birth of the internet

And seen a lot of these things. We're seeing a transformation happening in real time at a pace we've just never seen before. So what does that mean? If I'm integrating either GenAI or machine learning into almost every application, and every business is now an AI business, then I need to think about these threats in a different way. And maybe these discussions have come up, but now they're really becoming meaningful because, again, we're bringing in the most sensitive data. Every application that we run is now AI-enabled or trying to be. We've got GenAI use cases, and some enterprises are reaching hundreds or thousands of use cases. And a lot of this just happened, right? It happened within the last year. We haven't even set up, in some cases, the appropriate governance. So this is the collision, right? It's the collision between large-scale transformation, emerging technology, and the need to get these types of returns, but understanding that if we don't do this right, we're going to potentially face real problems in the future. The impact of this, the impact on people, the impact on our information, and how we think about how we move that forward.

Yeah, for sure. I always think about when we talk to clients in a discovery phase, from our world, before we even get close to any kind of production side. It's asking, first of all, what are the opportunities that AI presents? And of course everyone has a ton of ideas around that, but then it's like, okay, well, what are we actually doing with it? Is it process automation? Is it product and service documentation? Is it, in your case, a whole new business to be spun up? I think you're right. There needs to be a threshold in the middle: the technology being checked, and possibly the ethical and security reasons right after that as well, because they're as important. And then, what does that mean for the people involved? Before you even start to MVP and POC and all these other buzzwords that we use in product to build a new thing, it's like, where does that sit, both in the technology and in the human part as well? Because they're almost becoming one, really, where the whole ecosystem becomes almost like a cyborg that needs to be considered on both sides, right?

Humanistic and the technology part.

It's so interesting when you say that, because when we first started this process, and Infinum did a lot of the design thinking and workshopping and design work that helped us build the product out, even from the very beginning, it's changed. It's fundamentally changed even over this short amount of time, because of the prioritization of AI. We were just trying to think about the features that would be necessary. What are the threats? How do we think about it? They're happening at such a pace. So I think what's happening now is that as organizations are thinking about the future of emerging technology and how AI is going to fundamentally impact them, they're having to create architectures that are going to support that level of innovation. So how do I change it? How do I create a design, at least in the case that we worked on, that can fundamentally shift, that is easy to shift into those new things and add new features as we start to see these types of technologies take foot?

And we've been very fortunate because we brought in really, really strong talent on the AI security side. We brought on strong talent on the AI side, and we've obviously had great design support. So the combination of these things has allowed us to really build something that we can kind of move at pace and as necessary because a lot of the initial things that we started talking about weren't really the things that ultimately clients really needed. And so we've had to be flexible and be able to pivot into those areas. And I think that's really a requirement today. The cycle times have just gotten so tight.

Yeah, so true. I always think about what it looks like as a business owner these days; you're already under pressure to build and operate in such a competitive market. But if I were not Chris at Infinum but the owner of a large-scale organization, I'd want to move at speed and do these AI things, but obviously there are the security threats that come as byproducts. How would you say that Chris the business owner should address them in a sane manner, before leaping into the promise of AI without addressing the perils?

Yeah, so let's start by talking a little bit about the fact that I'm a huge believer that AI is going to fundamentally change things for the better. So in that way, I'm a bit of a utopian. I look forward, and I think about the fact that we are going to be in a fundamentally better state 50 years from now than we are today, just as we're better off now than we were 50 years ago. If we look at our lives and the impact of technology, I would say the vast majority of people are way better off. And I think that's something that will continue to accelerate. And we have huge problems. We have major problems on this planet, and I think we need things like AI to help. So I'm just going to put that there. Now we're going to talk a lot about some scary stuff.

When you talk about AI threats, you get into these interesting conversations. You mentioned wanting to be the Karate Kid, right? Well, we grew up in a time where you had major trust issues with AI. We grew up with AI movies like The Terminator. People think about that; it was scary. People think about AI-enabled robots, and in every single sense, in every movie ever, they kill us. That's the ultimate end of the story. And I think when we actually talk about the AI that we're using today in a corporate sense, we have an opportunity here to really fundamentally manage these risks appropriately, today. That's why we went on this mission, because we felt like we had an obligation, but also, if we don't do something, these major issues that we think about could potentially happen. We've seen the impact of data loss generally from a cyber perspective, but as we're bringing really sensitive data into AI pipelines, and we're building using machine learning models, in some cases open source models or generative models or foundation models, and we're bringing these together to solve these incredible challenges, now we need to think about threats to that data.

So we need to think about things like poisoning. We need to think about back doors that are getting put into this in a real sense. They're different, in some cases, from traditional software back doors, and those are still issues too. So in addition to all the standard cyber stuff that we've been talking about for a long time, we now have unique threats. And some of our AI leadership came from MITRE; some of you may use the MITRE ATT&CK framework, or in this case MITRE ATLAS. Josh Harguess on our team came over with a couple of people from MITRE and helped develop and understand some of those adversarial risks, and helped define what it meant to red team an AI system and how to think about that. So in a real-world sense, there are very unique challenges to these AI systems. In some cases they're open, you can touch them, you know how they're trained, and you get some visibility. But in a lot of cases, especially for those organizations that aren't necessarily building their own AI systems but are dependent on AI services, you now have more of a gray box or black box, and you need different techniques to really open those up and understand them.

Yeah, absolutely. And you mentioned a phrase there, I think, for the people at home: the red team. Could you give a little bit of insight as to why we need one of those and what it means for a business?

Yeah. So let's say you look at historically what red teaming really has been is more like penetration testing. So if you think about a system, we're going to do everything we can to break into that system and then we're going to publish a report on what that looks like. I think in AI, an AI system, if you look at how it's made up, the data, the toolkits, the model, the endpoint, et cetera, there's a set of unique things around the model. There's the interaction in the pipeline and certainly different architecture. So in some cases for generative systems, if you're building like a retrieval augmented generation system, how you have to think about the threats generated to your business based on what's in that architecture will determine how you actually have to go think about attacking it, what vulnerabilities there are. In some cases, they're not completely adversarial the way you think about them in terms of somebody attacking them as a hacker, but also just fundamentally, you mentioned some other areas, there are other areas you need to think about. So how do you actually take an AI system, understand it, understand what those unique vulnerabilities are, whatever they might be, how do you actually go and see if the threats can be penetrated, and then how do you report on that? So you need a special skillset. You need to have people who understand cyber, you need to have team members who understand the AI, and then you need tools that can bridge the gap there.

Yeah, it's so true. I always think about this statement we come out with when we're leading people through a process to understand and discover what they need: we're just there to be the guide; they're the heroes. But it also reminds me of a conversation we had on a previous episode of Delivered, where we talked to an AI specialist about it being circular. This whole thing is circular. Once you start this process and are guided through it, that whole process isn't just about the design and the engineering anymore. It's about what you're saying: the security, making sure it's doing the right things at the right time for people. And that black-box solution has some level of, I guess, security or control or understanding to help this circular motion keep going in a nice way, which ultimately helps get over trust issues. I think that's quite prevalent in AI. Yes, it has all this cool stuff that we keep seeing on the B2C side with ChatGPT and also at the corporate level. But it's also fair to say, and we see this a lot when we're talking to clients about new potential AI solutions, there's always a level of trust there. I wonder, going back to what you were saying earlier, is that because The Terminator is burned into our brains, or how do you see it?

I find it very strange, because although generally there's a lot of fear, I would say people fundamentally still trust AI systems more than we admit. From the very beginning, when machine learning started to take foot, you heard these stories of people using GPS systems and then driving into lakes because the GPS told 'em to go there. We see similar things here. There is a gentleman who helps run AI red teaming for Microsoft; his name is Ram, and he wrote a book about this. One of the chapters starts off by talking about a study a scientist did. He took a robot but made it perform in a way that was unintended. He led people into a building, and the robot led them in, but it got lost. So these people followed this robot to get to their room, but it was not necessarily taking them there appropriately, and they had to actually bring them back to another room.

Well, then they simulated a fire. And what was interesting about this is that there's smoke everywhere, there's an alarm going off, and this robot, the one that got them lost before, everybody still followed this dumb robot to their death, even though there were exit signs saying go this way. Everybody still followed the robot. So we have this innate trust in the technology, in AI. I would say most people have fundamentally trusted the output of generative systems, and we've seen impacts from that as well. They've over-trusted the architectures that they've built into those systems and integrated them directly into applications, as an example, and seen some of those issues. You saw something in the US recently where a Chevy dealer integrated an AI-driven chatbot, and it basically sold a car for a dollar. You saw Air Canada, where they similarly had a refund policy that was completely hallucinated, and a judge told them that they had to live by it.

So there are these types of things where we've been overly trusting, both on the user side as well as on the enterprise and consumption side, and we need to bridge that gap. We need a mechanism so that we can start to feel more comfortable. And you've seen this in large enterprises, where AI governance committees have now stood up, a cross-section of AI leadership, AI security, compliance, legal, privacy, and others. And I think that's a really healthy starting point. But now we need to make sure that we have tools and technologies to help support that effort as people look at AI governance more meaningfully.

For sure. I think it definitely comes down to having that full holistic approach across the business when this type of technology is being integrated. I always think of AI, and Microsoft won't mind me saying this, as a copilot for humans to be better, to have more reach, or to have access to things quicker, and thereby maybe be able to keep up with the exponential change. That's something we've always looked at at Infinum: how can we leverage AI for our clients and for ourselves? It's constant, it's ongoing. Not a replacement tool, but a copilot to give the human beings we're working with a bit more superpower to deliver things better, faster, stronger.

It's interesting you say that. We want to do that, but when you think about doing it in an organization, getting people to make that change has been very difficult, and you're seeing organizations having to make some radical changes. So at Cranium we build it into our OKRs: how do we use our AI platform not only in the product, but also in the business, so that we can get a better advantage? And I think this is going to shape who is successful in their companies. People who, fortunately or unfortunately, really take this to heart, build AI into their workflows, and use it, and businesses that enable that process, are going to be fundamentally more successful than those that don't.

Yeah, I love that you said that, actually, because some of the processes we run, say discovery at Infinum, things like user research, MVPs or prototyping, or even wireframing, or using synthetic data and synthetic testing, it's already here. So leveraging that is about helping us help our clients get to where they want to be in the market faster. So yeah, I think you're right. Trust is definitely becoming a bigger thing, but leaning into it is a healthier way, I suppose. And I just wanted to step back from the tech and the trust for a minute, because I was really intrigued: we started this conversation saying you were spun out of KPMG, which I think is a great accolade in itself. And I know from experience working with startups and having a venture company back in the day, just leveraging that experience and that kind of model and supporting innovation through KPMG, how did you go through that as a corporate innovation piece? It feels much more complex than saying, hey, I'm going to start a company and find an investor. I imagine there's a lot more red tape and a lot more things to consider when you're trying to build in-house inside an innovation studio.

I think anytime you're trying to do something for the first time, it's always hard. We were the first to go through the KPMG Studio, but we learned a ton through that process. What's really interesting to me is that it shows there was a huge desire on KPMG's part to really lean into innovation, to invest in it in a way where it could not just be in the background, not just part of a discussion, but actually show how we could build inside of the firm and do something that would enable us to build a market leader. And that was the premise. And I think we've successfully done that. We have spun out a business which has really shown leadership in the market, now has some of the largest clients in the world, working on one of the most meaningful problems in the world, and a challenge that, if we don't solve it, really interesting things happen downstream that we have to be careful of.

So that is why I get up, why I am so excited about this business. And when I look back at the time with KPMG, I think there was an awareness; I don't think anybody really knew exactly how this could get done. There was a lot of negotiation around, okay, how do we deal with issues related to spinning a business out of one of the biggest entities in the world? Highly regulated. We needed to make sure that we put the proper guardrails in place and understood the challenges that we could face. And now that we've accomplished this, I think it's been highly successful. When we were first doing it, I wasn't sure; it was the first time. So I would say there was a recognition that we wanted to build innovation into the world, and not just from a consulting perspective, but actually true applied innovation. And I think we achieved that goal.

Absolutely. Yeah, I mean the main proof here is the work, right? The business exists, it's out there doing its thing as a market leader. Most of the time when I'm speaking to clients, the ideas are great, but it's really the work that matters, and it's the outcomes that are generally the guiding light rather than just the big idea. It has to be the work. And on top of that, plus what we talked about with trust and safety, I can only imagine you're up against constantly shifting regulations from different countries as well. The US and EU are different, the UK might be different. How do you manage that, and how would you recommend business owners keep an eye on these regulations, which must be moving almost as fast as the technology, to keep things contained or at least monitored in some safe way?

Well, I think a lot of people have started to hear now about the EU AI Act. This has become one of the most focused efforts over the last couple of years to drive a regulatory standard for emerging technology and AI, especially as the architectures have moved around so much in that same period, as we talked about, so that you could put in place some legislation that would enable EU member states to at least get a handle on how AI is going to impact these countries. And we've leaned in very heavily into that. We launched a hub in Ireland, working with KPMG and Microsoft. KPMG Ireland specifically wanted to build out a capability, and we wanted to support that with Microsoft and think about what it would mean for an organization to drive readiness for the EU AI Act, even at an early stage, knowing that downstream, for AI systems in scope, non-compliance could be extraordinarily costly.

You saw it with GDPR historically; that was an intensive effort by most organizations, and in a lot of ways they only started to comply after the law hit. There was a recognition that people got behind the ball, but there was a lot of teeth in that one, right? Fines of up to 4% of turnover, those types of things. The EU AI Act is actually even more aggressive. So as people are pushing forward, building AI at this massive pace, the regulatory burdens in Europe are starting to take hold, and non-compliance here can run up to 7% of turnover. That's almost double, right? So people are starting to really think about how they're going to comply with that. And as a backdrop, similarly to what you saw with privacy in the US, you now have a presidential executive order in the US around AI, which details a lot of the same things in terms of risk, safety, security, and the goals of the federal government.

You see individual states in the US too: you're seeing leadership from New York, New Jersey, California. Almost every state in the union will likely have an AI act or executive order as well, tied very heavily to this. We sit on the NIST AI safety consortium, a group of about 200 companies thinking through the standards with NIST as it implements them against the presidential EO. And every single major country in the world is also now starting. Singapore launched theirs, the UK has launched theirs, you've seen Australia, you've seen Canada, and this is going to go unabated. China has their own. Every country is now concerned about where AI is actually being built, how it's coming into these countries, and what citizen data is getting impacted. So you'll start to see all of these take hold. It's going to be a pretty interesting couple of years as all of these regulations start to land.

Yeah, absolutely. We did some research here at Infinum seven months ago, just as we were getting more into the AI world, and maybe just to add a data point to what you're saying, because it's an interesting, fast-paced time and we get lots of people talking to us about innovation and product builds, and of course AI comes up all the time. So we did this research to figure out what the appetite for it is and what the preparation for it looks like. We found that 78% of the people we talked to plan to invest in AI tools, which is no surprise, it's a hot topic. But what was really surprising, shocking even, to me at least, was that 73% felt unprepared to actually integrate the tools they build into their operations, which is probably partly to do with regulations, safety, and uncertainty around how to facilitate that in their business correctly.

I suppose there's a question around this as well. Say I've got my new tech company over here, or my new banking company, hypothetically, ALS Bank we'll call it, that is going to step into AI in some way, shape, or form. From your experience, how would you even begin to create a successfully secure and compliant AI implementation of anything? What's that first step? It just feels like there's such a wide variety of things to think about and do at the start of this journey. You've done it, you're building a company that specializes in this, so where would you steer that ship for my hypothetical bank, for example?

So when GenAI first started to hit, you saw a separation. You saw those people who recognized that if they don't get on this train relatively quickly, they're going to get left behind. And with exponential technologies, especially ones like AI, if you don't actually start building around them and your competitors do, the dynamics start to change dramatically. So there's a recognition that you have to do this, you have to do something, you have to have an AI strategy. But on the flip side, there's now also a recognition that doing this incorrectly, or with an architecture that leaves you open to these threats and risks, is almost just as bad as getting left behind, because the pace at which the impacts happen is just as swift or swifter than in other technology transformations we've seen. And I think that's why people are so uncomfortable in a lot of ways.

If I talk about cyber topics generally, they've been in the news so frequently for so long that you kind of know them off the top of your head: what a phishing attack is, what malware is, what ransomware is. These topics are generally part of the dialogue, and in some ways so is how to solve for them. I think we're now at a point where we need a similar level of education and discussion around AI-specific threats and risks, how to deal with them, and the regulations around them. I would say a year from now, two years from now, you're going to know what those look like, just because we're having these dialogues. You're going to start seeing some of these pop up in the news, and conversations like, oh, well, I need to now think about this. Again, there are different architectures people are really looking at, and as we talk about retrieval-augmented generation, or data poisoning, or these other types of risks and threats, they will become more top of mind.

And I think that's what we're working through right now: how do I get more comfortable? I would say the first part is driving an understanding of how you think about AI. It was amazing to me, over the course of the last year and a half we've actually brought together a lot of round tables with different organizations and their leaders, and in general I would say there wasn't really a good understanding of what AI even meant to them. There was definitely no consensus around what AI was, and definitely not an understanding of what it meant inside the organization. So I think that's the first step. How do we think about it? What are we going to do? What are we not going to do? And then how do we put a policy around this so that we can start to really govern it?

How do we stand up our AI governance teams? Those are the things that people, at least in the larger enterprises, have tried to do over the course of the last year and a half. The next step is really thinking about how do I tool up. I find it interesting that there are a lot of security companies that talk about AI security but don't have an AI team, and I always find that very strange. We have one of the best AI teams in the world focused on retrieval and next-generation AI systems use, and we tie that to our best-in-class AI security capability and talent so that we can really stay abreast of those things using AI. How do we actually build it into the product? How do we support the pace and scale that we need in order to deal with these really fast-moving, significant, difficult challenges? So this year, I think, is all about how do I get that set up and start building. And I think you're starting to see that happen.

Yeah, that really resonates with what we've been seeing with clients of late. You're right: they'll say, we have an AI committee, and you think, oh great, that's good, we'll come and talk about AI, what can we do to help you? But it's usually that kind of, okay, we know it exists, we have to do something, but we don't know yet what that means for each of the different verticals in the business or our teams, or how we do that. So what you're saying is so true. Yes, it's backed by engineering to a degree, and that's always the backbone of everything getting done on the technical side. But the first engagements, many engagements really, are about understanding what AI does for each person in the business or the verticals they represent, and how that actually gets protected, governed, and understood before you even touch anything technical, really.

And it's quite interesting to hear your take that this exists all over. I always say to the people I'm talking to, it isn't just you; lots of people have a need for it but not a way to get there. We even started a quick AI assessment on the Infinum website, and we'll put a link to it somewhere in our chat. It's as simple as: what do you know at the minute? It could be a hundred or zero, it just depends. What data do you even have at this point in time? Do you even understand what regulation needs to go into this? And I think all these questions are quite triggering sometimes. People are like, yeah, we should know, but we don't, or they just say, we have no idea. It's such an interesting time for this, to really be the guide taking people on this amazing journey, but hopefully a safe one, with products like yours in the world. And just on the topic of products, how do you take feedback and feature implementation as a product team? I imagine you must hear and learn about new opportunities and new risks all the time, where you're thinking, we've got to address this somehow in the product. How do you deal with that?

Yeah, it's interesting. I would say there are two answers to that. One is we work with our clients directly every day to think about how we're improving the things we do already. If you look at our platform, it's really about how do I get visibility into my AI systems. There are so many new AI capabilities, systems, and products coming onto the market, and we need to know what form those are going to take. We think we do that great, but there's always going to be this continuous improvement of, well, now I have this other system, I need to think about these things, or somebody's integrating this into another code set and we need to be able to support that. We recently launched something at RSA called exposure management, which is kind of a combination of a lot of the things that we've been building, to try to drive exposure management on top of that.

So how do we actually think about those threats when we do the threat modeling? We need to keep up to date, think about the threat intelligence, build that out, and use what we're learning as we're building with clients. That continues to inform the product, and then other clients benefit as we add more clients to the product. So it becomes this very virtuous cycle of seeing something and being able to protect others. The compliance piece is there too: we think about compliance and third-party risk through the use of our Cranium AI card. Part of that innovation comes from knowing that market really well; we think about those things, and we want to be able to build around them securely. So if we're using it, we know where there's an opportunity.

So it's kind of the combination of getting really deep with clients, understanding how they're building and changing, and then looking forward: what do we need, how do we think about it, what are the things we think the market could really benefit from, and building towards those and being able to support that as well. Our engineers have a lot of latitude in terms of how they pick features, but we also give enough time on the research side so that people can really spike the things that are interesting to them.

I love that. We're talking in worlds of technology and AI and all the future stuff, where actually some of the best ways to get things done in the right way are that humanistic, let's-work-together approach: humans collaborating to figure out what people want, how to use it, and what the best way to develop a new feature or avoid a risk is. We believe in that quite strongly. We're always trying to add continuous value and discovery with our clients, so it's never just get something done here. Back to that circular motion: let's figure out how we can keep adding value and keep helping them navigate, because the world changes so fast, especially in this space. What can we do to support them as humans?

Absolutely. And look, some things aren't going to work out, so you have to make sure you've got good design partners and clients that are going to work with you, especially as a startup. Something's not going to work, and we need to be able to move around it. So I think keeping that flexibility and having that ability to move fast becomes paramount, especially at our phase.

Totally. Yeah. And keeping on the human riff, as a leader, how do you get the best out of your product teams, the humans in your business?

Yeah, I mean, look, this is a stressful time for our teams. We're in a really important space, it's becoming more competitive, we have to make sure we're driving value for clients, and we have a group of people who are the best at what they do and in high demand as well. So I think it's a combination of building a culture that focuses on really hard problems, because people want to know that their work is meaningful. Sometimes people talk about, well, I want this, or I want this benefit, or I want dry cleaning. But I think the most important thing people want, beyond just being in a place working with really talented folks, and they do expect to work with smart people, is solving really big challenges. I find more and more that if we can offer a place where we resource appropriately and give people an opportunity to go solve these big problems with really smart people, that is an amazing opportunity people see very infrequently in their lifetime: being at a startup that's one of the leaders in its space, working with great clients, solving a big problem. That's really how we focus on it. And our core values also help buttress that. So, no jerks.

We really want people to have respect for each other, we want to make sure we drive diversity, and we're doing things that are going to be helpful to the environment, so we're always careful to think about that. We buttress all of these things appropriately. But at the crux of it is that if we can solve those really big problems, we're going to drive success in the market.

Yeah, I love that. No jerks, rule number one. I love that wherever you work, whatever you do.

We try really hard at that. Sometimes you get into heated arguments that border on some of that, but those ultimately need to be healthy and respectful. That's where we draw the line.

I love that. And ideally have a background in magic and chocolate-making, and

If anybody gets beyond that, we just make 'em disappear. Yeah, that's it. That's it.

Okay, well, that's a great way to wrap that up. Look, we've got a bunch of questions from the audience and I'm conscious of time, so if I may, I'm just going to ease into a few and see what we can cover here. The first question is from Steve: how do you assess the current robustness of your business model and evaluate it in the face of AI and GenAI?

We started very early. This is a great question. We said, we're an AI company, we're an AI security company, and we need to use AI to build the business. So we've actually set goals specifically around that. Now, not every problem is an AI problem, so let's be very clear, there are certain things that we don't want to solve that way. But we've built an AI strategy that says we want to use AI in our product, we want to use AI to build our product, and we want to use AI in the business to solve problems so that we can scale. Some startups start by hiring a thousand people, and when they hit a wall, they fire 700 of them. That's a classic startup mentality. We said, we don't want to do that.

We want Craniacs, which is what we call our Cranium team members, to benefit from the innovation they're building, to solve problems by leveraging this, getting leverage that most organizations can't get. We set out targets, which I'm not going to bore everybody with, but they're really significant, and if we can build a business model around this, not only will we keep flexibility, we'll be able to go at a velocity that no other business at our phase can. And that business model will help us drive domination in the market. That's the ultimate goal, to really drive that market forward, and I feel like we're on our path to do that. It takes strong talent and a will to build that into the business, and it's hard, but that's how we think about it.

I love that. I love that they're all called Craniacs as well.

And you should see it; we just got all our Craniacs together. Inside the company, we call our praise system a head bank, so if somebody does something great and somebody wants to recognize it, they head-bank them. And we called our get-together the Headbangers Ball. It was great.

Love it. Culture. Beautiful. Look, we've got one more question I think we'll take, just to keep us to time. Moving away from Craniacs and all that good stuff, it's more of a question about LLMs and the company: who in the company should have the ultimate responsibility for which LLMs you'll be using and for training those models inside the business? That is a good question.

Yeah, so let's talk about that question. When we talk about the ways that models get trained, it depends on how you think about this. We do have some models we use that are open source: we pull those in, we put them into a RAG architecture, we fine-tune them, and we do that work. We think about training those models in some cases to support, especially, smaller models inside of our product. So we've got a combination of foundational open-source and smaller models, and then there's also the use of more closed-source foundational LLMs, GPT-4 or 4o, some of these models that you would integrate into your system. So we have all of those things. Our AI security chief, Josh, whom I mentioned earlier, is where this comes together: our AI security teams roll up to him, our product teams roll up to him, and our research teams roll up to him at Cranium.

He is the person responsible for making sure that those things come together so that we can research at pace, build at pace, and build out our AI security piece. We have a framework called build-attack-defend: as our yellow team, the build team, is building, we use our red teams to attack and our blue team to defend, and we use AI in the provision and support of all those things. I think you're asking about our products mostly, but we also use AI in our platform so that we can support those efforts. So at least at Cranium, that's who we have doing it. In different organizations, among large enterprises, about 50% of the time this AI governance sits with the CISO; the rest of the time it's a combination of a head of AI, somebody like a CISO, or somebody in risk or privacy, generally.
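For readers following along, the open-source-model RAG setup Jonathan describes can be sketched roughly as follows. This is a minimal illustration, not Cranium's actual implementation: the keyword-overlap retriever, the sample documents, and the `build_prompt` helper are hypothetical stand-ins, and a production system would use embeddings, a vector store, and a fine-tuned or hosted model consuming the prompt.

```python
# Sketch of a retrieval-augmented generation (RAG) flow:
# 1) retrieve the documents most relevant to a query,
# 2) assemble an augmented prompt for the model.
# The retriever below scores documents by naive keyword overlap;
# real systems replace this with embedding similarity search.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the most query terms in common."""
    q_terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the augmented prompt the model actually sees."""
    context_block = "\n".join(f"- {c}" for c in context)
    return f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The EU AI Act sets fines of up to 7 percent of global turnover.",
    "GDPR fines can reach 4 percent of annual turnover.",
    "Cranium spun out of the KPMG Studio.",
]

top = retrieve("EU AI Act fines", docs)
print(build_prompt("What fines does the EU AI Act allow?", top))
```

In practice the interesting security questions sit around this loop: whether the retrieved documents can be poisoned, and whether the model treats retrieved text as instructions, which is exactly the class of AI-specific threat discussed earlier.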

Love that. Yeah, a whole education there about red and blue teams and the yellow team in the middle. So I have one more question, if you've got the energy for it, and then we'll let you go back to fighting the good fight in the world. How do you create a culture of innovation at Cranium AI? It's a big question, but it could be a small question, depending on how you look at it. How do you do that?

I think we try to say yes. When we look at the things that people bring to the table, what I've always found is that we want to create an environment where people are looking to solve big problems, and when we see something we want to solve and we have the capability, we can go do that. We've written multiple patents now around some of the things where we've gone this way. We want to say yes, and we want to go after it. Sometimes it's going to work great, and in other cases it's not. Being at the phase that we're at as a startup enables us to do that in a way that more mature businesses sometimes struggle with. Innovation generally comes from your people; it comes from great ideas and solving problems. In some cases those are things clients are bringing, that we know are being asked for and developed in support of client demand. And in some cases they're things that are going to launch you into outer space, because people say, I think we can do this, I'm not sure, but we want to test it, and we say, yes, let's go after it. That's how we think about it.

Yeah, love that. Say yes, be a craniac.

Be a craniac. Say yes. Well, I don't do magic as much anymore.

Oh, okay. Well, hopefully one day. Look, I think that's a beautiful way to wrap up this big conversation. I feel like we've gone all over the world and around again to get to a point with some great knowledge. So thank you again for coming on, Jonathan. It's been a great conversation and you've truly delivered. Let's keep in touch and keep doing the good work with Cranium AI. We really appreciate your time. Thank you so much.

Thanks for having me.