The Human Code

The AI Game-Changer's Guide to Business Success: Insights with Sultan Meghji

Don Finley Season 1 Episode 64

Navigating AI Implementation with Sultan Meghji

In this episode, we welcome Sultan Meghji, co-founder and CEO of Frontier Foundry, to explore the nuanced progression of AI implementation in organizations. Sultan shares his extensive experience in artificial intelligence, cybersecurity, and financial technology, highlighting the risks of rapid AI adoption, the importance of security, and the transformative potential of quantum computing. The discussion covers the evolution of AI in businesses, from simple automation to AI-powered autonomy, along with insightful use cases such as AI's role in combating fentanyl trafficking. Sultan and the host emphasize the importance of proper metrics, governance, and organizational commitment to change for successful AI integration. This episode offers valuable insights for tech enthusiasts, entrepreneurs, and businesses looking to stay ahead in the rapidly evolving AI landscape.


00:00 Introduction to The Human Code 

00:50 Meet Sultan Meghji: AI Pioneer 

01:57 AI Implementation in Organizations 

02:47 Challenges and Failures in AI Adoption 

05:00 Metrics and Governance in Digital Transformation 

10:00 AI in Counter-Narcotics: A Case Study 

22:02 AI in Legal and Compliance 

26:11 AI in Private Equity 

35:43 Future of AI and Business Innovation 

43:02 Red Flags in AI Projects 

47:59 Conclusion and Sponsor Message

Sponsored by FINdustries
Hosted by Don Finley

Don Finley:

Welcome to The Human Code, the podcast where technology meets humanity, and the future is shaped by the leaders and innovators of today. I'm your host, Don Finley, inviting you on a journey through the fascinating world of tech, leadership, and personal growth. Here, we delve into the stories of visionary minds who are not only driving technological advancement, but also embodying the personal journeys and insights that inspire us all. Each episode, we explore the intersections where human ingenuity meets the cutting edge of technology, unpacking the experiences, challenges, and triumphs that define our era. So, whether you are a tech enthusiast, an aspiring entrepreneur, or simply curious about the human narratives behind the digital revolution, you're in the right place. Welcome to The Human Code. Today on The Human Code, we are honored to welcome Sultan Meghji, a pioneer in artificial intelligence, cybersecurity, and financial technology. As the co-founder and CEO of Frontier Foundry, Sultan has spent more than 30 years driving AI innovation, working in senior government roles, and shaping the future of technology. In this episode, we explore the progression of AI implementation in organizations: how businesses evolve from simple automation to complex AI decision making, and eventually to AI-powered autonomy. Sultan shares his insights on the risks of rushing AI adoption, the importance of security and decentralization, and how quantum computing and AI will transform industries. We also dive into AI's role in human longevity, financial systems, and cybersecurity, and what individuals and businesses need to do today to stay ahead of the AI curve. Join us for a fascinating discussion on how AI is shaping the future and why understanding its progression is key to successful implementation. This is one conversation you won't want to miss. I'm here with a good friend of mine. It's probably a fair statement now. I feel like

Sultan Meghji:

Yeah, I think so.

Don Finley:

just a really nice experience. So Sultan Meghji is here with me, and we're going to dive into implementation of AI in your organization, really going from that zero-to-one perspective, but also things that we're seeing in both of our businesses. Sultan is the CEO of Frontier Foundry. They've got a great product that I'm absolutely loving, and it works in a variety of spaces. So Sultan, I just want to jump in and get your insights and thoughts, and we'll freewheel this for a while, about what it takes for companies to go from that zero to one. And additionally, what are some of the failures that you're seeing in that aspect of: I'm sitting here going, I've got to get AI implemented, the boss is harping on me? Where do you see the success, and where have you seen the failures? And I'll jump in and drop some of our insights from FINdustries as well.

Sultan Meghji:

Yeah, this is, I think, one of the biggest challenges that most organizations are going to face for the next few years. Everybody got so excited about AI over the last 18 to 24 months, and a lot of people just jumped in with two feet without really thinking about it, in terms of digital transformation or in terms of what the actual results of that work would be. You and I have both spent time in digital transformation, and if you don't know what the end result is, and you don't, in essence, map backwards from the end result, you're not going to get there. And I think there are a lot of organizations that have gone down this AI path, and either they wanted A and they're getting B, and B is not as much value, or it looks like every other enterprise IT program, where it's a three-year journey and they never quite get there and the CIO leaves after two years. We've seen this, rinse and repeat. And then finally, they don't understand that it's not something you get to get wrong, fundamentally. Whether it's a competitor if you're in the private sector, or another nation state that might be hostile to the United States if you're talking about the government, AI and the subsequent automations that are coming from it are non-negotiable in terms of how you operate. And if you are thinking that the way you've always done it before is going to be true two years from now, three years from now, at the rate that this technology continues to grow and expand, you're wrong. Or if you're building a three-year program against what the technology looked like five years ago, which is where a number of the name-brand AI firms live, you're really making a huge mistake. And I think we're at a moment where it's time to stop playing around with it and actually think about it cohesively, relative to the overall business that you're running.

Don Finley:

It definitely resonates, in that the companies that are successful in this transformation so far are looking at it from the standpoint of use cases. They're

Sultan Meghji:

Yeah.

Don Finley:

getting some quick wins, knowing that you can get that feedback loop tightened around: hey, is this implementation actually helping us or hurting us? Are we engaging our customers in a positive way with this technology? Or even internally, do we have some sort of metrics that showcase the result of it? And lacking that has been a key indicator of whether somebody is going to fail.

Sultan Meghji:

Absolutely. The metrics comment is, I think, a great place to start. A lot of people are inventing new metrics around artificial intelligence projects in their organization, and that's a mistake. You should be looking at how your AI projects are actually working against your existing metrics, so that you can actually study longitudinally whether you're having a positive impact, whether it's on the bottom line or wherever you measure your organization. And that, to me, is one of the first red flags when I talk to a customer as to whether or not I think they're going to be able to be successful, whether they're using us or anyone else: are they thinking about it backwards from a metrics perspective? And secondly, do they have an existing governance mechanism for that change that they can then lean on? And I'm sorry, we're supposed to be talking about AI, but now you and I are fully in the vernacular of digital transformation. It's metrics and governance. It's not sexy, it's not interesting, but you have to have that.

Don Finley:

And I think you're hitting on this; we're both tooting each other's horns here, because we are looking at these as digital transformations. And if you look at the last time we probably all saw metrics being created, it was around the 2000s, with the internet bubble, and companies being valued on views, clicks, everything else. But it comes down to fundamentals. You know your business, and you know the metrics that are going to move that business; just because you're adding AI to the equation doesn't really change what those metrics are. And at the same time, yeah, digital transformation isn't sexy, but it's about how you bring the organization along with you. I think that's one of the areas where I resonated with what you're doing at Frontier Foundry: it's a solution that allows you to bring the organization along, do it in a safe way, and create the governance structure that's necessary for an organization to succeed. And additionally, it crosses that chasm of the AI solutions that are out there that you're going to go buy and integrate with; those are, essentially, consultants. And I don't mean that to be disparaging, consultants have a place, but how do you transition that knowledge and that infrastructure to be part of your core team, as a member of your organization?

Sultan Meghji:

Yeah,

Don Finley:

that's what Limney and Kuhnney offer to the environment.

Sultan Meghji:

Thank you. The ability for anyone to understand that there's a difference between a consultant and a valued member of their team is a big deal. And that's a great way to think about it, because so much of the artificial intelligence that's out there right now is a consultant, as you've rightly pointed out. It comes in and it throws a PowerPoint at you, or throws some generic answers at you. But because so many of these systems aren't tuned on your data in your environments, and are not, in essence, walked from generic to specific by people who are domain experts from your own organization, you don't get to that level of value. You don't get to that AI becoming a team member that really does radically accelerate the best of your organization and help manage the worst of it out. And that's really what we're trying for. It's a fascinating moment to realize that this technology is at a point where you can, and we do it with Limny and Kundi often, and I can talk about some specific use cases, put this technology in the hands of a very small number of people inside an organization to take it that last five yards in the build process. That turns that generic tool into something that is uniquely yours as one of our customers; you just have it, and it's your own tuned version. It's private. It's secure. It's a black box to us, but it's entirely explainable and transparent for you. And it can fit inside of regulatory environments; that's one of the key features that we built into the system, out of the box, so it can live in a variety of different regulatory regimes. And the idea is that you get all of that, and then it becomes an expert in your business. It's as if you've grabbed that really bright young kid and given them the three-year tutelage they need to become an absolute expert, and you get to do it over the course of a few weeks or a few months. And on the output of that, you have a trusted partner. 
You have someone that you can be talking to interactively, daily, and getting direct value out of daily, because you're building it with metrics. You're building it backwards from that, and you're building it with the domain experts on your team. You're not trying to get rid of them. You're trying to augment them. You're trying to make their decades of experience available to everyone else in the organization, but guided and automated in a way that humans just can't do.

Don Finley:

What does it look like, getting from that day zero to two months down the line, when you have that person in your organization? Because I can go to one of the main frontier models today, sign up for an account, and I'm spitting out new emails, I'm spitting out presentation decks. I'm getting that consultant sort of flavor right away. But how do we get somebody inside the organization?

Sultan Meghji:

Let me tell you about one use case we did earlier this year that I can talk about now. It's a project we did with the Department of Homeland Security on counter-narcotics. We were specifically looking at fentanyl interdiction, and what we were trying to do was identify ways to basically make it riskier for drug traffickers to cross, and how to, in essence, guide their behavior. And out of that, what we were able to do is start with an end result, which was: we knew we wanted a map of the southern border that looked at every single border crossing and basically rated it in real time, trailing a couple of years, to say, hey, red, yellow, green: if I was a drug trafficker and I was looking at that crossing, would I want to cross there? And when I say cross, I mean a significant amount of fentanyl. I don't mean a backpack or a bag; something that would kill thousands of people. That map is what the output needed to look like, and that's what they asked for. They wanted something that the domain experts could look at. So then we worked backwards and worked with a bunch of domain experts from a variety of different organizations, both current and historic, and came up with, in essence, a tuned artificial intelligence that had a built-in volatility model. Anyone from the hedge fund universe would know what a volatility model is, but we actually built a custom volatility model for drug traffickers, as if you were the head of the Sinaloa cartel looking at the volatility risk of crossing the border. So we started at zero, took our technology out of the box, applied specific humans and specific domain data, and created a custom AI platform. And, by the way, the first version of this went from zero to first test in less than six weeks. 
So, just to give you a sense of the time scale: we were looking at the last couple of years of data, and that gave us the ability to say, hey, listen, if you look at Yuma, Arizona and compare it with some other place, I can tell you that for a certain window of time, Yuma was the hardest place in the United States to get fentanyl across the border. And there are a variety of reasons why that's true, which our system called out; some of them obvious and straightforward, and some of them subtle and nuanced. And then there are others, in the San Diego region or in western Texas, where, in that same window of time, it was far easier. At one point, and I'll hold back which city this is because I prefer not to name it, there was an over 93 percent chance that if a drug trafficker tried to take a truckload of fentanyl across the border, they would get across.

Don Finley:

Woo!

Sultan Meghji:

Yeah. And you go to Yuma, and the likelihood of that getting across the border is like 2 or 3 percent, depending, and that was a pretty consistent number for a while there. Kudos to whomever was running the operations in Yuma. So that's what the result looked like. And that's a really interesting and sexy result, but how would that apply to someone who isn't trying to counteract the Sinaloa cartel? What is it? It's having a small number of domain experts with decades of experience brought into a conversation with a system that's 95 percent of the way there already. Then you add the relevant data, you add the domain knowledge, you add their domain experience, and you get to a unique custom AI model that, as new data comes in, just keeps running through it. And you have two interface points. One is, you can interface with it like you do with any of the large language model systems, and you can ask it questions. You can say, hey, what was going on? I saw a spike on this day. What was happening in the news that day? Was there something in the local news that relates to this? Or you saw a drop-off, and that allows the domain experts to say, oh, when we see something like this, like a drop-off in seizures over a two-week period, maybe there was a shift change. Maybe one of the senior guys was off training somewhere else. You can find causal factors, and you shorten the cycle time very quickly to get to a place where the system is just sitting there, automated, and can send alerts and say, hey, listen, so-and-so is going on vacation. Maybe you want to shift the duty roster around and make sure the next most senior guy is taking over, because they've tagged that guy's phone, so they know he's off, and maybe a vulnerability window has opened up. Those kinds of explorations become almost too many to handle early on. 
And so that's why, again, focusing on really narrow use cases with the domain experts as you're doing the tuning and training is important. But that gives you a sense, I think, of one example of how we've used this technology.

Don Finley:

It's a great one. It's a use case that clearly has benefit; we all understand the challenges that we're going through with fentanyl. And then, additionally, the speed at which you're able to get the system up and going and trained. In today's world, or rather yesterday's world, we would have seen that probably take a couple of years to actually go through, as well as not even being able to handle the amount of data points that you're talking about on a reliable basis. And

Sultan Meghji:

Yeah.

Don Finley:

the old solution would have been to do that. Basically as an audit, once every year,

Sultan Meghji:

Something like that. Right.

Don Finley:

and then basically have another year go by as far as that feedback goes. But now you're talking about a feedback loop of a couple of days, in regards to: hey, Steve's going on vacation, let's bring in Emily to ensure that we actually have coverage.

Sultan Meghji:

Yeah. The interesting thing about this, and I'm glad that you're highlighting the age of some of the technologies and these batch cycle times that we're used to historically, is that that's just all humans can handle. Very rarely will you be able to have a person getting an email at eight o'clock in the morning every day saying, here are the four things you have to be better at. That's a tough work culture, and I wouldn't suggest anyone use technology to do that. But you get to turn around and create systems that do that, and then it's a recommendation, it's a highlight. It's basically saying, hey, listen, Joe over here happened to notice this one thing, this one time. Now the system can check for that every single time, with every single data point. And you don't need to build it against a specific technology set at that point, or a specific data set at that point. It becomes a dialogue, and it's interactive. So it's not just that you're going from batch to real time, or batch to iterative. It's batch to interactive. And I think that evolution is something that, if the organizational culture and the organizational processes can support it, then that is a green flag for me, in terms of them asking those kinds of questions as a potential or current customer.

Don Finley:

So what are some of those green flags around going from batch to interactive? Because I think if you sit down with any executive, they would love to go in that direction, and everybody that I talk to wants to be there. But few are. It does take a special case to be fully ready to handle this amount of transactional information.

Sultan Meghji:

Yeah. Again, it's funny: let's take the metrics and governance stuff and just take that as done. Those are table stakes. The next step becomes organizational will to change, and to change at scale. And again, you narrow the use case to make that a less difficult conversation for organizations, but fundamentally, from board members to the C-suite all the way down, and translate that to whatever your organizational model looks like, you have to be willing to take people out of the normal daily processes and start chipping away at that from a human perspective. If organizations have, for example, a built-in model that allows their staff time to get comfortable with the technology, time to learn about these kinds of technologies, training and educational resources, and just time in the day so that people, when they're in their critical-thinking bands of high effectiveness, can actually do this, that becomes a huge green flag for me, if they have those, or if that becomes part of the conversation. If we get lots of questions like, hey, can you come in and do a town hall? We've signed this deal with you, and we're going to do this thing. Or, hey, can you come into the office? We're going to do a brown-bag lunch once a month, and X percent of the company is going to be there, and we'd just like you to be in the room and have the conversation and talk like you and I are. Again, a huge green flag. So that's on the human side. The other big green flag for me is that this isn't a big enterprise IT project; this is fundamentally a business project. I find that the vast majority of people who come to us are thinking about us, or about any AI project, as a big enterprise IT project, and that's more of a red flag for me. 
If it comes in as a business value proposition conversation, then I'm more likely to entertain the nuances, because there are only so many hours in the day for us.

Don Finley:

I 100 percent agree. And along those lines, I do enjoy when people want to have the whiteboard sessions and the bag-lunch sessions as well; those are great. And then the other side of this is, I completely agree that if this is being driven by technology, your adoption is likely going to be low. When it comes in from the business, those are the projects where you know they're buying into making those changes, and they want to see that ROI come through.

Sultan Meghji:

Yeah. I'll give you an absolutely fantastic example of a customer where it was a mind-blowing discussion, because in maybe the second conversation, they brought in their chief risk officer. And I immediately, massively downgraded the likelihood of that organization doing anything, because I'm just like, oh, they brought in the risk guy. What it ended up doing was just creating a third use case for us to talk about, because the risk guy was thinking about this the way that you and I are thinking about this. And so the chief risk officer of a big financial services firm is saying, wait, this is going to make us better at this, so that my people don't have to bug, and he pointed to the guy sitting to his right, who was purely on the business side, his people. He's like, if my guys don't bug his guys, then that's a better day for everybody, because my guys can sit and stare at their screens and not walk over to Joe and tap him on the shoulder and say, hey, why did you do this? And that turned out to be, I would almost call it, a green flag. Now, if someone thinks about a risk-management-oriented use case, or a compliance use case, or something like that, that to me becomes really interesting, because then it's not only a business decision, but a risk management conversation. And our company is not a fit for everybody, but we're really designed for organizations that have really strict regulatory or compliance burdens, or that worry about senior members of the organization being called up in front of Congress, or something like that, and we want to be focused on that. I guess it makes sense in hindsight that I was definitely apprehensive, and now I think of it as a green flag.

Don Finley:

And I love that, because we were also talking about some use cases pre-show, and the idea that risk could be a hindrance to the implementation. It's along the lines of: there are use cases that I think we see coming from the banking sector that people get excited for, or even from the legal sector. I was talking to a family office, and they were looking for AI to help them with legal solutions. My first take on that was, I don't know if you really want that today, from the standpoint that even with the best LLMs, the hallucination rates we see are going down, but we're still not 100 percent there. And I don't think that anybody's really accepting AI-written contracts as having pure, good legal standing. But there is a place for AI in the process, and it could be used, as I think we were talking about, as a review, or for identifying places where there is human bias, and that could be part of the process and part of the compliance framework going forward, to double-check the efforts of what you, the human, are doing.

Sultan Meghji:

Absolutely. I was very surprised to watch our legal business grow this year, because I approached it with very similar skepticism, I would say, to what you just described. And it wasn't just a technology quality issue. The legal industry as a whole is very broad, obviously, but it's also a very human process. I look at our first couple of legal customers, and they all came in, I would say, looking quite similar in terms of what their asks were. It was fundamentally advanced document management, let's call it that, and every single one of them has ended up in a completely different place. We're working on a project now to identify ways to optimize discovery processes in certain categories of court cases for a fairly significant legal organization at the government level. And you're talking thousands of different documents, thousands of different cases, and simply giving them the ability to highlight subsections of documents, so that instead of reading 150 pages to get to the one answer they need, they can read four paragraphs and get to the answer they need, with a source that they can click on so they can see the actual document, things like that. And it's absolutely fascinating to look at how you shorten the human intellectual effort, from a lot of low-value activity and a very small amount of high-value activity, to removing as much of that low-value activity as possible. So that, again, if you've got a prosecutor who knows exactly what they're looking for, that person with decades of experience should not be sitting there scrolling through a screen, or finding a junior person to scroll through a screen for four hours. It should be: hey, here is the tuned, multimodal system for this case. It has all 3,286 documents, or whatever the number is. 
I need you to tell me, with all the history and everything you've been trained on, where are the places that similar cases have started? Where are they going? What documents are missing? What haven't we asked for yet? What hasn't been disclosed yet? And you can do all of that, basically out of the box, with these technologies now. But you have to be in a compliant framework and data model, because you can't put 3,000 documents from a prosecutor's office into a cloud environment; beyond it being unfeasible, no one should do that. That's a terrible idea. And you shouldn't put it into a generic model, because it won't be able to give you any useful information. But if I say, hey, listen, I have 47,000 police reports of a certain type, and this is a real case, used to train a system, guess what? That AI is going to be pretty darn good at understanding that specific kind of police report, and the police officers, and really being able to drive out the low-value human activity so that the humans can focus on the high-value stuff.

Don Finley:

I love this example, because you're also talking about organizations that have gone from zero to one, but are now going for that next phase of the iteration; we talk about it from a crawling-to-walking perspective. What are you seeing as the questions that successful organizations are asking themselves as they get the first batch done and then look at that second batch?

Sultan Meghji:

The single most interesting thing is that every single customer that has graduated from zero to one and onward has one common trait: before they even get the very first version of the MVP, that very first tuned model in the zero-to-one process, they are already adding things to a parking lot of what they want to do next, just because of the discipline that you have to go through to do the tuning and training. We ask a lot of very specific questions. We have people who will go and literally sit and be like, okay, we're going to write a script for you, and this is how you interact with the system; we do this kind of interesting way of doing it. Just that process of asking those questions exposes what they want to try next. And so to me, everyone that I see as successful in going from one to two, you can see the key indicators of it, because the creativity, the key players, the domain experience, to use a slightly technical term, they grok that there are orders of magnitude more value to be captured. And you see it in the zero-to-one process. Here's a great example. We work with a couple of private equity firms, and we've applied the same model, this exact question. They are obviously looking at managing long-term investments and things like that, and you can absolutely see, in the due diligence documentation for an acquisition, the key indicators of success for the sale six years later, four to six years later, depending on the PE firm. That's a super interesting, very early use case for us.

Don Finley:

Say that again? You're talking about having, in your early discussions with them, an early indicator of something that is going to come out six years later?

Sultan Meghji:

Yeah. So when a private equity firm acquires a company, they go through a due diligence process: they look at documentation, they interview people, et cetera, et cetera. I've actually written a bunch of Substack articles on the specific applications of AI to private equity. It's an area I really love and have spent a lot of time in, and I think it's a fascinating area that people should look at. It hasn't really used AI as much as it should, and I'm trying to build more awareness, which is why I mentioned this use case. But you can look at the due diligence activity that was done for a private equity firm to acquire one of their portfolio companies, and you can see key points that will make the sale either more likely or less likely to succeed when the private equity firm decides to sell the company four, six, seven years later, whatever the term of the fund is. And we're starting to see some really interesting data that is very easy to get in those environments, because these private equity firms will have tens, if not hundreds, of portfolio companies in their different funds. And you can absolutely, without a whole lot of heavy lifting, identify places where the guys on the ground, on the banking side, will want to get a deal across the line because they want to get the sale done, because they want their commission. And then they will come back after the fact, and you will see, they're like, oh yeah, this due diligence document was slightly off, or the consulting firm did something wrong, or whatever. You can find, very easily, a bunch of places where the emotional will to get the transaction across the finish line actually violated the scripture, if you will, of the spreadsheet that it had to sit inside of. And you can see it.

Don Finley:

That is intriguing, because I'm also thinking about this from the standpoint of you're looking at what due diligence showed you, and then additionally when you're going to exit from that company in the future. And so if you have those highlights, it's also another point of information for the dealmaker to basically say, is this deal looking any more, you know,

Sultan Meghji:

Attractive, yeah.

Don Finley:

yeah and

Sultan Meghji:

That's it. And that's exactly what it's being used for. Yeah.

Don Finley:

Yeah, that's an exceptional piece of information, because just coming back to this, whenever we help companies do that due diligence, they're looking for something that is going to give them that alpha on that company, what makes it attractive for them to buy. But emotions always come into dealmaking.

Sultan Meghji:

Especially, especially in that universe. Yeah.

Don Finley:

Oh, yeah, absolutely. And I'm still processing this, because I love the aspect of what you're doing. When we look at what AI can help you with, that's probably not a zero to one kind of implementation. That's really that one to two, that crawl-to-walking type of space: you're comfortable with the technology, you have an understanding or familiarity.

Sultan Meghji:

I would say that on the private equity side, no, that's more of a zero to one kind of thing. I think the real value, yeah, the real value gets much more, I don't know. Private equity is early for us, and when I say early, our company is not two years old yet, so you have to allow for that. But I would say that on the private equity side, there's just so much data out there, and so many of these funds have so much of this data, that it's really not a challenge, because they've already self-selected down to a narrow subset because of how the funds are structured, whether it's by market, portfolio company target size, et cetera, et cetera. A lot of the documentation that the consultants do in the due diligence process is incredibly similar. This is the old McKinsey joke, the photocopied McKinsey PowerPoint joke. The fact is there is a lot of that, and they're just changing a few numbers, which makes it easier for the systems to identify issues, find commonalities, and create structured data out of it that they can use as intermediate steps in the quantitative analysis. As you go through that process, I think there's a lot people don't appreciate about private equity. A, just how much awesome data they have, because they have amazing data, it's very reasonably structured, and they don't have big, lumbering enterprise IT systems in the way. A lot of the time it's just a folder of documents, not big databases or anything, and we're past the point where there's a meaningful difference between those two. That's number one. Number two is, it is absolutely clear going into a transaction what the exit has to look like, because you can't spend the money to buy the company unless you know how you're going to make whatever you need to make on the back end of it. And so you have your metrics out of the box. You have that result out of the box. You have a timeline out of the box.
You have an overall risk model for the fund out of the box. It is an absolutely target-rich environment. And I'm just incredibly excited about it, because I think we're going to see a bifurcation in two markets, and this is one of them, between the nimble, fast, data-driven guys and the others, as I would call them. And I think we're starting to see glimmers of that. In the hedge fund space, we already see that bifurcation occurring, and that's why you saw so many hedge funds, especially multi-strat hedge funds, struggle in 2024, in an absolutely amazing market environment. You saw funds that would have 150 people and billions of dollars struggle to compete with a similarly sized firm that had 25 or 30 employees. That's just the value of what that technology can bring and the automation that it brings. So I think hedge funds got into this a little faster, but I would say that the other side of it is a little slower.

Don Finley:

I think we're definitely going to see a pretty strong bifurcation, as you pointed out, between the ones that can actually act upon this and the ones who are just deciding not to, or, like we talked about before, not doing this as a digital transformation project. Because the farther you get away from that, or just implement tech for technology's sake, you're not going to see the results or the impact that you want to see in the business. And along the lines of where we see companies succeeding, we've touched upon the crawling aspect of this, getting the basics done. We've talked about a couple of use cases around successful implementations, how people can look at this, the augmented intelligence side of it, not relying on it to go it alone. Are there other areas where you're seeing successful organizations? What are some other green flags that are coming out?

Sultan Meghji:

The other one that I think is probably worth mentioning is that there are a subset of organizations that are never comfortable with the status quo, culturally. And that gets expressed in a bunch of different ways, whether it's efficiency, bottom line, whatever. There's the old Ford model where you fire the bottom 5 percent of your employees every year, no matter what, and iteratively over time you end up pushing the whole group north. Whether that's true or not is a separate issue, but that kind of cultural process is in place. What's interesting is that we are now far enough down this technology discussion, over the last 30 years, that people need to do the same thing with technology. And so to me, I look at a green flag being that people know technology is always a federated architecture. There are always going to be multiple systems, and they're always going to be talking to each other. So if you are talking like that and acting like that, and have a built-in, okay, every three years we know we have to put some energy into modernization of function X, whatever that is, and you have this nice little Gantt chart that shows how every year X percent of your tech operating budget goes into doing that, that to me is a big green flag. There's a huge law firm that we're finalizing what I would call phase two with, and that entire discussion was a green flag for me from the first time I talked to their CIO, their head of tech. That's how he was talking about it, and it was just a freaking fantastic conversation. He's a funny guy too, which made it even better. But it was amazing, because his view was absolutely focused on: listen, there is a longevity that most people ignore in technology, and we don't want to be five or ten years out of date on something and then have a compliance issue or a cybersecurity issue come across.
So that's one other big green flag. The other green flag that goes in parallel with having that kind of longevity view around humans and technology is also understanding that those technologies automatically give you new opportunities that you have not thought of yet, and having a process in your organization to say, listen, we know every three years function X is going to get a technology upgrade, so how are we going to find another value that it creates? That inherent improvement model is, in essence, greenfield. And so it's the rarest; the top 1 percent of the top 1 percent of firms and organizations do this. But when I hear that, I know they're going to be successful.

Don Finley:

You know what's funny is I never put two and two together on that one, and thank you for enlightening me on this. So, selfishly, this whole conversation has been worth it just for that one idea. But I think you're hitting a really strong note. We're in the middle of what some people are calling an AI bubble, and I don't agree with that. I think we're just at the beginning of implementing the models we're seeing today, and people aren't really fully grasping what can be done with them yet. We haven't hit that stride amongst the use cases. But there's having that concept, that idea that innovation is happening and that you're going to see something changing in the next year, two years, three years. And I'm specifically thinking that we're just at the beginning of agentic AI inside of organizations, and we're just seeing reasoning models at the beginning of where they're going to be. So what we can do in the future is going to be a lot more than what we can do today, and to have the idea that you're building a foundation for what you're looking to expand upon, knowing that you don't know exactly what that AI will be able to provide you in three years, is a pretty strong moment.

Sultan Meghji:

Yeah. I get a lot of questions from firms about not just the risk of AI, but the risk of their competitors getting AI and doing a better job than they are, or new companies getting started that are hyper-competitive with them. In the post-Silicon Valley Bank collapse venture capital universe, a massive percentage of the venture capital money that's been spent over the last few years has gone to a very small number of companies. It's a massive consolidation that I don't think is quite well understood. But what it means is that a lot of the air in the room for a lot of AI companies got sucked out, because it's gone into a very small number of firms who are then investing in creating their own ecosystems. They're hyper-verticalized ecosystems, and they're doing what Amazon and Salesforce did over the last 15 years in the cloud and enterprise application space; they're just copying that, just doing it much faster. But what it means is that the innovation cycles are actually hyper-accelerating in very narrow areas in those environments. For the average organization, what it actually means is that since all that investment is happening over there, the actual place where AI money is being well spent in the early stage is on businesses that are going to compete with existing businesses, not on tech stacks. And so I would say that a red flag for me is if an organization doesn't recognize that every organization is at risk. I've publicly said I think at least 50 percent of the S&P 500 is going to change in the next three to four years.

Don Finley:

Oof.

Sultan Meghji:

We'll see. Maybe I'm on the aggressive side, but the fact is it's not impossible, and we're seeing more companies stay private longer. I think there was a joke not too long ago that the Series J is the new IPO. And as you step back and realize, if I am an organization that is a human-process organization, and you look at agentic AI, multi-modal AI, privacy-centric AI systems that operate in those environments, you are now at a point where, if I was running a venture fund, I wouldn't invest in technology. I'd be investing in businesses that are basically photocopying an existing business, and instead of having 5,000 employees, have 50. Or instead of having X customers per unit of work, have 50X customers per unit of work. And this is where I think SpaceX is probably a really early and really good example of that, right? They fundamentally created a factory for building rocket engines. That's what SpaceX did. And then they built the rest of the organization around it.

Don Finley:

That's a fantastic way to look at it. Yeah.

Sultan Meghji:

SpaceX is an awesome company. It's like every couple of months they do something where I'm just like, wow, cool time to be alive. And that's great, but SpaceX shouldn't be the exception. That should be what every business is doing. And going back to the PE example, in our conversations with some of the PE firms, there are a number of portfolio companies where I've said, you know, I will show you 10 companies that are going to try to disrupt that business. If this company loses 3 percent of its market share in the next two years, you're toast. So you should sell that company pretty quickly, unless you think their leadership team is in the top 1 percent of your leadership teams and can go through this journey in 18 months, or has enough of a balance sheet that they can buy that competitor and take them off the market, which is the normal historic way that's worked.

Don Finley:

I have a billion-dollar product that was basically squashed, like the big pen moment, by a large competitor that just wanted to take the technology off the market.

Sultan Meghji:

You see that in banking. There are three companies that control the vast majority of the technology in the banking system, and that's why 80 percent of banks in the United States have tech platforms that are more than 10 years old. That has been their MO for 20 years: anytime an interesting tech company comes along that looks like it's going to hit a reasonable inflection point, they get bought out or commercialized out, one of the two.

Don Finley:

Yeah, I've seen that as well.

Sultan Meghji:

Yeah. Oh yeah. None of these are new conversations, just new tools.

Don Finley:

I think that's exactly it. It's a new tool in a new space, and I love the concept. A, you need to be looking at the innovation that you're driving year over year inside of your own organization. Additionally, there are a lot of hopeful glimmers we can identify from this conversation: the value of AI isn't going to be driven by the tech platforms. It's going to be driven by the businesses that are actually doing something with it to serve more customers, to lower their operational costs, to grow their reach, to hyper-personalize.

Sultan Meghji:

Yeah. And this is why the generic large language models, especially the non-agentic ones, I think are on a really limited window of success. I would compare it to the late 90s. Google was not the first search engine, not remotely, and there were a bunch of them that raised lots and lots of money and disappeared quite quickly. If I were to talk about Ask Jeeves or AltaVista, some of those, no one would have any idea what I'm talking about, but those were all early ones. I think a number of people out there will be very surprised when some name-brand AI firms that are talked about in the news a thousand times a day just flame out. And there'll be some pretty irritated investors, I'm sure. But the fundamental notion that generic AI, with the current state of technology, and by current I mean the next 10 years, is going to work is, I think, a challenge for most organizations. They need something hyper-customized for them, and they need dozens if not hundreds of those, based on the different use cases, the different value creators inside of those organizations. And it is absolutely a waste of electricity to throw a 10,000-GPU farm at something like that, when in reality what you need is a 24-CPU, 4-GPU server solving one specific problem on a narrower case. Get 10 of those and you're not going to break the electric grid. As a total tangent, I think this whole race to nuclear is long overdue, because it fundamentally should be a piece of our electrical infrastructure. But the notion that we need billions of GPUs out there in order to do the vast majority of things that businesses need to be successful is, I think, just people who don't understand computer science hyping this up a lot. And that's the hype cycle of AI we're in, I guess.

Don Finley:

Yeah, I would definitely agree with that. We're looking at an efficiency metric on AI where your cost per intelligence is going to actually rival what it is to put a human in the seat, with

Sultan Meghji:

Oh yeah.

Don Finley:

looking at it from that standpoint of building out the supercomputers, 10 per setup. And for most aspects, I think you're 100 percent correct that we don't need that. All right, I think we've hit a nice natural point in the conversation, and I think we've covered zero to one well, and one to two a little bit. Is there anything else that you want to drop into the conversation as we wrap it up?

Sultan Meghji:

No, we've covered quite a bit. The only thing I would say is, you talked about green flags; there are a couple of red flags we haven't explicitly talked about yet. They're maybe not quite as important as the green flags, but there are a couple that certainly give me pause. Probably the biggest one worth mentioning is if I ever hear about a data consolidation program going on, or one that needs to be finished in order for the AI project to get going. It's a huge red flag, because 99 percent of the time it is a three-year program with a CIO who's a year into the job, who will be gone within one to two years, and has nothing invested in getting it actually finished. It's a way of tapping the brakes on anything getting done. And if we go out two years into the future and look at 2027, we can, with a reasonable degree of certainty, know that there are a variety of other things that people are going to care about in 2027. They're going to be different business drivers than there are today in a lot of ways. The competitive environment is going to be different. Just basic operations of cybersecurity are going to be different. Every organization in the world, over the next two to three years, is going to have to replace non-trivial amounts of their infrastructure to become quantum resilient, just as one great example. So if you're two to three years into a data consolidation program for AI, and all of a sudden you have to take the next phase of money and spend it all on quantum resilience, what have you got? You've put all your data in a single basket and can't do anything with it. And you're paying a very large monthly operating expense, because you've probably put it in a big enterprise cloud, and you're just going to be spending a lot of money on data that's not actually doing anything, because very rarely are you also changing the systems of record for that data.
So you're basically paying for a hot backup, and that's really all you're getting out of it. So that's a pretty big red flag for me.

Don Finley:

Okay. And I think that falls into the bucket of technology for technology's sake. Right? Data consolidation projects, I've rarely seen them tied to ROIs or having a strong use case for why you're doing them. If it comes to my table as a data consolidation project, that's one thing. If it comes as, hey, we're doing X, Y, and Z in order to get this ROI, and part of that is data consolidation, that typically has a nice metric to it. On a corollary that's similar to this: I sat on a non-profit board as a volunteer, and all the board members were volunteers as well. One of our members was also an accountant who specialized in non-profits, and he said every non-profit board with volunteer members suffers from a lack of a cohesive vision that everybody can get behind. When you get into these organizations that have a very high drive around their vision, but the implementation of that vision can happen 12 different ways, you tend to lack the focus of how to get to the next stage. So I think you're hitting a great point on the consolidation piece; it comes back to the green flag, the opposite of the red flag here.

Sultan Meghji:

Technology for technology's sake, I think, is something a lot of people really should take to heart. I have this conversation in the crypto universe more often than I like: what's the actual thing that's being done better, what's the use case? And the great thing is we're at a point now where we are seeing positive use cases in crypto. There are places where it's great; there are other challenges there. But technology for technology's sake is a huge problem. Especially since 2008, and especially since 2020, we are struggling with a cohort who just like to implement technology because they want to implement technology, they want the shiny new thing, on one side. And then the other side is, it's worked for the last decade, I'm not going to touch it, because I don't want the risk of touching it. In both cases, those are extremes that aren't the right answer. It's designing for organizational longevity, designing for technology longevity, right? That's where people need to be putting their energy. And this, to me, becomes the antithesis of technology for technology's sake: I'm going to build the best company or organization I can, and then I want to build an organization that outlasts me, that is systemically relevant, et cetera, et cetera. We're just not hearing as many people talk like that in the last 15 years as we did in, let's say, the better part of the 20th century.

Don Finley:

Yeah, which is interesting, because we were, I'd say, blessed with the research of Jim Collins, and Good to

Sultan Meghji:

Totally. Yeah,

Don Finley:

I think it was chapter seven where he basically said technology is the match that lights the fire. If the process is defined, technology can be great. If it's just thrown in there, that's usually where money is burnt.

Sultan Meghji:

that's exactly right.

Don Finley:

Yeah, Sultan, thank you so much again. It's been an absolute blast having you on,

Sultan Meghji:

You too, man.

Don Finley:

really enjoy the time that we get to spend with each other.

Sultan Meghji:

Nah, this has been great. There were ups and downs in 2024, but getting to call you my friend in 2024 was a definite up.

Don Finley:

Absolutely, my friend. Thank you.

Sultan Meghji:

Awesome.

Don Finley:

Thank you for tuning into The Human Code, sponsored by FINdustries, where we harness AI to elevate your business. By improving operational efficiency and accelerating growth, we turn opportunities into reality. Let FINdustries be your guide to AI mastery, making success inevitable. Explore how at FINdustries.co.
