Spend Advantage Podcast

The Safest Way to Leverage ChatGPT for Your Company

Varisource Season 1 Episode 41

Welcome to The Did You Know Podcast by Varisource, where we interview founders, executives and experts at amazing technology companies that can help your business save a lot of time and money and grow faster, especially bringing awareness to smarter, better, faster solutions that can transform your business and give you a competitive advantage. https://www.varisource.com

U1

Hello, everyone. This is Victor with Varisource. Welcome to another episode of the Did You Know Podcast. Today I'm super excited to have Raven Thambapillai, and I have to let you guys know, I practiced that ten times and confirmed it with Raven before the recording. Raven is the CEO and founder of a very amazing AI company called Credel, part of the Y Combinator Winter '23 batch. They are essentially a secure enterprise ChatGPT, and we're super excited to have you on the call.

U2

Yeah, it's super exciting to be here. Thanks for having me. 

U1

Yeah, no worries. Personally, I could talk to you for hours; ChatGPT, and AI overall, has taken over the world, but obviously enterprises have a lot of concerns about using it. So even though the intro was so simple, a secure enterprise ChatGPT is really a game-changing product, and we're super excited to partner with you guys. But I know you, Raven, have a pretty cool reason and explanation for how you came up with the name Credel. So can you give us a minute or two on the background, and then we'll get right into it?

U2

Yeah, the name is pretty simple. It refers to a type of decision making under uncertainty. Right now, enterprises have to make a lot of very uncertain decisions, and we really think that the right use of AI can help them make the best possible decisions despite all of the uncertainty. So we help enterprises make these really difficult decisions, and that's why we named ourselves after a particular branch of mathematics that is all about doing exactly that.

U1

Nice. Well, you and your co-founder have not only been through YC, which is validation in itself, but both of you have incredible backgrounds. Can you give us a little bit of your background and founder story?

U2

Yeah. So Jack and I, we met in 2019. We were both working at Palantir. I was at Palantir for, I think, seven and a half years; Jack was there for five years. So, yeah, we really saw that business grow a lot and evolve. When we met, we were on the life sciences team at Palantir. We co-led an engagement with a really big, multibillion-dollar life sciences conglomerate, and that engagement ended up being so critical that the life sciences company, our customer, actually cited their Palantir contract as one of their existential components in their S-1 filing. They said that if the Palantir contract went away, it would be a material risk to their business. So it was really fun working with Jack, obviously having huge success there. But then after that, we both went into different parts of the business. Jack went into a lot of our work with the U.S. Department of Defense, and I, at the same time, was working on
Palantir's work on the COVID response, trying to help keep America's hospitals afloat. And so it was really powerful, mission-driven work. And I think that's something we really care a lot about bringing into Credel: that sense of, why are we doing the stuff that we're doing? What's the deeper purpose behind all of this?

U1

So for those in the tech community, they all know Palantir. Obviously, they got their fame from working with the military, revolutionizing AI for military missions and saving countless lives, and all these things. But recently they've gotten into the commercial side as well. So what did you and your co-founder learn at Palantir that made you want to create Credel? Obviously, you guys were doing really impactful work there; you loved what you did, right? But what made you say, you know what, we need to go create this product?

U2

Yeah, well, you can imagine: Jack was working in the Department of Defense, I was working with HHS on the COVID response, and we were really handling some of the most sensitive data you can imagine. I won't talk too much about what Jack was doing, but I think it's pretty obvious
that the data there would be super sensitive. But even the stuff I was handling: we were integrating data from seven and a half thousand different hospitals in America and 60,000 nursing homes. You've got to be really careful about how you handle the underlying data sources there. But it wasn't only understanding the sensitivity of the data; we were actually finding ways to operationalize AI. The government was using the AI systems that we built on top of this data to make decisions about where to send emergency personnel, where to send COVID therapeutics, where to send vaccines, that kind of thing. And we really saw that if you figure out how to operationalize AI on this kind of data, it's just really incredible what you can do. We took the amount of time it takes to get emergency personnel to a hospital, for example, from about four weeks at the beginning of the pandemic to essentially zero days, so that the personnel would show up at the hospital on the day that it
ran out of beds. And I think the realization of how powerful these AI tools could be was a big motivator. But then, at the same time, we realized that large institutions, enterprises, government agencies,
were not really going to
be able to operationalize that data, simply because of the complexity of managing all of this sensitive information. And we looked at the ecosystem of startups that were building this kind of stuff, and we pretty quickly realized that no one was taking data sensitivity seriously. So from day zero, we set out to take security seriously, and thereby help some of the most critical institutions in the world, quite frankly, actually move into the 21st century and keep our society and economy dynamic and competitive.

U1

Yeah, we have a lot of amazing questions coming up on all of that. And you're somebody who, before AI became popular in the mainstream, let's call it, had been using it, working with it, seeing it in different areas. So what's your thought on the virality of AI in the last few months? It used to be something that existed only in sci-fi movies, or that maybe only the military and really big companies were aware of, but now AI is literally everywhere; I've seen a YouTube video where an 80-year-old grandma is talking to ChatGPT. So, since you've been watching the evolution of all of this, what's your thought on the last few months of virality?

U2

Yeah, I mean, obviously, as a longtime AI practitioner and someone who's been building in the space for many years, it's super exciting for me to see the breakthroughs in terms of getting folks to use, understand, and actually be able to engage with this technology and see its power. The main recent breakthroughs have really been associated with a specific sub-branch of AI, which we call large language models. And these large language models were really pioneered, actually, by Google, a long time ago. But OpenAI has just done a fantastic job of taking the underlying technology that Google invented and finding a way to make it accessible to people. And I think one of the things that I've realized over time about what was so brilliant about their approach was
this: if you engage with these models, if you try using ChatGPT, it's often the case that the first answer you get from ChatGPT isn't exactly what you want, right? And if you're used to using these AIs in very zero-shot ways, as we say, where you get one go and the response has to be perfect or it's useless, it's actually quite difficult to get what you need. But the moment you introduce this conversational chat-style model, it creates a very powerful experience where you can coach the AI into giving you what you need. So if the first answer is not exactly right, you can instruct it a little and say, okay, this is what I want to be different. It takes three, four, five prompts, but gradually you can get a really high quality response, in a way that's really difficult if you only get one chance. So the barrier to entry for all the skills people had to learn about prompting these large language models became way lower, because you could iterate a few times pretty quickly and get what you need. And seeing that change break through into the mainstream has been, honestly, really delightful for me. I've really enjoyed seeing it.

U2

And now it's not just OpenAI, right? There are loads of other really good models on the market today. I love the Anthropic models. I think the Bard model is really promising. The open source models are getting there; I'm sure they'll be really good in a few years' time as well. So I'm really excited about what the market will have for us in one, two, three years' time.

U1

A quick question I want to add on to that. There's Anthropic, there's Bard, and even Stanford has its own open source LLM; since ChatGPT came along, so many other companies have released large language models, open source or otherwise. But for the general public, there's this first mover advantage, as they call it, right? You just think of ChatGPT and you don't even hear or see any other company. That's the most amazing advantage any company could ever want. Obviously, people in technology know there are other language models. But for the general public, can you give us your thought on what the difference really is between ChatGPT or Anthropic or another language model? Because at the end of the day, the differences are so subtle from a normal user's perspective that they can't really tell. If you're asking a question and it's giving you an answer that's correct enough, you can't really tell the difference, right?
So what's your thought on where some of the differences are that people can think about with these models?

U2

If you think about how these models work, they're usually trained to try to achieve some particular objective, right? And different models, I think, have slightly different perspectives. If you look at the Anthropic model, what we've seen is that it's really targeted at trying hard to be honest, and at avoiding some of the pitfalls we sometimes see with things like ChatGPT, where it can just make up things that aren't true. For example, it will occasionally cite papers that don't really exist, or claim that a paper was authored by someone who doesn't really exist. The Anthropic model, I think, is pretty good at avoiding some of those pitfalls. But it's a trade-off, because if you look at what the OpenAI folks seem to have focused on, from what I can tell, and I don't have any inside information here, I'm speculating a little bit, it seems like they've really focused on making the model as useful as possible: trying to give you an answer that might be useful as often as possible, without, hopefully, saying anything too dramatically harmful.

And there are these subtle trade-offs between these different models, which create slightly different behaviors, where one model might be slightly more likely to hallucinate, but it might also be slightly more useful across a slightly larger set of prompts or questions. So there are very subtle distinctions, but honestly, like you said, my experience of using them has been that, at least with the really high quality models from Anthropic and OpenAI, the similarities far outweigh the differences.

U1

Yeah. So one thing to pull back into the enterprise use case, right? Because both of us sell to and work with enterprises. It's an amazing technology with amazing potential, but companies, especially the bigger ones, are going to be very careful about how to leverage it. They're all trying to figure it out, but there's also a difference between using it externally, adding it to their product set, versus using it internally to help with what I call leveling up the internal workforce. So maybe a two-part question. One: why do most companies struggle with that? Is it just because it's a new technology and they're still trying to figure it out? And two: how do you guys make that easy, simple, and workable for them?

U2

Yeah, great question. So I think what we see is that these enterprises often have a pretty broad array of regulatory constraints on what they can do with the data they have: where it can go, how it can be processed, all of that. And yes, the technology is really exciting, but at the end of the day, you can't just forget about all of your legal obligations and do whatever you want with this data. And so the responsible CIOs that we speak to are really excited about the technology and looking for ways to utilize it, but in a way that's compliant, that protects their employees' data and their customers' data, and that is generally respectful of the norms we have around data privacy, especially in this modern era where the possibilities of this technology are still so unknown. So that's one really big area that we see as important: customers that are really concerned about the data privacy and governance of their data. And that's what we help them do.
So on the one hand, there's the really basic stuff, like identifying the sensitive information in the data that your employees want to use with ChatGPT, Anthropic, et cetera. So PII, like SSNs and people's names; PHI, any health information you may have; bank account numbers and financial information; material non-public information if you're a public company. You don't want to accidentally disclose to OpenAI the
important things that are happening in your business, right? So we automatically detect all of that information in your company's data and then allow the infosec team or the IT team at the customer to write policies that get automatically enforced on that data: never send our financial projections to OpenAI or Anthropic; only ever allow that kind of data to be used with the on-premise LLMs that we have total control over. That's an example policy that someone might write, and in fact many of our customers do write, in order to make sure that their data is always protected and secure while still allowing employees to use the most powerful AI for the given problem

U1

that they have.

So you and I talked about how this is like the Internet 3.0; this is the future. I guess the Internet was the future 20 years ago, and I think AI is that same kind of evolution, right? Meaning every company on Earth, especially the Microsofts, Googles and Amazons, is jumping in, and those companies have a lot of advantages. So when we're talking about security and AI: Microsoft has AI and they have security, and OpenAI also has some functions that they're coming up with. Do you see them as competition? Why would a company work with you versus, let's say, with Microsoft, who I'm sure has security functions and AI functions? Can you talk to the audience about that?

U2

Yeah, absolutely. So we don't see ourselves as competitive with those players; we actually see ourselves as partners to them. We work with both of them, actually. And so if you think about the AI stack, there's the foundation model layer, in which
OpenAI is a player, Anthropic is a player, Google is a player. And we're definitely not trying to provide our own foundation model; we're not providing our own large language model. We're working with the existing providers for that. What we really do is provide the enterprise with visibility and control over exactly how each part of their data landscape is used by these foundation models. Now, part of that usage happens when your employee goes into ChatGPT and types a query in, right? So we help IT and infosec regulate that kind of chat-like experience. But there's also another really big component, which is that every company I know right now uses, at the very minimum, 30 software providers, and the bigger the company gets, that can actually get into the thousands. Like you said, this is sort of Internet 3.0: each of these software vendors, Notion, Asana, Jira, Atlassian, is going to start adding AI features into their product. And so if each of your software vendors is sending your data to these third-party models, you also need to get visibility and control back over how those companies are using your data with these models as well. So what we do is publish APIs that those companies can integrate with in order to get audit logs and provide visibility to your IT team about exactly what they're doing with
your data. So really, we don't see it as competitive with Microsoft or OpenAI at all. All of our customers typically
use us with at least one, if not both, of those players. And what we really do is get all of your data together in one place and format it in a way that OpenAI can read really easily, or that makes it really easy to use your Microsoft data alongside your Atlassian data, alongside your Slack data, for example,
and then give IT and infosec visibility and control over exactly how that entire data landscape is being used, not just by your employees, but also by all of your software vendors. So it's really very collaborative, actually, with both Microsoft and OpenAI.
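The "never send our financial projections to OpenAI" style of policy Raven describes a little earlier can be sketched in miniature. This is a hypothetical illustration, not Credel's actual API: the detection patterns, category tags, and backend names are all invented for the sketch.

```python
import re

# Hypothetical sketch of prompt classification plus policy routing.
# Patterns, tags, and backend names are illustrative only.

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")               # PII: social security numbers
FINANCIAL = re.compile(r"financial projections?", re.I)   # material non-public info

def classify(prompt: str) -> set:
    """Tag a prompt with the sensitive-data categories it contains."""
    tags = set()
    if SSN.search(prompt):
        tags.add("PII")
    if FINANCIAL.search(prompt):
        tags.add("MNPI")
    return tags

# Example policy: anything tagged sensitive may only go to an on-prem model.
POLICY = {"PII": "on_prem_llm", "MNPI": "on_prem_llm"}

def route(prompt: str) -> str:
    """Return which model backend a prompt is allowed to be sent to."""
    for tag in classify(prompt):
        if tag in POLICY:
            return POLICY[tag]
    return "external_llm"  # nothing sensitive detected: external model allowed

print(route("Summarize our Q3 financial projections"))  # on_prem_llm
print(route("Write a haiku about autumn"))              # external_llm
```

A real system would cover far more categories (PHI, bank account numbers, names) and enforce the policy server-side rather than trusting the client, but the shape is the same: detect, then route.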

U1

I love the approach you guys took there. I've got a follow-up question; I actually ran a poll on LinkedIn a couple of weeks ago about this, because it's really fascinating. When you buy a Windows computer, the Edge browser is the default browser, right? Whatever the next version of Internet Explorer is. And obviously Microsoft has it as the default, meaning they do everything possible to remind you and tell you to use that browser. And the funny thing is, everybody knows the first thing you do when you open a Windows computer is go download Chrome. It's just so fascinating, because from a browser perspective, it all goes to the same websites; what exactly does it do differently? It's just user behavior now. But because of OpenAI and Microsoft really gaining traction and partnership and virality, people, even I myself, are looking at Edge thinking, maybe I could give that a try. The perception of Microsoft has completely changed in the last four months. When you asked six months ago who the leader was, who was going to win this race in AI, people probably said Google, right? People thought of Google as the AI company, and now, six months later, the perception has completely changed, which is incredible, I think. But I'd love to get your thought on who you think has the advantage in this race. Do you feel like it's Microsoft or Google? Because Google, I wouldn't say they're behind, but they've had a couple of missteps with marketing and PR and some of these things. What's your opinion on that topic?

U2

Yeah, so I have to be careful what I say here, because I have slightly spicy opinions about some of this stuff. But I used to work at Google, actually, a long time ago, at the beginning of my career in 2010.
And if I'm being as direct as I think I can be, I will say that in that time, if you look at what Google has really released since 2011 that has ignited the commercial world, that's really been value accretive, other than lots of really great research (and they certainly did a lot of really great research), it's not obvious that there have been many new products out of Google in the last decade that have really set the world alight. And actually, honestly, it's sort of funny, because back then, in 2010, everyone said, well, Microsoft is a dying IBM-style beast and Google is obviously the future. But if you
look at what's really happened, if anything, it feels quite the inverse. And if I'm honest, I've felt that way for quite a long time: with the era of true innovation coming from Google, it's not been obvious where the innovation has really happened, at least in terms of innovation making an impact in the commercial world. Lots of great research, like I said. Now, companies can, and in fact do, turn this around, right? Microsoft is a great example, where it wasn't that long ago that we were all writing off Microsoft as a technology company. And then what it did was bring in a new CEO, make some really good strategic investments and really good acquisitions, and all of a sudden it's seen as an innovator, with GitHub Copilot and OpenAI and all this stuff.
So I think Google can do it. It's certainly behind, but I think it's going to need a really fundamental strategic shift in Google's approach if they're going to succeed. And from what I've seen, that level
of fundamental change is typically accompanied by changes in the executive leadership, actually. And so I'm not sure if they have the appetite to do that, but
it seems to be something
that they will likely be thinking pretty hard about right now. 

U1

Yeah, it was like lightning in a bottle. You were using Word and Excel for ten, 15 years, and suddenly now it's Copilot for everything. Literally, in two months the world changed; really pretty amazing, exciting stuff. But getting to the next topic we want to cover with you: from our discussions, I know you want to democratize AI so it can be used by every person in the company, to help level each person up. But having a tool that every person in the company can leverage is very difficult, right? And I think you guys have the potential to do that. So what are some use cases you've seen where different departments, different people are able to leverage AI for themselves? Their use cases may differ a little, but can you give maybe a few examples of how a company's entire employee base can leverage AI?

U2

Yeah, absolutely, love it. Great question. We actually just did a really deep analysis on this very recently, and we've seen some really fascinating stuff. So the biggest use case, which I think is kind of what you would imagine, so I won't dwell on it too much, is what I call internal business knowledge retrieval. This is something like a sales manager wanting to ask, how much revenue did we close last month in the manufacturing vertical? And instead of having to log into Salesforce, figure out the Salesforce UI, click eight buttons, download a CSV and then do a little bit of analysis on the CSV, you can just ask in a conversational way, and it's like having your own junior employee or intern go and do all of that for you and come back with the answer. So that's a salesperson, but then at the same time, you've got the engineering people.
Let's say you have a new software developer on your team, and they've been given a problem: go and figure out how to add a feature to a new service. Let's call it the Generate Prompt service. So they're working on that, trying to figure something out, and they get stuck. They're trying to work out, okay, who should I ask to help me solve this problem? So
they can go to the AI and say, who's had the most commits recently on the Generate Prompt service? The AI can integrate the data from GitHub, look at the commit history, figure out who has the commits relevant to this specific problem, and automatically identify the three engineers who are going to have a good idea of the right answer to this question. So really powerful
stuff that could otherwise take hours for each individual question. In some cases, our customers were telling us that, on average, they see up to four hours a week per employee saved on this sort of internal business knowledge retrieval. That's pretty significant, right? That's giving every single one of your employees half a day back every week. So that's pretty cool. I think one of the other really big use cases is communications. I know I personally am not the most naturally gifted communicator, and so these days I rely a lot on AI in my written communications. Not just to come up with what to say, obviously, but more to figure out how to frame things in a way that will make sense to the given audience. When I worked at Palantir, a medium-to-big-sized organization, you worry a lot about whether your senior stakeholders will see things the right way, whether they will react badly to a specific word, et cetera. You can now really use AI to solve a lot of those problems: okay, this is the audience, this is what each audience cares about; how do I frame this particular email or this particular document in a way that will resonate with each person individually? So that's been a really powerful use case. Lots of managers, middle managers in particular, are getting a lot of lift out of it, but also a lot of ICs, a lot of engineers who don't necessarily find communication to be their natural strong suit. The third biggest use case we've seen, which has been really powerful as well, has been internal sentiment analysis.
So helping companies retain their top talent by figuring out, from the employee engagement survey in Culture Amp or Lattice, or from what's being discussed in Slack: summarize all the feedback from the marketing team in the latest employee engagement survey and identify the key areas we need to improve in our culture. And in a big organization
of 10,000 or 20,000 people, that can be multiple weeks' worth of work that your AI can do in a minute or two. And that's a really powerful use case that I personally had never expected until we started working on Credel and saw these HR leaders doing it in real life, figuring out creative, smart ways to use this. So those are some of the top use cases we've seen already. But every week more comes out, and we see that people are incredibly creative with what they can get AI to do. That's one of the things I'm most excited about with Credel's expansion: the unleashing of human creativity on this new technology as well.
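The engagement-survey use case just described can be sketched as a toy workflow: gather one team's comments and assemble the summarization request that would be handed to an LLM. The survey data and field names here are invented for illustration; this is not Credel's pipeline or any survey tool's actual export format.

```python
# Toy sketch of the sentiment-analysis workflow described above:
# collect one team's survey comments, then build a summarization
# prompt for an LLM. All data and field names are made up.

survey_responses = [
    {"team": "marketing", "comment": "Too many meetings; hard to focus."},
    {"team": "marketing", "comment": "Love the new campaign tooling."},
    {"team": "sales", "comment": "The CRM is painfully slow on Mondays."},
]

def build_summary_prompt(responses: list, team: str) -> str:
    """Join one team's comments into a single summarization request."""
    comments = [r["comment"] for r in responses if r["team"] == team]
    bullets = "\n".join(f"- {c}" for c in comments)
    return (
        f"Summarize the following feedback from the {team} team and "
        f"identify the key areas we need to improve in our culture:\n{bullets}"
    )

print(build_summary_prompt(survey_responses, "marketing"))
```

In a 10,000-person organization the list would come from the survey tool's export, and the assembled prompt would go to whichever model the company's data-governance policy allows.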

U1

Yeah. So I think people that use or buy software know about shelfware, as we call it: you buy it and user adoption is very low. Sometimes it's the UI, sometimes people are busy, but you have to work the tool, when the tool is supposed to help you. You have to do so much to work the tool for it to be useful, and oftentimes what that means is it becomes shelfware; it just sits there, nobody wants to use it, so it's not providing the value people pay a lot of money for. That's a big problem in software in general, and I'd love to get your thought on this: I think another reason AI has been democratized, has really gone viral, is that the chat interface has made it so that anybody, regardless of technical skill, can use it; there's not a lot of learning to be done. You don't need to learn how to use the tool; you're just having a conversation, like a text message. Do you feel like that's really what's caught on? Because people know how to chat, so there's less to learn, less to click and figure out. Do you feel like that's really the breakthrough?

U2

I mean, it's certainly the case that there's less to click
now. I'm actually not so sure that there's less to figure out, and this is actually one of the risks of AI, because sometimes the responses you get from these large language models can be deceptive. If you don't have a good sense of how to validate, interpret, and actually use the responses you get, you run the risk of being led down the wrong path. You get a response that asserts, very confidently, that cooking vegetables never causes them to lose dietary fiber; I had a friend who was prompting ChatGPT about this recently, and it turned out that when he really looked into the papers it cited, none of them said what ChatGPT claimed, and one of the papers was just completely made up. So you certainly have to be a little careful with this stuff, because the chat UI gives you this sense of confidence that you don't need to learn anything: you just go in, type in the question, get a response, great. But there's now this subtle complexity, which is that all of the uncertainty is hidden behind that beautifully simple UI. And I do think users and companies will need to be careful. It's not necessarily about how you train users to use the tools, though that's certainly a component; what we're seeing is that a lot of users right now are learning, because it's so valuable, how to get the right answers out of the tool themselves. But I think one of the things that we as builders
in the AI space need to do is really focus on, okay, how do we build this AI safely, in a way that we don't give these very assertive, confident-seeming answers that turn out not to hold up under close scrutiny. So I certainly agree with you: taking away the complexity of which menu the button you need is hidden behind is great, but it comes with these hidden side effects that we as builders really need to think about how to solve as well.

U1

Yeah, I 100% agree. And I don't know if there's a right or wrong here, but when you look at Google, maybe there's more validation, and people always joke, "if it's on the internet, it must be true," right? Meaning even if it's on the internet, it may not be true either. Anyone can write a blog, and you don't know who they are, right? To me, even if something shows up in a Google search result, that doesn't mean it's always true. You still have to validate it or take it with a grain of salt. So I agree with you, I think there's still going to be a level of that. But to wrap up with our last question, and you're a great person to ask this: once AI came along, we thought, okay, next year things are going to go to the next level. Little did we know it really takes, like, two weeks. Every two weeks, every two months, things are going to the next level. So with that in mind, pretend you're a fortune teller. What do you think the next one year and three years will look like for AI? Is the world as we know it going to change so completely that we don't even recognize it? What's your expert take?

U2

Yeah, it's a great question. The trillion dollar question right there. I'll share my opinion. Obviously, none of us are truly oracles yet, so I'll give you my perspective and encourage viewers, or listeners, to form their own opinions. But the way I see this evolving, it's very clear now that AI is reaching the point where there are certain information retrieval and information synthesis problems that it can handle at least as well as, and quite possibly better than, humans.
And so, certainly for things like building interfaces for every software product you use: it will be very surprising to me if, three years from now, the typical software interface looks the way we're used to Salesforce or Jira looking, where there are all these menus and buttons and everything's confusing. AI should be able to take away all of that complexity pretty easily, and it should make a more conversational way of interacting with software the default. Now, to offer a slightly contrarian opinion to go alongside that: where will AI not take over? Where will AI not actually replace everything we see? From what I've seen, I really do not believe in the fully automated AI applications we hear about, like AutoGPT, which is this idea that you get two versions of ChatGPT, basically, and you get one of them to prompt the other, and it iteratively prompts itself until it completes some task. I think those sorts of things make great demos, but they don't really work. And the reason they don't work is that, scientifically, the AI is just not yet at the point where it can get exactly aligned with what a human wants. To give you an example: today, Copilot has about a 35% acceptance rate, according to the GitHub team. If it suggests a particular code completion, there's a 35% chance the developer will accept it. Now, they believe that with the right engineering, the right technical solutions, they can maybe get that up to somewhere around 60%. But then it's going to get stuck at around 60%, and getting it to 98 or 99% is going to require fundamental scientific advances in the way the underlying AI actually works. And what that means is that, for the foreseeable future, humans are still going to be in the driving seat of AI, right?
It's still very much going to be humans giving guidance, giving direction, making the final decision, but using AI as a tool to get a lot of stuff done, to operate with ten times the productivity and ten times the efficiency: not having to figure out the details of every user interface, not having to remember every specific fact. All of that stuff is going to go away, but humans as decision makers are going to become even more leveraged and even more important as a result. And so if I was thinking about what skills I should be learning to position myself as well as possible for the coming AI spring, I think the biggest one is going to be: how do I use AI to make better strategic decisions? How do I use it to get the information I need to make the most impactful strategic decisions I can? The companies that win in the next five to ten years will be the companies that get their employees to be really good at that. And if you can do that, then I think you'll be extremely well positioned in the next five years.
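[Editor's note: the AutoGPT-style loop described above, one model iteratively prompting another until a task is judged complete, can be sketched in a few lines. This is an illustrative sketch only: the `planner` and `worker` functions below are hypothetical stand-ins for real LLM API calls, and the stopping rule is deliberately naive, which is exactly the failure mode being discussed.]

```python
# Sketch of an AutoGPT-style loop: a "planner" model proposes the next step,
# a "worker" model executes it, and the loop repeats until the planner
# declares the task done. Both functions are stubs standing in for LLM calls.

def planner(task, history):
    """Propose the next step, or 'DONE' when the task looks complete."""
    if len(history) >= 3:  # a real planner would judge this from the outputs
        return "DONE"
    return f"Step {len(history) + 1} toward: {task}"

def worker(step):
    """Carry out a proposed step and return its result."""
    return f"result of ({step})"

def auto_loop(task, max_iters=10):
    """Iteratively prompt planner and worker until done or out of budget.

    The max_iters cap matters: with no human in the loop, a misaligned
    planner can wander indefinitely -- the failure mode discussed above.
    """
    history = []
    for _ in range(max_iters):
        step = planner(task, history)
        if step == "DONE":
            break
        history.append(worker(step))
    return history

print(auto_loop("summarize the quarterly report"))
```

The hard part in practice is not the loop itself but the planner's judgment: deciding whether the task is actually done, which is where current models fall short of human alignment.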

U1

Yeah. And I think one of the best ways for any company to do that is to utilize Credel, and that's why we're super excited to partner with you guys. Never miss a chance to give you a plug, for sure. But obviously, you guys saw this at Palantir, and now you're building it for the general public. Amazing stuff. We always have one last question for our guests before we close out the show: you've seen a lot, you've done a lot. If you had to give one piece of personal or business advice, it doesn't have to be about AI, we talked a lot about AI, but one piece of personal or business advice that you're really passionate about or believe in strongly, what would it be?

U2

Yeah, great question. I look back on my career a lot, and I think about the times when it felt really great and the times when it felt difficult. And the common theme across the really great times, and the missing ingredient in all the difficult times, has been working on something that I truly believe is important and needs to exist. And this is true both in work and in personal life. Whatever you're doing, if you can't do it with your whole heart, you're going to do a pretty mediocre job of it, right? And so whether it comes to the work you're doing, or the people you're surrounding yourself with, or the environment that you're in, for those things that are really central to your life, you want to make them something you really love. You want to be around people you really love, you want to work on problems you deeply care about. And if you can do all of that, everything else kind of fades into the background, right? The grind, the challenges, the ups and downs of life become a lot easier to handle. So that's my own personal learning. The biggest and most important learning for me is really to center my life around the things I deeply care about, love, and am passionate about, whether that's the people I love, or the problems I love working on, or the environments I love being in. And if I can manage to do that, then I'm typically pretty happy with how my life is going.

U1

Yeah. For me, obviously, I'm excited about the product and the value that you guys bring as a partner, but I also love working with good people, as I call it. And having talked to you several times, I feel that from you: you're very mission driven, and you truly, sincerely care about the impact of technology. You're trying to save the world, or at least impact the world, through technology, right? So, really cool stuff, man. And again, what you guys are building is going to be really important for AI moving forward. I appreciate you being on the show and look forward to partnering with you guys.

U2

Yeah, likewise, Victor. Thank you so much for having me.

U1

That was an amazing episode of the Did You Know Podcast with Varisource. Hope you enjoyed it and got some great insights from it. Make sure you follow us on social media for the next episode. And if you want to get the best deals from the guest today, make sure to send us a message at sales@varisource.com.