What's New In Data

Reimagining Business Intelligence Through AI: A Conversation with Zenlytic's CEO Ryan Janssen

March 29, 2024 | Striim

Unlock the potential of AI in the world of data analytics with Zenlytic's CEO, Ryan Janssen, as he takes us through a journey from collectible DataMons to the sophisticated integration of AI in business intelligence. Imagine transforming industry pros into trading cards – that's the kind of innovation we chat about, highlighting the whimsical yet calculated steps towards making data not just informative but downright engaging. Ryan recounts the evolution of Zenlytic, from its machine learning beginnings to its current status as a conversational analytics platform, opening up new avenues for how we interact with data.

Data is the new gold, but only if you know how to mine it. This episode peels back the layers of complexity surrounding data modeling and the resurgence of semantic layers, unraveling the intricate dance of accessibility, maintenance, and user experience that businesses must perform. We discuss when your organization might be ready to embrace a semantic layer and the unmistakable signs that it's time to elevate your BI tools for a self-serve experience. Ryan and I also tackle the importance of iteration and soft skills in delivering successful data projects that are not just functional but mission-critical.

As we wrap up, we cast an eye on the horizon of data analytics, where AI isn't merely a trend but a series of incremental innovations shaping the future of data products. From the significance of trust and compliance in AI adoption to the debate between building versus buying AI solutions, we cover the strategic moves companies need to consider. Listen in for a candid discussion on the dynamic roles of data teams, the transformative power of AI like Zenlytic's Zoe, and how different data structures can cater to the divergent engagement levels of users by 2025. After all, the future of data isn't just about numbers—it's about the stories they tell and the decisions they drive.

What's New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What's New In Data hosts industry practitioners to discuss the latest trends, common patterns in real-world data, and analytics success stories.

Transcript

Hey, everyone, thank you for tuning in to today's episode of What's New in Data. I'm super excited about our guest today: we have Ryan Janssen, CEO of Zenlytic. Ryan, how are you doing today?

I'm doing fantastic, John. Thank you so much for having me on the pod. I'm a huge fan of the pod, of you, and of Striim, so I'm very excited to be here.

Oh, thank you. The admiration goes both ways. I'm definitely a big fan of the work you're doing at Zenlytic, and also of your side quest, your side passion – I don't know if it was a one-time thing or something you're continuing to spend time on – making datamons of people in the industry. I have to tell this story; it's one of the highlights of my career in this industry. This was at dbt Coalesce last year in San Diego. Ryan and the team at Zenlytic made, quote unquote, datamons, which are essentially Pokemon cards, but of people in the data industry. It was such a delightful thing, and everyone loved it. There were datamons for all the people you can think of who frequently post about data on LinkedIn and whatnot. It created a lot of delight, and it was one of the things that made the dbt Coalesce conference super fun that year. So Ryan, I have to ask you: what's the story behind creating those datamon cards?

Oh, we had just as much fun creating the datamons as everyone had receiving and sharing them. It was one of the most fun things I've had a chance to do at Zenlytic. Our goal with the datamons was to highlight the capabilities of all the really neat stuff you can do with AI, and that's exactly what it was. People don't even realize this, but those datamon cards were like 99% AI generated. For those that don't know, it's a little card, like a collectible monster card: there's a picture of the person and some text with abilities and powers and things like that. The way the tool worked, and we built a small tool for this, is that people would come to the Zenlytic booth and put in their LinkedIn profile. It would take their LinkedIn profile picture and use Midjourney to create the image, and it would also use an LLM to create the copy on the card. Those were actually linked together, so it was multimodal: a turtle-related datamon would have a turtle-y name, turtle-y skills, and a turtle-y picture. They were hilarious.

I was blown away by a few things. First, we were very surprised ourselves by how well they turned out; the quality of the copy and the images was just firing on all cylinders. The other thing that really surprised me, and I think this is mission accomplished for highlighting all the neat stuff that can happen with AI, is that sitting in the booth and reflecting, I remember thinking to myself: this would not have been possible at last year's Coalesce. This technology is moving so fast. A year earlier, the image would have been some kind of weird stick figure with nine fingers, and the copy would have been kind of disjointed, written by GPT-3 before RLHF and things like that. It was literally impossible to do before the advent of this technology.
And in less than a year between those two Coalesces, it became possible. So we were really excited about the project; it was lots and lots of fun. And we're thinking about what's next, too. I can't spoil anything yet, but we've got a pretty amazing whiteboard for next year's Coalesce. We're going to do some more fun stuff with AI, I think.

Yeah. It really seems like you're ahead of the curve in terms of turning AI into data products that delight users, or into fun conference novelties like the datamons. And I look at the actual product that Zenlytic is – you phrase it as the business intelligence you can talk to – and it's really incredible how quickly your company has folded the latest innovations in AI into your customer-facing product. So I'd also love to hear a bit about that.

Yeah, for sure. The reason it feels like that happened quickly is because we started early. I wouldn't say we're an overnight success, but you hear these stories of real overnight successes, like Instagram, and they're huge, and people forget the six years before that, where they weren't huge. Our experience with AI started many years before the revolution. Paul, my co-founder, and I met when we were studying machine learning in 2018, and afterwards, when we set out to build Zenlytic, we always had some AI and ML functionality in the tool. It was neat, actually: the early versions, before LLMs really started taking off, used the progenitors of GPTs. Before GPT there was a model called BERT, for instance, an open-source model that's much more rudimentary, much smaller, and the quality isn't there, but we had always been incorporating these sorts of techniques into the tool. They say one of the most important parts of running a startup is being willing to be wrong for longer than anybody else. So we had some basic functionality for a long, long time, and lucky for us, it got better faster than we expected. It kind of happened overnight, and we were well placed to quickly jump on the capabilities of these new models.

What that meant specifically was, first, that we could improve resolution. BERT, for instance, has a pretty hard time understanding the difference between "last month" and "last complete month"; nuances like that can trip it up. So you can improve performance and resolution significantly. The other neat thing it unlocks is chat. I think about search versus chat: before we had the really big LLMs in play in Zenlytic, it was a search. There's a single bar, you type something in, you get your answer. We were able to jump very quickly into what we call Zoe, which is our chatbot. Zoe lets you actually refine an answer over multiple questions. If you ask an ambiguous question at the start, Zoe will come back and say, wait, what do you mean by "top" here? If you want to drill in, or refine, or say, oh wait, I actually meant this instead, you can do that in a chat.
That was kind of a step change for me. I think that's really important, and it gets me really excited. But the journey started a long, long time ago, with some pretty basic models.

Yeah. And speaking of models, let's talk about a different type of model. The way you frame the usage of your product, the steps to success with Zenlytic – which is a business intelligence product with AI and a semantic layer built in – is: step one, connect your warehouse; step two, define the semantic layer; and step three, ask Zoe. Like you mentioned, that's the chatbot: ask Zoe questions. So I want to get into the semantic layer part. I would love your perspective on both the foundational aspects and the latest and greatest parts of data modeling.

Yeah, for sure. That's such an interesting way to phrase that question, because I think you cover an important sub-question in there, which is: is the semantic layer part of your data model? It's kind of like the age-old question of, should I do this in Looker or dbt? I would answer yes, by the way: the semantic layer is part of the data model. And I'd say it's not a coincidence that we're seeing a resurgence of these two concepts in the community at the same time. Semantic layers have been steadily gaining momentum and popularity over the last few years, led by a number of great tools, and at the same time, very recently, even just in the last couple of months, people have been rediscovering data modeling. The interesting thing is that those are two sides of the same coin. You can't have a data model without due consideration, at the outset, for how that data will be consumed by your end users. A big mistake people make when they're thinking about how to build the data model is that they start at the warehouse and work outward, whereas I think they should probably start at the solution and work back. Let me give you an example: if you're dumping your BigQuery data into Google Sheets and letting people play around with it, that requires a pretty different approach to data modeling than if you put a really effective semantic layer on top and use a BI tool to run those queries live at query time. So one thing I would really consider: if you're interested in data modeling, ask yourself if it's time for a semantic layer, and if you're considering a semantic layer, you should think about the data model that enables it.
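For readers who want to picture what "step two, define the semantic layer" looks like in practice, here is a minimal, hypothetical sketch. The structure and names are illustrative assumptions, not Zenlytic's actual configuration format, but they show the core idea: measures and dimensions are declared once, with approved SQL expressions, and every downstream query is compiled from those definitions.

    # Illustrative only: a toy semantic layer, not Zenlytic's actual format.
    from dataclasses import dataclass

    @dataclass
    class Dimension:
        name: str
        sql: str  # column or expression in the warehouse

    @dataclass
    class Measure:
        name: str
        sql: str  # aggregate expression, defined once and reused everywhere
        description: str

    # The "data model" side: one governed definition of revenue, instead of
    # every dashboard re-deriving it slightly differently.
    ORDERS = {
        "table": "analytics.orders",
        "dimensions": [
            Dimension("order_date", "orders.created_at::date"),
            Dimension("channel", "orders.acquisition_channel"),
        ],
        "measures": [
            Measure("total_revenue", "SUM(orders.amount_usd)", "Gross revenue in USD"),
            Measure("order_count", "COUNT(DISTINCT orders.order_id)", "Number of orders"),
        ],
    }

    def compile_query(measure: str, group_by: str) -> str:
        """Compile a (measure, dimension) request into SQL using only approved definitions."""
        m = next(x for x in ORDERS["measures"] if x.name == measure)
        d = next(x for x in ORDERS["dimensions"] if x.name == group_by)
        return (
            f"SELECT {d.sql} AS {d.name}, {m.sql} AS {m.name}\n"
            f"FROM {ORDERS['table']}\nGROUP BY 1 ORDER BY 1"
        )

    print(compile_query("total_revenue", "order_date"))

The point is the governance: whether a human clicks in a GUI or an LLM picks the fields, the consumer chooses from total_revenue and order_date; it never gets to invent its own definition of revenue.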
Excellent. And not all data modeling requires a semantic layer; there are a lot of approaches to it. But when should teams start asking themselves, hey, should we start looking at a semantic layer?

I think you'll get different answers to that question depending on who you ask. I'm generally on the pro side, but let's look at the tradeoffs. The benefit of the semantic layer is that it's incredibly helpful for a self-serve experience. That's the first thing. If you look at the history of how people consume data: if you're comfortable with SQL or Python or whatever, 20 years ago you were probably querying in a window that looked a lot like the Snowflake window, and today you're probably querying in a window that looks a lot like the Snowflake window. Things haven't really changed that much. On the non-technical side, 20 years ago you were asking that other person, and now you're actually clicking around in a GUI, manipulating individual fields in a Looker window, for instance. That's changed a lot, and that's because of the semantic layer. It's one of the big pieces of technology that advanced to unlock that self-serve experience.

Another interesting part of the semantic layer, and there are a few of us who believe this, is that I think it's an essential component for good LLM performance. There are two philosophies for data chatbots. One is text-to-SQL, where the LLM writes the SQL. The other is the semantic layer approach, where the LLM's queries go through the semantic layer and are governed by it. I'm very much in the latter camp because, well, it's governed, for starters. If you ask a text-to-SQL chatbot about a metric that doesn't exist, it could try to invent a definition that hasn't been approved. And I don't think LLM accuracy is there yet to generate SQL with the reliability level of a BI tool. So I think the semantic layer is an essential part of the LLM experience.

Now, the tradeoff of the semantic layer, the reason you wouldn't use one, is complexity. It's work. And that's one of the problems with the semantic layer, and I think it's actually an endemic problem in our industry that we're only partially acknowledging: building and maintaining these pipelines takes huge amounts of effort and time. You can very easily have 1,000- or 10,000-line YAML files in a semantic layer, and it can be multiple full-time jobs to maintain that. So, to answer your question, the right markers for evaluating whether you should use a semantic layer are how the data is being consumed, and the size of your team and your resources. If you have no data team, you're probably not ready for a semantic layer. If you have one or two people, yeah, maybe; it depends on how big and how comprehensive you want that semantic layer to be. Likewise, if you live in Google Sheets, skip the semantic layer for now. But if you have more comprehensive data that you want to dig deeper into, if people want to drill and explore, or if you're getting a lot of ad hoc requests, which I think is a good indication, those would be cases in favor of going for a semantic layer.
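A minimal sketch of the two chatbot philosophies Ryan contrasts above. Everything here is hypothetical and simplified: call_llm() is a placeholder for whatever model API you use, and none of this is Zenlytic's implementation. The structural difference is what matters: text-to-SQL hands the model a schema and accepts whatever SQL comes back, while the governed approach only lets the model choose among metrics and dimensions that already exist in the semantic layer.

    # Illustrative comparison only; call_llm() is a placeholder for any LLM API.
    import json

    def call_llm(prompt: str) -> str:
        """Placeholder: send a prompt to an LLM and return its raw text reply."""
        raise NotImplementedError

    # Philosophy 1: text-to-SQL. The model is free to write any SQL it likes,
    # including metrics nobody has ever defined or approved.
    def text_to_sql(question: str, schema_ddl: str) -> str:
        prompt = f"Schema:\n{schema_ddl}\n\nWrite SQL to answer: {question}"
        return call_llm(prompt)  # and hope the SQL is right

    # Philosophy 2: semantic-layer governed. The model can only choose among
    # approved measures and dimensions; the SQL itself is compiled deterministically
    # (for example by something like the compile_query() sketch above).
    APPROVED = {"measures": ["total_revenue", "order_count"],
                "dimensions": ["order_date", "channel"]}

    def governed_answer(question: str) -> dict:
        prompt = (
            f"Question: {question}\n"
            f"Pick one measure from {APPROVED['measures']} and one dimension "
            f"from {APPROVED['dimensions']}. Reply as JSON with keys "
            f"'measure' and 'dimension'."
        )
        choice = json.loads(call_llm(prompt))
        if choice["measure"] not in APPROVED["measures"]:
            raise ValueError("Model asked for a metric that has not been approved")
        return choice

The governed path can still misunderstand a question, but it cannot quietly invent a new definition of revenue; the worst case is a refusal or a clarifying question, which is the behavior Ryan describes Zoe having.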
Yes, and the never-ending stream of ad hoc requests: hey, do we have a report for this? Do we have a report for that? Oh, I like this one, but I just want to add one field to it. I think these are definitely symptoms of an organization that needs a semantic layer. And tying that back to your point: if you don't have a data engineering team, or it's super small, or you don't have the maturity yet to really maintain a semantic layer, then perhaps you're not ready. But at the same time, if you're getting all these questions from the business, you as a data leader are tasked with qualifying the value of that. And if it turns out that answering those questions, and being able to move swiftly with data, has a lot of business value, then it's time to ask for that budget and staffing. So that leads me to my next question for you: how should data leaders actually go about implementing self-service analytics, both technically and in terms of getting business buy-in?

Yeah, good question. Let me start my answer with an anecdote that straddles both of those. You've probably done this yourself, but I'm always astonished by how effective it is: if someone sends you an ad hoc question and you have a semantic-layer-native tool, instead of just giving them a plot, you send them a link to an explore with the proper fields selected and say, here you go. And after that, the questions just stop. It's amazing; that just settles it. The person says, oh, I see, I can just click and change this, and they'll actually make the changes themselves. It's probably the most effective way of seeing self-serve in action.

Now, on to how you would implement self-serve. We could get into people, processes, and tools. On the tool side, the important things are transformation, a semantic layer, and some sort of BI. It doesn't have to be a boil-the-ocean BI tool or a boil-the-ocean semantic layer or transformation layer, but those are the essential pieces for a great experience, I think. Something people fail to acknowledge today is that a solid majority of self-serve experiences still start from a dashboard, "explore from here." They're less often a de novo question where someone opens a blank self-serve page and starts selecting things; they usually begin from a starting point. So that means you need dashboards, and some sort of BI tool, as that starting point.

In terms of philosophy, the most important thing when you're deploying these tools, and this touches both people and processes, is that people make the mistake of trying to do too much, too fast. They try to boil the ocean. Anytime anybody starts thinking of your self-serve deployment as a digital transformation, that's a big red flag, and it means you probably need to trim it down.

Right. I like that.

Yeah. So it's all about starting small and iterating. Now, there's an inherent risk here. Iteration is how you build good products, and data tools are products, but the difference is that your data tools can't really be wrong. Trust is a non-renewable resource, and if you start giving people bad-quality data or inaccurate tools, they're going to lose trust and stop using them. So you have a very delicate balance between iterating to improve the tools, starting small and adding and fixing things as you go, and retaining the trust of your end users. All along that iterative process, you need to be over-communicative. You need to be very, very clear about where the weak points are. So you can say: we've added this new metric,
and I think it's correct, but please try it out and tell me if you see anything you want changed. If you point out where there could be errors in data quality before they happen, you can maintain that trust and keep iterating with that user. This is why I talk ad nauseam about the importance of soft skills. I spent a bit of time as a consultant in a past life, and I have a lot of respect for the people who are really great at managing stakeholders and are great communicators. I think the data industry's collective psyche is that we all understand that's really important, but I don't think we're quite living up to it yet. So, in summary, the right way to do this is to start small and iterate; this project will succeed on soft skills, not on really, really great SQL.

Yeah, 100 percent agreement with you there. I've said this before, but data teams are really the bridge between engineering and the business. That's the way I see it, at least the way we operate internally here at Striim, but I also see it across our customer base. The data is coming from your prod software systems, your microservices, your databases, your Salesforce, things along those lines, your whole tech stack, whatever it is. And the data team's job is curating that and creating insights from it. You can view that as purely technical, oh, we're moving data from this stuff into this warehouse and transforming it, blah, blah, blah, but the reality is that you're trying to distill those details to a level that business users can understand and actually act on. And that's the most critical thing, because you can build a ton of pipelines, go completely off the rails, and produce outputs that are completely unusable. Then it's like, why even have a data team? You could just go directly to the source and pull a report from Salesforce or do a dump from your database ad hoc when needed. But like you said, the soft skills are critical in terms of managing stakeholders and making sure you understand them and their needs, so that your pipelines are providing value and can ultimately become, the magic word, operational. And there is a real path to that.

That path is fascinating, by the way. You've probably followed it all the way down, like I've seen happen a few times: you start out where people are like, oh, it's a data team, that's neat. Then people start to take it seriously and build trust in the data, and there's that strange inversion point when that data system becomes a system of record. Suddenly it's a mission-critical team, and there are SLAs around it, and there's PagerDuty for the data team, and things like that. You could debate whether that's the right approach or not, but you definitely see it happen, and it's fascinating to see these teams go from new initiative to mission critical.

Yeah, it is definitely fascinating to see that. And once it becomes essential, it's a really great sign.
That's when the number of ad hoc requests goes through the roof, and you get more people asking you, hey, are these numbers right? And you have your validation process to prove it out. It's really a fun journey to watch that evolve across the board. So, one of the things that's accelerating this, and that has huge promise for data teams, is AI. Now, you've rolled out a product that seamlessly integrates AI into the customer experience. So I'd love your perspective: is data dependent on AI, or vice versa? Actually, why don't we back up a second. I want your perspective on AI for the data industry in general.

Yeah. Well, let me caveat this by saying my perspective has not yet been proven, so this is my belief. But the way I think about it, and you hit the nail on the head, is that there are two big things that have held self-serve back. One is ambiguity in defining what self-serve is, which maybe we can put a pin in for later. The second one, which you just mentioned, is kind of the paradox of self-serve: the more access to data you give someone, the more questions they can ask back. And that is the challenge, how to prevent self-serve from multiplying the number of requests instead of managing them. I think it's been a problem for 30 years. Every BI tool has set out with a punchy tagline like ours, the world's first self-serve BI tool, and they pat themselves on the back and build this really nice, elegant UI. And then someone wants to be able to do native cohorting. Then somebody wants to be able to add in a date spread. Then somebody wants to be able to change the axis tick label color. Eventually you're this overly complicated airplane cockpit of a tool that's no longer self-serve. Data is inherently hard. It's a pinch point between a huge number of boundary cases in data quality and structure, and a huge number of use cases and UI challenges in presenting it.

My perspective, and again, I'm hopeful this is the case, is that we'll continue to widen that pinch point through technology. We've seen things like semantic layers push it farther; they've gone further on the efficiency plane. I firmly believe the next big step is AI for data, because it gives you the power to handle those use cases really elegantly. To give you an example: someone in Zenlytic wanted to change a line plot into a big-number chart. The big-number chart option was grayed out, and they didn't realize they had to remove the date column first. So they're thinking, okay, I guess I could go grok the docs and figure that out. Or you can just go into Zoe and say, make this a big-number chart. All of those steps are actually encapsulated in that sentence, as long as you have what turns out to be an absolute beast of a model that can encode the intent, without all that complexity. I don't know if you have a car; I moved to Florida, so I had to get a car, and I got a Tesla. It's super cool. It doesn't have a dashboard, right?
I make a BI tool, and I have a car with no dashboard. All it has is one big screen in the middle, and it shows the relevant stuff: when I'm turning, it puts the cameras on, things like that. It presents the relevant information at the right time. And they're actually working to offload as much as possible, right? The new Teslas have taken out the turn stalks and put that onto the screen. It's neat because it makes for a very streamlined experience, and I think LLMs can do that for complicated tools, not just in the data industry. Our example of that is the dashboard: we've actually stripped out all those crazy controls that you have to learn how to manipulate in a BI tool, all the axis tick label settings and things like that. It's a big box, and you ask Zoe: hey Zoe, can you make these first three into bars, put a horizontal line at zero, and give it a color scheme that looks like it's from the 1970s, whatever. And you can do all those things without having to know how to manipulate all those little edge cases. So, long story short, I think AI, and in particular LLMs, are going to be an effective tool for widening that pinch point across those edge cases.

Yeah. And every data team is evaluating what role AI is going to play in their organization. I've spoken to data engineers at companies of all sizes who are trying to figure out: for one, is this our charter or not? Or is there going to be another group under a chief AI officer that makes all the calls? Or is this something incremental for our data team: hey, we're using this data warehouse, it's adding a vector extension and vector search, is that our AI strategy? There's all this choice, which is great, and I think that's why everything's going to move so fast this year. Like you said, two years ago those datamon cards were impossible. Going forward, AI is going to be a huge game changer in data products as well, and you might find that if you don't have a good solution in place by 2025, people might start asking, hey, are we missing the boat here?

Yeah, for sure. I think everyone feels some anxiety around that. One interesting take that might alleviate some of that anxiety: I think an AI strategy is not going to be one giant boulder you drop into place. It's going to be finding a lot of small things that can improve the experience. And the good news is that a lot of that is going to get thrown over to the tool builders and the vendors. I think about that thing MotherDuck just added; it's just a little SQL fixer. When you write a query, and it's called Fix It, I think, if there's an error it'll find it and propose a solution, so you can click it instead of having to make sure you have the right syntax and fix your SQL yourself. It's just little stuff like that. I think we're going to find a lot of nice little things that are going to add up to a very different experience.
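To give a flavor of the kind of small win Ryan is describing, here is a minimal, hypothetical sketch of an assisted SQL fixer. It is not MotherDuck's actual implementation or API; call_llm() is again a placeholder for whatever model you have access to. The point is how narrow such a feature can be while still removing real friction.

    # Hypothetical sketch of a "propose a fix" helper; not MotherDuck's implementation.
    import sqlite3

    def call_llm(prompt: str) -> str:
        """Placeholder for an LLM call that returns a corrected SQL string."""
        raise NotImplementedError

    def run_with_fix_suggestion(conn: sqlite3.Connection, sql: str):
        """Run a query; on an error, ask the model for one proposed correction."""
        try:
            return conn.execute(sql).fetchall()
        except sqlite3.Error as err:
            prompt = (
                f"This SQL failed:\n{sql}\n\nError: {err}\n"
                "Return only a corrected version of the query."
            )
            suggestion = call_llm(prompt)
            # The user sees the suggestion and clicks to accept it; nothing runs silently.
            print(f"Query failed ({err}). Proposed fix:\n{suggestion}")
            return None

The pattern scales down nicely: the model never owns the workflow, it just turns an error message into a one-click suggestion.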
Yeah, I think like you said, the little victories here and there can snowball into a bigger effect. I love the example of what MotherDuck rolled out. That team did a good job of, rather than trying to boil the ocean with AI, to use your terminology, asking: what's a quick little thing we can add to our product that will delight users? Something relatively straightforward, and in some ways pretty limited in scope, but that maximizes the user's positive experience with the product. So I think teams will have to evaluate what that means for them, what they can roll out that's sort of a minimum viable product of AI-driven delight, and then just keep building from there.

I think that's a great strategy. Solutions for problems, right? I think one big thing we'll see in 2024 is this: 2023 was the year of wonder with this technology, and that led to all the fun prompt-writing guides and chat-with-your-PDF apps and the hallucination problems and whatever, people evaluating these tools, and a bunch of really hypey stuff. But I think a lot of that is calming down now, because those are tools that, just like building your semantic layer, start with the tech first and then ask, okay, what can we solve with this? I think the most effective tools are going to be like what MotherDuck is doing: there's a problem, people don't want to have to write and fix all the SQL themselves, so can we solve it using these new tools? The real winners are going to start with the problem and find a solution for it.

Yeah, definitely. Now, there's also the category of companies that have to move in certain ways: buy-in, investment, really understanding what's being developed, a multi-year roadmap, a lot of compliance and control they have to build into the process. Depending on the size of your company, these small incremental adoptions might not work because they don't have the right buy-in internally, in which case, even though you tried to be as nimble and agile as possible, it was still all for naught given the way things get rolled out. So what are your recommendations for companies that have to take a longer-term view on this?

Well, my first reaction is that I believe AI is here to stay. That maybe sounds like a silly thing to say, because it feels like it's so hot right now, but every few years there's something, right? Three years ago everyone was figuring out their ICO strategy or whatever; there's always something, and a lot of those things don't really sustain. People worry about how they're going to launch their own digital wallet, and then it turns out no one's using digital wallets five years later and they abandon the project. A lot of these technologies are just flashes in the pan, or they take several swings to really get off the ground.
I think we're seeing value delivered by AI at a scale that's an order, or orders, of magnitude higher than any of those false waves. So the first thing is: I would start now. I wouldn't hedge on it; I wouldn't sit around asking whether this is going to be important for us or not. I think that's a good first step. A good second step is to do your compliance research and understand what your targets are as a company. We work with companies with a large variety of appetites, from people who have their own on-prem enterprise GPT-4 contracts, through people using open-source models, to people using external APIs. So it's worthwhile having that conversation, and having it reasonably. It's a little bit scary to do right now because we're still testing the compliance, societal, and even legal boundaries of what these models can and can't do. Another sort of universal truth of these waves, though, is that things that seem like a stretch for your risk appetite now usually become normal in a few years' time. It feels a bit like when Amazon first launched and people were aghast that anyone would put a credit card number on the internet. So certainly don't be imprudent, but have a rational conversation about what you want to make sure you protect in terms of compliance.

Then, in general, a really good way to start, and I'm admittedly biased in this regard, is that I fall generally on the buy side of build versus buy, at least to get started. We have people who look at our embedded product and want to build this themselves, and I certainly celebrate it when they do, but it's tough tech to work with. It's deceptively hard. It's the kind of thing where you can go build a GPT-powered toy example in 15 minutes, at a very small scale, and you're like, I'm a genius, look at this, I'm making AI, I'm an AI engineer. And then you realize that one of the weak points of LLMs is that it's very difficult to make them work at scale, and very difficult to make them work predictably. The level of work to go from that toy example to something scalable and predictable is a very steep curve. So there's a case to be made for both, but, again admittedly biased, I fall on the side of starting sooner and buying: focus on the compliance aspects, align with vendors you trust, and buy from vendors who can show you they're taking the necessary steps and are compliant themselves. That's probably the fastest way to get started. And as someone who's been fiddling with this stuff for a year now, trust me.

Yeah, absolutely. And you've had great success actually implementing this, so it's great for people to get your take on it. The other thing I wanted to talk about: we've covered data modeling, AI, self-serve analytics, and even datamon cards, but one of the core things it always comes back to is how people will actually use your product, right? Whatever you're working on.
So data teams need to think the same way. We also talked about the soft skills required and things along those lines, but what are the best ways for data teams to actually model their data with the end user in mind?

Yeah, great question. There are two philosophies that are the most popular right now, I think. The first is some sort of star schema with a semantic layer, and this actually wasn't even possible a couple of years ago. Before dbt bought Transform, the original dbt semantic layer did not support table joins, so the star schema wasn't possible there. Now every modern semantic layer will do that. So that's one approach. The other approach is the conventional, classic one that predates the semantic layer evolution, which is the one big table, OBT. There are tradeoffs to both. You begin with the end in mind: you have to understand how users will consume that data, and that looks very different in those two cases. It also depends on what they feel comfortable with. When you're using a semantic layer, you're thinking in terms of metrics and dimensions. When you have one big table, you're thinking in terms of a spreadsheet; people basically have a giant CSV. Both are good, there are tradeoffs, and some people prefer one or the other. I'd say it's a slightly higher learning curve to go and choose those metrics and dimensions, but once you learn to think in those terms it becomes very intuitive and much more powerful for navigating the underlying data. And this comes back to how you define self-serve; there's a spectrum. At one end is static dashboard consumption; somewhere in the middle is using GUI explorer-type tools to point and click; and at the far end are citizen data scientists running experimentation and things like that. One big table is pretty good for getting someone to a dashboard, but it's not very good for taking someone to that second or third level. So again: what do you want to do with self-serve? If you think static or slightly sliceable reports are enough, one big table will be simple and fast. If you want to dig deeper, I'd go with a semantic layer and a star schema.

Excellent. Well, thank you so much for the guidance, Ryan, and for being generous with your insights for the audience. Where can people follow along with your work?

I'd say LinkedIn is probably best: just Ryan Janssen, honestly, you can't miss me. I'm on Twitter too, though I'm less active there; it's at Ryan Janssen. Or you can go to our website and hit the contact us button; it goes to me. It's zenlytic.com.

Excellent. Ryan Janssen, CEO of Zenlytic, business intelligence you can talk to, the first truly self-service BI platform, in your words. But I'm a believer, so we'll leave it at that. Thank you for joining today's episode of What's New in Data, and thank you to the audience for tuning in.

Thanks so much, John. Okay. Bye. Bye.
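As a closing illustration of the one-big-table versus star-schema tradeoff Ryan describes above, here is a small hypothetical sketch. The table and column names are invented for the example; what matters is the shape of the data each approach hands to the end user.

    # Hypothetical data shapes; all names are invented for illustration.

    # One big table (OBT): everything pre-joined into one wide, spreadsheet-like table.
    # Easy to drop onto a dashboard or export as a giant CSV, harder to drill beyond.
    OBT_SQL = """
    CREATE TABLE analytics.orders_obt AS
    SELECT o.order_id, o.created_at, o.amount_usd,
           c.customer_name, c.customer_segment,
           p.product_name, p.category
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    JOIN products  p ON p.product_id  = o.product_id
    """

    # Star schema plus semantic layer: a fact table and dimension tables, with the
    # joins declared once in the semantic layer and resolved live at query time.
    STAR_TABLES = {
        "fact_orders":   ["order_id", "customer_id", "product_id", "created_at", "amount_usd"],
        "dim_customers": ["customer_id", "customer_name", "customer_segment"],
        "dim_products":  ["product_id", "product_name", "category"],
    }
    DECLARED_JOINS = [
        ("fact_orders.customer_id", "dim_customers.customer_id"),
        ("fact_orders.product_id",  "dim_products.product_id"),
    ]

With the one big table, a user's ceiling is roughly whatever slices fit in that wide table; with the star schema, new questions are just new combinations of declared measures, dimensions, and joins.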

Chapter Markers

Innovations in AI for Business Intelligence
Data Modeling and Semantic Layer Integration
Implementing Self-Serve BI and AI
AI Impact on Self-Serve BI
AI Value Delivery and Data Modeling