What's New In Data

Turning Data into Actionable Insights with Bubble's Head of Data Elena Dyachkova

June 14, 2024
Striim
Ever wondered how to turn data into actionable insights? Join us as we sit down with Elena Dyachkova, the head of data at Bubble, who has an impressive background leading data teams at industry giants like Peloton and Spring Health. Elena walks us through her inspiring journey from economics to product analytics, shedding light on the critical role data plays in decision-making. She shares how simple analytics methods can be just as impactful as complex models, making this a must-listen for anyone looking to improve their analytical skills.

Data accuracy and reliability are paramount, especially when working with business applications like Stripe. Elena provides a deep dive into the challenges data teams face, from API changes to schema updates, and stresses the importance of proactive monitoring and observability. We discuss how to build strong heuristics and set realistic expectations with stakeholders to ensure seamless data flows. This segment is packed with practical advice for data professionals looking to navigate the complexities of modern data environments.

Continuous learning is at the heart of effective product analytics. Elena reflects on the evolution of education in this field, pointing out the gap that once existed and how diverse courses on business metrics, growth strategy, and experimentation have filled it. We explore the significance of data collection, data structure, and collaboration with engineers, all essential for robust product analysis. Elena also discusses the importance of maintaining a mindset geared toward iterative testing and learning, helping you avoid the dreaded analysis paralysis. Don't miss her references to key figures and literature that have influenced her journey.

Follow Elena on:

  1. LinkedIn - https://www.linkedin.com/in/edyachkova/
  2. X - @ElenaRunsNYC
  3. Substack Blog - Dramatic Analyst
  4. Uplimit Course - Product Analytics

What's New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What's New In Data hosts industry practitioners to discuss the latest trends, common patterns in real-world data architectures, and analytics success stories.



Transcript

Hello, everybody. Thank you for tuning into today's episode of What's New in Data. I'm really excited about our guest today. I recently signed up for one of her courses, and she's a great thought leader in the data industry: Elena Dyachkova, Head of Data at Bubble, formerly at Peloton and Spring Health. Elena, how are you doing today?

I'm doing well, thank you. I'm very excited to be on your podcast.

Of course, likewise excited about our conversation today. Elena, first tell the listeners a bit about yourself.

Yeah, definitely. As John mentioned, I'm currently the Head of Data at Bubble. Generally, I'm a big fan of product analytics; I think that's still my strongest area of expertise, and I built product analytics teams, processes, and culture pretty much from zero at my previous two companies, Peloton and Spring Health. My general background is actually in economics, which I think lends itself well to product analytics, because so much of it is about decision making, rational and irrational; microeconomics and behavioral economics sit very close to product analytics, I think. But I started my career in sports: I worked in track and field for the first six or seven years of my career, full time, completely unrelated to data. To this day I'm a huge fan of track and field, and that's another topic I could go on about for hours. This year is the Olympics in Paris, so I'm already counting down the days to the trials.

Oh yeah, that's one of the fun things on your Twitter. I like following you both for your data insights and for random tidbits on track and field, which is always fun. It's a sport I don't usually follow, so it's one of those fun niche sports I get to keep track of just by following your data insights. So I wanted to ask you a question. You mentioned you have an economics background, and you recently taught a course on data, which I thought was a great course. One of the main areas you focus on is product analytics, so I'd love to hear your take on how data teams should approach product analytics.

Yeah, it's a very good question. As with anything in data, the answer is probably "it depends": on the size of the company and on the product itself, because the ROI of product analytics really depends on the stage and maturity of the product, the user base, and the nature of the product, whether it's product-led growth or sales-led growth. So it would really vary. But generally, what is product analytics? Product analytics is the ability of product folks to weave data and insights into their decision making, helping them connect the dots between what's happening with the product and its users and what's happening with business outcomes. That's how I see it: enabling folks to make better decisions. You don't necessarily have to be called a product analyst to do it; you can be a data analyst who works closely with the product team and focuses on product data. Semantics aside, as long as you have some data on how end users are engaging with your product, that's the entry point to product analytics.
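As a concrete illustration of that entry point, here is a minimal, hypothetical sketch of what a product-usage event record might look like; the event name, fields, and properties are invented for illustration, not any real product's tracking plan:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProductEvent:
    """A minimal product-usage event: who did what, when, in what context."""
    user_id: str
    event_name: str          # e.g. "project_created" -- verb-based, snake_case
    occurred_at: datetime
    properties: dict = field(default_factory=dict)  # event-specific context

# A hypothetical event a web app might emit when a user completes a key action:
event = ProductEvent(
    user_id="u_12345",
    event_name="project_created",
    occurred_at=datetime.now(timezone.utc),
    properties={"plan": "free", "source": "onboarding_checklist"},
)
```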
There's always something interesting to glean from that data, but how much time, effort, and sophistication to put into it depends on your sample size and on how important it is to the business model. One fun thing about product analytics, which I think is reflected in my course curriculum as well, is that it's one of the areas where you can 80/20 for a pretty long time: you can extract a lot of actionable, powerful insight with fairly simple methods. There's a lot you can do with just SQL queries and looking at correlations before you need complex data science and machine learning models, which come in more at the stage where you're fine-tuning the product. So it's a pretty broad answer. Maybe the last thing I'd note is that product managers are usually the first product analysts, trying to make sense of what they should work on next in the roadmap. Depending on the company and the skill set of its product managers, they can get pretty far without involving an analyst. And especially in the era of AI and automation, where ChatGPT can automate away a lot of PRD writing and the more tedious parts of a product manager's work, the ability to embrace analytical frameworks and concepts and understand the data a little better is a huge superpower.
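To make that 80/20 point concrete, a first correlation pass of the kind Elena describes can be a few lines of pandas over data pulled with a plain SQL query; the table and column names below are invented:

```python
import pandas as pd

# Hypothetical per-user feature-usage and retention data, e.g. pulled
# from the warehouse with a plain SQL query into a DataFrame.
usage = pd.DataFrame({
    "user_id":       [1, 2, 3, 4, 5, 6],
    "projects_made": [0, 3, 1, 5, 2, 8],
    "invites_sent":  [0, 1, 0, 4, 1, 6],
    "retained_d30":  [0, 1, 0, 1, 1, 1],  # 1 = still active at day 30
})

# Which behaviors correlate most with 30-day retention?
corr = usage.drop(columns="user_id").corr()["retained_d30"].sort_values(ascending=False)
print(corr)
```

No models, no pipelines: just an aggregate and a correlation matrix, which is often enough to decide what to investigate next.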
Amazing. I'll drill into one area you mentioned, and then we'll get more into product analytics, since I'm super interested in that. You just mentioned using ChatGPT to automate tedious tasks for product managers, and even for data teams to some extent. This has been a recurring theme on What's New in Data recently: data practitioners, data leaders, data educators, thought leaders, everyone's talking about how they're using ChatGPT. Not to replace the data engineer or the function of a product manager, but to take away some of the tedious parts that are time consuming and error prone. I'd love to get your take on where you see generative AI, the category ChatGPT belongs to, helping data teams and data product managers move faster.

Yeah, that's a good question. I definitely see the product management side of it becoming much more efficient and fast. Even though I'm not officially a data product manager, I still have to write design docs, RFCs, and things like that. There's an app called ChatPRD that Claire Vo created; I think she's the CTO or Chief Product Officer at LaunchDarkly at the moment. I tried it recently to write a design doc for a data initiative, and it worked great. It wasn't perfect, I still had to go in and refine it, but it took away all of my mulling over which sections to include and how to organize my information. It was so much faster to just give it a prompt: hey, we're trying to create a database for this user research CRM that my team needs to put together, here's the context, and have it spit out a draft. I was able to quickly edit it and send it off. I think it really helps with documentation, summaries, all of that.

ChatPRD. And is it domain specific to data product managers, or general across all of tech?

General across all of tech. It helped me even though my document wasn't strictly a PRD; PRD stands for product requirements document, or something along those lines. Mine was more of a design doc, but with the same general purpose: summarize what we're building and why, what problem it's solving, and what's out of scope. So it was close enough to still be pretty useful.

When it comes to data-specific things, I'm constantly pushing myself to be more intentional about incorporating it, because I'm a bit of an AI hesitator; I'm afraid of it taking my job. But I push myself, because it's important to understand its limitations and where it's useful, so I can coach my team and find ways to be more efficient. I think coding help is where it really shines. It wouldn't write whole queries for me, because given the data quality and semantics involved, it would need to know a lot about my database and business context to do that correctly. But take regex: normally I'd have to go to some web tool or read documentation, and ChatGPT is great at regex, which usually doesn't need a ton of context. Or Python: my Python isn't great. I can do a lot with pandas and the data science and ML side of things, but less so with vanilla Python. If I need to write a function that calls an API and handles pagination, I can now do it very easily, especially if I tell it which API, so it can draw on the documentation and get it right. And that's great, because some folks on my team are more junior and have never done Python; it can be hard to get started, and now ChatGPT, or something like Hex Magic, can be an assistant that helps them get going.

It is a bit of a band-aid, though. If I research how to write an API call myself on Stack Overflow and go through the trial and error, I'll actually learn it, and next time I'll be able to do it myself. If I rely on ChatGPT, it becomes a crutch and I may never learn it; I'll always just ask, and it will always do it for me. So there's a trade-off: I should probably only offload things that I don't particularly care to learn myself. That's what I think about for more junior folks, and for myself too: am I just using this as a crutch, such that I'll never get better at certain aspects of the job?
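The paginated API helper Elena mentions might look like the following minimal sketch; the endpoint shape, cursor parameter, and response fields are hypothetical stand-ins, loosely modeled on the cursor-pagination pattern many billing APIs use:

```python
import requests

def fetch_all(url: str, api_key: str) -> list[dict]:
    """Fetch every record from a hypothetical cursor-paginated JSON API."""
    records, cursor = [], None
    while True:
        params = {"limit": 100}
        if cursor:
            params["starting_after"] = cursor   # resume after the last record
        resp = requests.get(
            url,
            params=params,
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        records.extend(page["data"])
        if not page.get("has_more"):            # no more pages to fetch
            return records
        cursor = page["data"][-1]["id"]
```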
That's a great way to think about it, actually. You have these tedious tasks that are maybe one-off or ad hoc, where you don't want to gain expertise in the API spec of a tool you only use for an infrequent job. On the other hand, you don't want to go to ChatGPT for something that should be part of your own foundational knowledge, or an area where you need to go from superficial knowledge to expertise. For the data engineers listening, I think that's a great way to break down when to use ChatGPT or generative AI to assist with a task, versus when to really get into the weeds yourself and avoid having generative AI do the work for you.

Yeah. I'll give a couple of examples of tools where it's already integrated into the product. One example is Stripe Sigma. Stripe is a billing vendor; our company uses it, for example, to charge customers for their subscriptions. And Stripe data is always a pain if you're trying to reconcile your internal data with it, because the numbers in your data warehouse will never exactly match what's in the Stripe UI. Stripe has this feature, Sigma, where you can query Stripe data right in the UI, which is pretty helpful, and it has an AI assistant that will generate the SQL query for you. Honestly, I find that pretty dangerous, because it's the same problem: it doesn't have enough business context, and it can give results that are flat-out wrong. If I don't know that, it's dangerous. If I gave that tool to a non-technical stakeholder and they just started asking the AI questions and trusting the numbers it spits out, that's scary to me. For myself it's great, because I can see the logic of how Stripe calculates certain metrics and understand it better. So that's one interesting interaction I've had with it recently.

Another recent AI interaction: Google published a beta, I think it's only available in the US, somewhere in Google Labs (I can find the link for you later so you can share it), where you upload a data set, give it a prompt, and it generates a Google Colab notebook in Python that does everything: data cleaning, data analysis. I was itching to test it with a Spotify data set. I was interviewing at Spotify last year, and they have this data science take-home assignment they gave everybody, at least in 2021 and 2022; no matter what data role you were interviewing for, they gave you this gigantic 100-megabyte data set of playlist data with a very vague prompt: hey, playlists are very important for Spotify, analyze this data and figure out which product recommendations to make. So I tested Google's new AI data analysis on that data set, and it failed miserably. I was like, okay, it's not taking my job just yet. I don't want to bash it, because it's obviously in beta and they're trying to improve it, but 80 percent of the notebook it spat out was "group by this field", error, this field doesn't exist, let's try again; fifteen cells doing the same thing,
trying to figure out what a column was called: tracks, number of tracks, total tracks, across five different columns. And at the end it gave me a conclusion that wasn't particularly helpful for solving the problem. So I failed that take-home at Spotify, and the AI failed it as well. Neither of us got hired.

Well, if the AI failed it too, then you have a good excuse. And coming back to the Stripe data: that's something I work with heavily too, and getting numbers to match between the source application and your data warehouse is an always-evolving problem. Even when you think you've got it right, two or three weeks later someone might raise their hand and say, wait, this doesn't look exactly right, or, hey, we haven't seen any new data here for the last few days. Particularly with Stripe it can be problematic. I've done some things to solve it there with our Striim Stripe reader, and internally with our data engineering team, validation is this constant effort. It's almost like whack-a-mole, with unforeseen things: API changes, schema changes, a flood in some data center somewhere that took a service down. All types of things are going to happen. So how can data teams be proactive and monitor this, so that it isn't internal stakeholders telling you the data looks wrong, but your data team knowing first?

That's a million-dollar question; I think we're still in the process of figuring it out. We've had a lot of issues with Stripe data, mostly things we have no control over: issues with the Fivetran integration, issues with the Stitch integration. We've also had issues in our own data model, where we weren't accounting for some edge case or some discount, or an ops team doing a one-off thing manually for a customer, and then we realize we didn't account for that possibility. So it's definitely constant. I think the main thing that ultimately solves it is creating good heuristics over time around what amounts of variance in key metrics are expected versus unexpected. But that takes a lot of time, because you need to build subject-matter expertise on the usual daily movements in the different revenue evolutions and what events drive them. Part of it is also that your product's Stripe integration itself can break: you change something in how checkout is implemented, and suddenly payments aren't going through. I don't think Stripe natively provides great observability for engineers to monitor that type of thing, so some of it is about leveraging events to see whether you actually have a bug in the product, and whether that's what's breaking your reporting. It's a very end-to-end beast; you have to build heuristics and monitoring for everything. Is it broken in the product? Is my ETL broken, just not ingesting something? That part is probably the easiest to build observability around: number of rows updated, rows ingested and extracted, and so on. And then your actual metrics and their expected daily deltas.
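The "rows ingested plus expected daily deltas" heuristic Elena describes could be sketched like this; the window, threshold, and numbers are invented, and a production setup would more likely live in a data-observability tool than in a hand-rolled script:

```python
import pandas as pd

def flag_anomalies(daily: pd.Series, window: int = 28, n_sigmas: float = 3.0) -> pd.Series:
    """Flag days whose value deviates from the trailing mean by more than
    n_sigmas trailing standard deviations. `daily` is indexed by date,
    e.g. rows ingested per day or a daily revenue delta."""
    mean = daily.shift(1).rolling(window).mean()  # shift(1) excludes today
    std = daily.shift(1).rolling(window).std()
    return (daily - mean).abs() > n_sigmas * std

# Hypothetical usage: rows loaded per day from a billing-data sync.
idx = pd.date_range("2024-05-01", periods=40, freq="D")
rows_per_day = pd.Series(10_000, index=idx) + pd.Series(range(40), index=idx)
rows_per_day.iloc[-1] = 1_200                 # simulate a broken sync
print(flag_anomalies(rows_per_day).tail(3))   # the last day flags True
```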
I always feel like the biggest myth is: hey, from the business applications, whether that's Stripe or Salesforce or internal customer-facing applications with a backend database, we're just going to magically bring that data into our warehouse, the warehouse will do all the fun stuff and generate the data models, and we'll have these pristine reports that are perfect all the time. I've never seen that actually happen with data teams, because the engineering is always evolving and the source applications are always changing. Like you mentioned, there's discounting logic; what happens if there's a dispute over a charge, and how do we represent that in a data model? So data teams have to be very flexible, and they also have to set that expectation with internal stakeholders. People don't love to hear that things are constantly fluctuating, but the reality is that they are, and data teams are simply a mirror of that change. So I think you're right that having the right observability in place is critical.

Coming back to product analytics: what are some of the common metrics that product analytics teams should be looking at?

That's a great question. Going back to the premise that one of the main value props of product analytics is connecting product changes and user behaviors to business outcomes: the metrics that matter in product analytics are the ones that ladder up to revenue. Whatever metrics they are, they should be leading indicators of revenue, or profit, or GMV, whatever the actual business metric is, depending on the business model of the company. So it again depends on the product model and the business model, but generally I'm a big fan of metrics trees. You start with your business metric and decompose it into its immediate inputs, which is usually just arithmetic. For example, in B2B SaaS your main metric is usually some variation of recurring revenue, MRR or ARR, which breaks down into new revenue, churned revenue, and your revenue evolutions, expansion and contraction. Then for each of those, you keep decomposing down the tree.

So take new revenue. New revenue is not something the product team owns in a silo. The marketing team contributes by helping attract more users, or the right type of user; the sales team, if it's a sales-driven motion, by finding and closing leads more effectively; and product, because the product experience is what converts new website visitors or new app downloads into paying users.
At this level, new revenue and churned revenue are still not purely product metrics, because they're co-owned by multiple departments in the company: support, success, and others beyond just product, marketing, and sales. So this is where you take the step toward department-specific metrics. For new revenue specifically: what are the key levers the product team owns that help create more new-customer revenue? In this case it would be something along the lines of converting a new website visitor into a paying user, so some flavor of a signup-completion or onboarding metric, where the user gets primed to actually experience the value prop of your product, and then some flavor of an activation or aha-moment metric: what percent of users do something that proves the value of the product to them? It's essentially a funnel for new revenue.

The revenue evolutions are a little more complicated, because they depend on the business model. Are users paying you for usage, for seats, for features? And what are the typical, expected repeat-usage patterns of your product: is it something they use daily, as with a lot of B2B SaaS, or something they use once a year, like TurboTax? You frame those metrics as leading indicators for your revenue evolutions. For churned revenue, you'd look at some repeat-usage metric, which depends on how frequently the user is expected to encounter the problem the product solves. For expansion and contraction, it depends on how you monetize. If you're selling seats, you'd track product metrics around collaboration: how well a new user expands the account by bringing more seats on board, how well they're actually using the multi-seat model, and what the leading indicators are that somebody might need another seat. Here it gets a little less universal and more specific to your monetization and the problem your product solves.

That's the top-line layer of the metrics. From there, you have to get more opinionated, with metrics that are more specific to product features. For repeat engagement: which specific features are driving it, and which are most correlated with it? The same for activation: what do you consider your activation metric to be? Some threshold of usage? A combination of using multiple features in a given period of time? You have to get more specific, and metrics at that level also tend to be less evergreen: as the product experience evolves and you add or remove features, they may shift as well.
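As a worked example of the top layer of such a metrics tree, here is the MRR decomposition in code; the figures are invented:

```python
# MRR_t = MRR_{t-1} + new + expansion - churned - contraction
# Hypothetical month-over-month inputs, each co-owned by different teams:
mrr_last_month = 100_000
new_revenue    = 12_000   # marketing + sales + product: new paying users
expansion      = 5_000    # e.g. seat adds, plan upgrades
churned        = 7_000    # cancelled subscriptions
contraction    = 2_000    # downgrades, seat removals

mrr = mrr_last_month + new_revenue + expansion - churned - contraction
print(f"MRR this month: ${mrr:,}")          # -> MRR this month: $108,000

# Net revenue retention ignores new business, isolating the "evolutions":
nrr = (mrr_last_month + expansion - churned - contraction) / mrr_last_month
print(f"NRR: {nrr:.0%}")                    # -> NRR: 96%
```

Each input then decomposes further down the tree into the department-specific and feature-specific leading indicators Elena describes.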
That was a great summary of a very complex topic. The main point is that it really depends on your product's industry, whether you're B2B or B2C, and whether the product is used frequently or infrequently. If you're Zillow, your users mainly come in when they want to buy a house, or when they're just browsing for fun; you mentioned TurboTax, which is annual, when people do taxes, or maybe semi-annual when they're looking up tax records. From my perspective, having worked on a data streaming product, I look at trial conversion and aha-moment metrics: hey, are you successfully moving data? That's our indicator of whether this person is going to raise their hand and ask to pay for the product, and then some sales metrics come into the picture as well. Whereas with a B2C product, you expect everyone to either keep engaging under an ad-based monetization strategy or, with something like Spotify, to put in their credit card number and pay a ten-to-fifteen-dollar monthly subscription. So first it requires understanding your business deeply, then the actual user interactions and frequency of usage, and then the monetization strategy.

You get into a lot of these specifics in your course, which is super great. I definitely recommend it for everyone; I'm signing up some people from my team to take it. Tell us a bit about your course.

Yeah, definitely. I'm generally a huge fan of learning, to the point where sometimes I have to stop myself; at any given time I'm enrolled in a course. Right now I'm doing the Reforge course on business metrics, which is a lot of fun, and I've taken a bunch of these courses over the years. When I started product analytics at Peloton, I didn't know anything about product analytics and almost nothing about product management, so the way I learned was by taking Reforge courses on product management, because there was no product analytics course at that time. More recently I've taken a bunch of more technical courses on Uplimit: statistics, experimentation, causal inference. I've really enjoyed them as a student, and as I was taking them, I kept reflecting: I've had to take so many of these courses to get well rounded across the product skills and technical skills that together make me successful as a product analyst. What type of learning experience would I have wanted if I could go back to my first days at Peloton in 2018, to get me where I am now quicker and more effectively? Reflecting on that, I felt there wasn't really a course that combined the product sense and product thinking usually taught to product managers, or to folks trying to become product managers, with the specific technical applications that take those product skills further and make them more applicable to a data person.
With Reforge, for example, I've taken courses on growth strategy and on experimentation, and they give a very good high-level framework for how a product manager would think about it, but they stop short of: here's a data set, let's actually calculate this metric, here's what statistical significance is, and actually equip me as a data person to take it all the way. And if I went to a data course, it was the opposite: it would hand me a Python notebook and expect me to already know some Python, or give me a toy data set and teach SQL or Python on something that isn't even business-specific. So I was thinking about how to bridge that gap.

My course is a four-week course on a platform called Uplimit, and I currently run it about twice a year; the next one starts at the end of September. I framed it to bridge that gap without going too deep into the technical specifics, with the premise that folks who aren't super technical, who are maybe constrained to spreadsheets and very beginner SQL, can still take the course and benefit from it fully, while folks who are very technical can do bonus assignments to practice their coding but will mostly benefit from the subject-matter expertise. I go into metrics in more detail, basically the spiel I gave answering the previous question but with more examples and infrastructure. I also go into an area I think is generally not well covered, which is data collection and data structure: to be able to do the analysis, you have to be very practical as a product analyst about how you want your data to arrive and how you collaborate with engineers and product managers on it. So I cover the technical stack, the tooling, the software tools that are out there, how to pick the right stack for yourself, how to name your events, and how to work with product managers and engineers to define them. And I go into experimentation, which is obviously a huge part of product analytics as well, once a company hits a volume where it becomes feasible.

What is that volume? It depends on who you ask. Ronny Kohavi, one of the thought leaders in this area, wrote the book often called the Bible of A/B testing, Trustworthy Online Controlled Experiments; I recommend everybody read it, it's a great book. He was asked this same question on a podcast with Lenny, and I think his answer was that to be very comfortable doing it, you'd want hundreds of thousands of users. But I think you can start drawing real benefit from it if you're in the thousands. If you're below a thousand weekly users or weekly visitors, it becomes very difficult just based on the sample size: you'd either have to operate with low confidence or wait a very, very long time for a test to become statistically significant.
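To see why sample size dominates here, this is a minimal sketch of the standard two-proportion sample-size calculation, using only the Python standard library; the baseline and lift figures are invented:

```python
from statistics import NormalDist

def users_per_arm(p_base: float, p_test: float,
                  alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per variant to detect p_base -> p_test
    with a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    var = p_base * (1 - p_base) + p_test * (1 - p_test)
    return int((z_a + z_b) ** 2 * var / (p_base - p_test) ** 2) + 1

# Detecting a 5% -> 6% conversion lift needs roughly 8,200 users per arm:
print(users_per_arm(0.05, 0.06))
# At 1,000 weekly visitors split 50/50, that's on the order of 16 weeks.
```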
So sometimes doing observational work, just monitoring trends over time and complementing it with user research, can give you conviction comparable to running a test at a lower significance level, I think. But the mindset around experimentation is something that's very important for product teams and marketing teams to have, because otherwise it's very easy to fall into the pattern of over-analysis, or analysis paralysis, where for every single proposed product change you can talk about risks, raise questions, do round after round of user research, and go in circles being afraid of it, because there are so many risks. But as Claire Vo says, and I'm referencing Claire for the second time in this podcast because I'm a huge fan: fast beats right, generally. Just putting something out there, whether it's a solid A/B test or simply a release where you watch the data, is usually much more helpful than extended discussions of "let's do more analysis, let's do more user research." The deep analysis should probably be reserved for things that are very, very hard to undo, like changing pricing: hard to experiment with, and a mistake has huge implications. So the mindset of putting something in front of users incrementally, getting some learning from it, and then iterating is, I think, very valuable.

Absolutely. Elena, where can people follow along with your work?

I'm on Twitter, or X, unfortunately still way too much; my handle is @ElenaRunsNYC. I have a blog on Substack called Dramatic Analyst; you can just go to dramaticanalyst.com. And I have my course on Uplimit: if you go to Uplimit and search for Product Analytics, that's my course.

Excellent. Elena Dyachkova, Head of Data at Bubble, thank you so much for joining today's episode of What's New in Data. I learned a lot from you, we'll continue to follow your thought leadership in this area, and I look forward to connecting with you again in the future. And thank you to everyone for tuning in.

Chapter Markers

Product Analytics and ChatGPT Integration
Challenges in Data Monitoring and Analytics
Business Metrics and Product Models
Bridging the Gap in Data Analysis