What's Up with Tech?

Transforming Digital Security: Dazz on AI-Driven Remediation, Cloud Integration, and Industry Collaboration

Evan Kirstel

Interested in being a guest? Email us at admin@evankirstel.com

Unlock the future of cybersecurity with our special guest, Tomer, the co-founder and CTO of Dazz. Discover how AI and cloud technology are transforming the landscape of digital security. Tomer introduces Dazz, a platform setting new standards for unified remediation, enabling teams to address security vulnerabilities more efficiently. You'll learn how AI is not just a buzzword but a revolutionary tool that automates and streamlines processes, making handling vast quantities of data and vulnerabilities more scalable than ever before. Whether you're part of a large organization or a smaller team, this episode will provide actionable insights on managing and mitigating risks effectively.

Explore the strategic use of AI alongside traditional algorithms in solving complex cybersecurity problems. Tomer shares his expertise on when AI is the right tool for the job and highlights impressive applications like autonomous remediation cycles. We also discuss the critical importance of collaboration and trust between security and engineering teams. Celebrate the innovative approaches brought by young talent in the industry and look forward to future opportunities for continual learning and networking in cybersecurity. This episode is a must-listen for anyone looking to stay ahead in the ever-evolving field of digital security.

Support the show

More at https://linktr.ee/EvanKirstel

Speaker 1:

Hey everyone, diving into the world of AI, cloud, and cybersecurity today with a true expert and innovator from Dazz, Tomer. How are you?

Speaker 2:

Good, thanks for having me. How are you, Evan?

Speaker 1:

I'm doing well. Thanks for being here. Really hot topics, so we're going to dive right in with some introductions. Maybe introduce yourself and a little bit of the backstory behind Dazz.

Speaker 2:

All right. So I'm Tomer, co-founder and CTO at Dazz. What we do at Dazz is unified remediation: we're building a platform to help teams actually fix security issues. It's a pretty major topic, and obviously we're using a lot of AI behind the scenes to do this, which is one of the topics we wanted to chat about. Prior to that, I was running the Microsoft Security Response Center in Israel, I was co-founder and CTO of Armis, and I was leading research for a company named Adallom, so I've been around the cybersecurity block for quite some time. This time around, I think we're fortunate to be at the right moment, as technology shifts and the market starts to understand this is a huge issue, one we're finally able to solve with technology. We're growing the company, we're growing the business, and we're trying to build the right technology that helps the industry fix this, finally.

Speaker 1:

Well, wouldn't that be something? And speaking of buzz, AI, cloud, all the buzzwords are there, but where do cybersecurity teams fall between those two pillars? And what are they facing there, psychologically, skill-wise, from a day-to-day practical perspective? What are you seeing?

Speaker 2:

So it's a revolution, right? What we're seeing is completely new technology that was in development for quite some time, but I think what happened in the last year is that, all of a sudden, people became aware of how powerful this technology really is. And it impacts everything; it does not only impact cybersecurity. In our case, it's interesting because it gives us a lot of opportunities, but also a lot of challenges.

Speaker 2:

I think a lot of people are immediately thinking about the challenges. We have this knee-jerk reaction of, oh my God, this is a completely new technology, what kind of new threats are going to evolve out of it? And there are good reasons to think about that, but I think the threat models remain the same. The cybersecurity industry is mature enough to understand that new technologies come up all the time, whether it's cloud, AI, DevOps, or, I don't know, space technology, whatever the next thing is going to be. But the concepts remain pretty similar: we want to understand the attack surface. We want to understand the threat model. We want to understand how we create boundaries and controls around it. We want to know what kinds of things we can prevent, what kinds of things we can monitor, and how we react to all these pieces of information.

Speaker 2:

That model remains pretty much the same across a lot of different specific areas in cybersecurity, whether it's network or endpoint or cloud or AI, and of course there's a ton of nuance. I don't want to oversimplify anything, but AI is yet another pillar of technology that we have to explore; we have to understand what kinds of threats are there, and we have to learn about it. What's interesting to me about AI, though, is that it's also a groundbreaking technology that allows us to automate, simplify, and even replace some of the processes that we used to have. And again, it's not just from a cybersecurity lens; we can think about how AI can impact every single facet of business, even our day-to-day lives. In cybersecurity in particular, I think the opportunity is huge: to change how we process data, to take the many lenses we have into the environment, and all of a sudden to use machines to rationalize around it and automate some of these processes we've been doing.

Speaker 1:

Well, that's super exciting, and you've been a huge proponent of fighting AI with AI. But what exactly does that entail? How does it translate into the day-to-day?

Speaker 2:

So it goes back to the problem of scale. When we talk to cybersecurity organizations, we have different customers: some of them may have tens of millions of potential vulnerabilities and risks, while some of the smaller ones may have only tens of thousands, but they're a much smaller company, maybe a team of one person in cybersecurity who all of a sudden needs to look at all of that. If you're thinking about it, what's the right technology to 10x or 200x the existing process they've been doing? And the process again repeats itself: we want to understand what kinds of vulnerabilities we have. How do we prioritize them? How do we triage them? How do we work with the business to actually fix them? How do we find who needs to fix them? How do we verify that these vulnerabilities are eventually solved?

Speaker 2:

And that process seems repetitive, but in reality it requires a lot of depth. You've got to understand what the environment looks like. You've got to understand how the organization operates. You've got to know a lot of details. And apparently, these kinds of problems really fit the model of AI.

Speaker 2:

If we have some kind of structured way to capture all of that data, then even if it's tens of millions of records, we can analyze it very quickly. We can run AI algorithms on top of it. We can use generative AI to produce human-readable advice and to find the person on the other side of the organization who needs to click that button or change that configuration file; we can tell you exactly what you need to do and why. All of a sudden, that creates a much simpler process and allows us to scale. Now, I think that scalability is really important, because the volume of security data these teams are handling is only getting bigger over time. And yes, AI has a big effect on this: as adversaries pick up these kinds of technologies, they scale up too. So we're seeing a lot more automated phishing campaigns.

Speaker 2:

We're seeing a lot more exploits being developed. We're seeing a lot more breaches. Adversaries are trying to scale up their infrastructure, just like we do. And as defenders, what we're trying to figure out is: now that we're at the scale of tens of millions, or expecting a scale of hundreds of millions, how do we react very quickly? We can't really hire 100 more analysts every time. It's more a matter of how we can use technology to scale, and what I'm seeing is that AI really provides that opportunity to cybersecurity teams. This is why we incorporate so much of these technologies into the platform.
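To make that concrete, here's a minimal sketch in Python of the kind of triage cycle Tomer describes. Every name, field, and scoring rule below is an invented illustration, not Dazz's actual data model or algorithm:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One potential vulnerability pulled in from a scanner."""
    id: str
    asset: str
    cvss: float            # base severity score
    exploited: bool        # known exploitation in the wild
    internet_facing: bool
    owner: str = "security-team"
    status: str = "open"

def risk_score(f: Finding) -> float:
    # Toy prioritization: severity, boosted by exploitability and exposure.
    return f.cvss + (3.0 if f.exploited else 0.0) + (2.0 if f.internet_facing else 0.0)

def triage(findings: list[Finding], owners: dict[str, str]) -> list[Finding]:
    """Deduplicate, rank by risk, and route each finding to a likely owner."""
    unique = {(f.asset, f.id): f for f in findings}.values()
    ranked = sorted(unique, key=risk_score, reverse=True)
    for f in ranked:
        f.owner = owners.get(f.asset, f.owner)  # fall back to the security team
    return ranked

def verify(f: Finding, still_detected: bool) -> None:
    # The cycle only closes when the next scan no longer sees the issue.
    f.status = "open" if still_detected else "resolved"
```

The point of the sketch is the shape of the loop (discover, prioritize, route, fix, verify), which is the part Tomer argues AI can help scale.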

Speaker 1:

That's a fantastic approach. And talking about those security teams leaning into AI usage, where are they in terms of trusting the tools and the methodologies? And what does it mean for zero trust in this whole new world?

Speaker 2:

Because with any kind of new technology, especially if you've been playing around with some of these models, you know that this is not 100% accurate technology. It requires a lot of tweaking, and it requires a lot of different guardrails to be put in place to really incorporate the technology in production and at scale. But the example I like to give is autonomous vehicles. I think this is a technology that requires a lot of testing, especially because it can immediately impact lives if some of these algorithms fail. But in reality, in testing, what we're seeing is that autonomous vehicles are actually more accurate than human drivers, which is kind of interesting because, again, the initial reaction people typically have is: well, it's going to be super scary to let a machine drive a car at 65 miles an hour while I'm inside; maybe it's going to crash if something bad happens. In reality, I think the technology is mature enough to let us build the right set of guardrails and understand when we can trust it and when we shouldn't. And this is part of what we've been doing at Dazz, playing with those thresholds of really being able to explain to the user: here's the result, and here are the caveats you have to keep in mind. In some cases there are none, and you can completely automate this.

Speaker 2:

But I think it requires some level of transparency from the technology, and it requires a lot of trust internally in the business. What we're seeing in that process between the security organization and the business is this: if I'm talking to you and I'm telling you, Evan, here are the five most important things you need to do today to reduce the most risk, and you're saying, well, that can't be right, I can't really trust it, then obviously I'm not in a great spot to automate that process. But if I explain to you why I think these are the top five things, and here's exactly what you need to do, here's a button for you to automate it, and we do this every single day over the next year.

Speaker 2:

Probably next year you're going to say, you know what, just click that button for me, because I already trust it, right? So it requires creating that level of trust between the technology and the teams, and between the teams and the business. It requires a complete shift in how we do certain things, and it will take time, for sure. I don't really expect people to automate vulnerability remediation tomorrow just because they've seen the live stream and they're excited about this and all of a sudden they're trying to automate everything. It takes a lot of time to create that level of trust, but I do think that three to five years from now, we're going to see that pattern in a lot more companies.
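One way to picture those thresholds in code: a tiny Python sketch of a confidence guardrail, where a suggested fix is auto-applied only above a high bar and otherwise routed to a human. Both thresholds and all names here are invented for illustration, not how Dazz actually gates automation:

```python
# Hypothetical guardrail: auto-apply a model-suggested fix only when its
# confidence clears a high bar; otherwise surface it for human review.
AUTO_APPLY_AT = 0.95   # assumed thresholds, tuned per organization
SUGGEST_AT = 0.60

def route_fix(confidence: float, explanation: str) -> str:
    if confidence >= AUTO_APPLY_AT:
        return f"auto-applied (and logged): {explanation}"
    if confidence >= SUGGEST_AT:
        return f"one-click suggestion for review: {explanation}"
    return "held back: confidence too low to surface"

print(route_fix(0.97, "bump lodash from 4.17.20 to 4.17.21"))
```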

Speaker 1:

Well, I hope so. Here's to that. And the flip side of that, I guess: where should AI absolutely not be used? What are your best practices around that?

Speaker 2:

That's a good question. I think there's a level of excitement around AI, and it's expected, right? It's cool technology, and we've seen this before. But in many cases, you have to find the right tool for the right problem. We're seeing certain things in our space that can be deterministically solved without AI, and yes, sometimes that's a little more challenging, but the result, to your point, is much more accurate.

Speaker 2:

So we're playing that game of: where do we need absolute certainty? Where can we create more transparency for the user? Where can we concretely identify the right solutions for certain problems? If something is solvable without fuzzy logic behind the scenes, whether that's AI or any other statistical model, then we'd rather solve it with concrete means, as long as it's possible for us to develop it. But if I'm comparing that, as the CTO, to the problem of processing a lot of completely unstructured information and generating human-readable output, AI is a great tool for that.

Speaker 2:

We're able to do a lot more than we could have done three years ago, thanks to the evolution of generative AI. But what we're seeing in our space is things like root cause analysis, for example. We have concrete algorithms that allow us to identify exactly the origin of certain vulnerabilities without going through a generative AI model. And when we did that testing, we realized that our existing algorithms are much more resilient to certain problems we see in customer environments than a generative-AI-based alternative. So it's really about picking the right tool for the right problem, and as a CTO, this is the kind of thing my team and I are doing all the time. We've got to be able to say: this technology is the right solution for this, this is actually not quite what we need, and sometimes it's a combination of all of them.
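As an illustration of what a deterministic root-cause algorithm can look like (a toy example, not Dazz's actual implementation): if the build system records where every artifact came from, tracing a vulnerability back to its origin is just a graph walk, with no model in the loop. The graph below is invented:

```python
# Toy root-cause walk: every edge is a recorded build fact, so the answer
# is exact and reproducible, unlike a generative model's guess.
BUILD_GRAPH = {
    "prod/service-a:v12": "registry/service-a@sha256:abc123",
    "registry/service-a@sha256:abc123": "Dockerfile line 7 (FROM node:14)",
    "Dockerfile line 7 (FROM node:14)": "repo/service-a commit 4f2e9d1",
}

def root_cause(artifact: str) -> list[str]:
    """Follow origin edges until we reach something with no recorded parent."""
    chain = [artifact]
    while chain[-1] in BUILD_GRAPH:
        chain.append(BUILD_GRAPH[chain[-1]])
    return chain

print(" <- ".join(root_cause("prod/service-a:v12")))
# Walks from the running service back to the commit that introduced the issue.
```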

Speaker 1:

Fantastic. You're also a big proponent of making your security and engineering teams BFFs. How's that working out?

Speaker 2:

Well, it's slow and steady. Again, it's a problem that repeats itself in a lot of organizations, and it goes back to that problem of trust, because trust can be between technology and people, and it can also be internal, between certain teams and other teams. I think the prime thing to keep in mind (and we could spend a whole hour on this) is, in essence, a lot of empathy. Because if you're thinking about why someone else on the other side of the organization is not doing what I asked them to do, typically they have good reasons. They have different motivations, different KPIs, different things they're incentivized to do, different roles. But if I were to ask anyone, how would you feel if I woke you up on a Saturday night because you have to deal with a critical incident, because we had a big breach in production and all of a sudden we need to get all of the teams into the office to deal with it? No one wants that. But we've got to understand that when we fix this vulnerability, when we try to eliminate the risk before a breach happens, that is what we're actually trying to eliminate. Now, we can't cry wolf all the time. We can't just dump 10 million vulnerabilities in a spreadsheet and tell you, hey, go ahead and fix this and let me know when you're done. It's not practical. So you've got to understand people's motivations. You've got to understand why it's so difficult for them to actually fix certain things. And I think there's an actual language barrier between the two teams in some cases.

Speaker 2:

So I'm working with a lot of security organizations on really rationalizing how we think about security. How do we think about the impact to the business? How do we drive that motivation internally? How do we understand what the other side is actually trying to do, and why it's so difficult for them to do what we recommend, and sort of bridge that gap? And again, it sometimes goes back to trust. In many cases, technology can help solve this, because if you're not using the right tools, then obviously it's going to be difficult for both sides. We've got to have very transparent, very clear processes: this is the data we have, this is what we're trying to do, here are the solutions, and really build that flywheel internally.

Speaker 1:

Oh, so well said. So let's look at the real world. Any particular organizations come to mind that have made impressive use of AI today that you could talk about?

Speaker 2:

I think every single company is starting to use AI to a certain extent, and I'm seeing a lot of really cool use cases. I can't really name all of them, but specifically in cybersecurity, we're seeing a lot of different cool solutions. We've seen customers incorporate our solution to analyze all of the data, understand the root causes, advise on actual fixes, and streamline that data directly to developers, sometimes incorporating additional layers of AI as well, so they can traverse all of the internal documentation and actually tell the developer: hey, here's what Dazz recommends you do. Here's why we need you to fix that particular vulnerability.

Speaker 2:

Here's a snippet of code that is really tailored to our organization, based on our internal documentation; maybe someone on your team actually wrote it, and we can generate it automatically for them. We're seeing some organizations starting to connect this directly into the CI/CD pipeline as well, so they can promote automated changes, run them through the automated test suite, and promote them into staging or even production in some cases. So they get this fully autonomous remediation cycle, which is really cool. And I think this is only the tip of the iceberg: if you ask me that question five years from now, I think I'm going to have a lot more examples.
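A minimal sketch of what that autonomous cycle might look like: apply the suggested patch, gate it on the test suite, and only then let it move toward staging. The commands below are stand-ins for whatever a real pipeline runs; this is not Dazz's actual integration:

```python
import subprocess

def autonomous_remediation(patch_file: str) -> str:
    """Apply a suggested fix, gate it on tests, then hand off to CD."""
    subprocess.run(["git", "apply", patch_file], check=True)
    tests = subprocess.run(["pytest", "-q"])  # the automated test suite
    if tests.returncode != 0:
        # Roll the patch back and route it to a human instead.
        subprocess.run(["git", "apply", "-R", patch_file], check=True)
        return "tests failed: fix rejected, routed to a human"
    subprocess.run(["git", "commit", "-am", "auto-remediation"], check=True)
    # Promotion to staging or production is left to the existing CD pipeline.
    return "fix committed: promoted through the normal deploy path"
```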

Speaker 1:

Oh, I'm sure. Well said. So, we've had a busy summer in the security space: DEF CON and Black Hat and other get-togethers. It's no longer a behind-the-scenes conversation; it's on the front page of the Wall Street Journal and the New York Times every day. What are you most intrigued with or excited about in this space over the next few weeks and months? What's on your mind?

Speaker 2:

Well, the community is growing, and all these events are super fun for me, with more opportunities to see old friends and get to know new ones. I think the industry is maturing and the community is getting bigger. People are more excited about this. It's great to see young talent in this space as well; we always need that fresh set of eyes to come look at the same problems. I'm really impressed by some of the younger people I get to know and some of the newer companies. So, yeah, it's always fun.

Speaker 1:

Well, thanks so much for joining and sharing your insights and updates. Really interesting stuff, and I hope we'll get a chance to meet at one of these many events coming up. Thanks so much, Tomer. I'd love that. Thank you, Evan. All right, thanks everyone.