Security Unfiltered

Safeguarding the Future Of AI Meets Cybersecurity With Erick Galinkin

June 11, 2024 Joe South Episode 157
Safeguarding the Future Of AI Meets Cybersecurity With Erick Galinkin
Security Unfiltered
More Info
Security Unfiltered
Safeguarding the Future Of AI Meets Cybersecurity With Erick Galinkin
Jun 11, 2024 Episode 157
Joe South

Send us a Text Message.

Curious about the real history of artificial intelligence and how it has woven itself into the fabric of modern life? Join us as Erick Galinkin returns to share his insights on the evolution of AI, from its early conceptual stages to its present-day applications like self-driving cars. We promise you'll walk away with a deep understanding of the various levels of autonomous driving and the enormous strides AI has made, surpassing even the most ambitious expectations of the past. This is not just a technical conversation; it's a philosophical journey questioning AI's origins and contemplating its future.

Discover the transformative role of massively parallel processing in AI, especially within computer vision. Learn how CUDA, initially designed for computer graphics, has become indispensable for deep learning by efficiently handling complex computations. We break down neural networks and activation functions, explaining how frameworks like TensorFlow and PyTorch leverage specialized hardware to achieve remarkable performance improvements. If you've ever wondered how deep learning mimics human neural behavior or how AI-specific hardware is optimized, this segment will be invaluable.

In the latter part of our episode, we tackle the intricate relationship between AI and cybersecurity. Hear about the challenges of training machine learning models to detect malware and the dual-use nature of AI models that can serve both defensive and offensive purposes. We shed light on the complexities of securing AI systems, emphasizing the need for specialized risk management strategies distinct from traditional cloud security. From tools like Garak to frameworks like Nemo Guardrails, we explore various solutions to secure large language models and ensure they operate safely within an organization. This episode will arm you with the knowledge to understand and mitigate the risks associated with deploying AI technologies in your own projects.

https://github.com/leondz/garak

https://github.com/nvidia/nemo-guardrails

Support the Show.

Affiliate Links:
NordVPN: https://go.nordvpn.net/aff_c?offer_id=15&aff_id=87753&url_id=902


Follow the Podcast on Social Media!
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
Patreon: https://www.patreon.com/SecurityUnfilteredPodcast
YouTube: https://www.youtube.com/@securityunfilteredpodcast
TikTok: Not today China! Not today

Security Unfiltered
Help us continue making great content for listeners everywhere.
Starting at $3/month
Support
Show Notes Transcript Chapter Markers

Send us a Text Message.

Curious about the real history of artificial intelligence and how it has woven itself into the fabric of modern life? Join us as Erick Galinkin returns to share his insights on the evolution of AI, from its early conceptual stages to its present-day applications like self-driving cars. We promise you'll walk away with a deep understanding of the various levels of autonomous driving and the enormous strides AI has made, surpassing even the most ambitious expectations of the past. This is not just a technical conversation; it's a philosophical journey questioning AI's origins and contemplating its future.

Discover the transformative role of massively parallel processing in AI, especially within computer vision. Learn how CUDA, initially designed for computer graphics, has become indispensable for deep learning by efficiently handling complex computations. We break down neural networks and activation functions, explaining how frameworks like TensorFlow and PyTorch leverage specialized hardware to achieve remarkable performance improvements. If you've ever wondered how deep learning mimics human neural behavior or how AI-specific hardware is optimized, this segment will be invaluable.

In the latter part of our episode, we tackle the intricate relationship between AI and cybersecurity. Hear about the challenges of training machine learning models to detect malware and the dual-use nature of AI models that can serve both defensive and offensive purposes. We shed light on the complexities of securing AI systems, emphasizing the need for specialized risk management strategies distinct from traditional cloud security. From tools like Garak to frameworks like Nemo Guardrails, we explore various solutions to secure large language models and ensure they operate safely within an organization. This episode will arm you with the knowledge to understand and mitigate the risks associated with deploying AI technologies in your own projects.

https://github.com/leondz/garak

https://github.com/nvidia/nemo-guardrails

Support the Show.

Affiliate Links:
NordVPN: https://go.nordvpn.net/aff_c?offer_id=15&aff_id=87753&url_id=902


Follow the Podcast on Social Media!
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
Patreon: https://www.patreon.com/SecurityUnfilteredPodcast
YouTube: https://www.youtube.com/@securityunfilteredpodcast
TikTok: Not today China! Not today

Speaker 1:

How's it going, eric? It's a real pleasure having you back on the podcast. You know, the first one kind of like cracked my mind open, so to speak, with AI. You know, I think when we were first talking, openai had come out with their version 3, right, and you know, now they're on version 4, with 5 quickly coming out. I think is what it is. It seems like ai is and it's like it's it's in its infancy. You know, in terms of technological evolution I would say it's like in its infancy, but the impact that it's having is growing significantly and it's rapidly changing. Would you agree with that?

Speaker 2:

Yes and no. So very happy to be back. I've seen that you've done a couple episodes on AI since I was first on, thrilled to see that. I think what I would disagree with is that AI is in its infancy, right. Like AI is a pretty old field of computer science. I mean we've had artificial intelligence in some form or fashion for 70 years, 60 years, depending on where you want to start counting. But we have seen this pretty explosive growth in large language models and applications of like transformer-based architectures, right. So, without getting too in the weeds, this particular type of neural network, this transformer-based architecture, has seen a ton of growth in computer vision, natural language processing, right, all of these things. So I think we've definitely seen a lot of growth and evolution and adoption. I think adoption is the big thing, right. Like we've had AI.

Speaker 2:

I mean you look at like anomaly detection, right, like good old-fashioned anomaly detection, all the way back to like PID controllers, right In like electrical engineering, because you know, can we use functionally as anomaly detectors, you know, can we use functionally as anomaly detectors. You know, those things we've had forever. It's the fact that it's so on the surface now, right, rather than living deep in a process where it's like yeah, I don't know, the anomaly detector told, accustomed to dealing with right it's. You give it text, the way that you would give a person, you give it an image, right, and I think that's kind of the big difference and the big evolution especially in the last, like three to five years.

Speaker 1:

Well, you know, I feel like when I say it's in its infancy, I'm thinking of, you know, seeing where this is going right. What's that end state? What's that end phase look like compared to right now? You know, and right now that, like, that end phase or that end state is going to make this look like an infant, it's, um, I, I feel like it could be that advanced. I guess that's where, um, you know, I was getting that idea, but absolutely I I've. I've actually read a lot, a lot of different opinions on, like, when it started and how it started and things like that. And you know, sometimes it's hard to even like, put that together right because we're so used to this. You know, I guess, ai advancement, uh, that it's hard for me in some cases to say, oh yeah, it started 70 years ago by this thing that, you know, is trivial today. So it's, it's it's hard to like that dot together, I guess.

Speaker 2:

Yeah, I think that's true, and this is where the science and philosophy start to meet. So I think two of the things you said make me feel that way, and, of course, the first is when did it start? Does it start all the way back with polynomial regression, right, where you have an algorithm for performing polynomial regression? Is that blemish to you? I don't remember Something you can look up, it's fine, right, but you can go back hundreds of years if you really want to. And then, on the other hand, right, like modern machine learning, do you start with computer vision, do you sort of acknowledge that, or do you start with vision? Do you start with Outlook set, or do you start with you know where do you start? I think that's where it starts to get into the philosophy.

Speaker 2:

And the other place where I think it starts to get into philosophy is like what is the end state? Because there are, I think, these varying schools of thought about okay, so we have this, you know, exponential growth, or this, you or this super linear growth, or whatever. Is that sustainable? And for how long Is there a period where it's going to level off? Because I mean, I've been doing this long enough to remember gosh back in 2012, when people were saying like, oh yeah, like computer vision is practically solved and we'll have full self-driving cars within five years. And here we are, in you know 2024 and you know, say what you will about Tesla, we can leave that whole conversation aside. But even even that isn't full system. Five self-driving Right, it's just advanced lane control and speed control.

Speaker 1:

Um isn't that considered to be like, uh, like I think I read it was like level 3.5 or something like that. I mean, I I'm not really even sure what the levels are. If, if you don't mind, would you be able to go over it real quick, like just a quick gloss over it yeah, so this is this is not my ultimate area of expertise, but essentially level one is like cruise control right.

Speaker 2:

Level two, uh, operates on two axes. So that is the adaptive cruise control right, like you know even my, you know, toyota corolla has that where it's like there's a car in front of you so we're going to slow down, there's a hospital in front of you, so we're going to slow down, even though cruise control is set to 55 or whatever. And then the lane management. So that's forward and back and left and right, that two axes when it says okay, I know where the lanes are and I'm going to keep you in the lanes right. And then level three and four are blurry to me. But level five is like no human intervention whatsoever, general purpose anywhere, right, can go anywhere, can do anything.

Speaker 1:

If.

Speaker 2:

I remember correctly, level four is that it is able to drive under any conditions within a restricted environment. So if you think about, like Waymo's robo-taxis in San Francisco, those are, I believe, level 4 systems, you know, but those also use, right, a lot more than computer vision. They're still using LiDAR, they're still using all of these other things, so it's not something that is nearly as advanced as we kind of hoped it would be right, these self-driving cars that use computer vision systems, they are still struggling with things, like you know. I mean, they struggle with object detection, right, like they aren't even exceptional at object detections and they're even worse at object detection when, like, it's raining a little bit or it's foggy, or you know it's late at night or there are, you know, streetlights are out or whatever. All of these things are just limitations of the system, which isn't to downplay how incredible it is that it even exists, right, like that's cool, that's amazing. But there are limitations to these systems and I think that if you look at the progress in computer vision, we did see a huge spike after, like AlexNet and all that, and then over the last several years, probably since the vision transformers were the last, I think like really big advancement, and if you have listeners who are deep into computer vision, they can write me nasty emails or LinkedIn messages and tell me how wrong I am, but those systems haven't really continued to grow at that rate.

Speaker 2:

There's all this huge explosion and then it's a S-curve right a it's a sigmoid function. It's which is something that we in our official intelligence should be really familiar with right, you have that flat line and then exponential growth and then a flat line right. So the question really is like, if it is an s-curve right and not exponential growth forever which I think is a tough gesture to justify, right, that's a tough assumption to make. The question really becomes where on that S-curve are we right? Are we toward the bottom or are we toward the top? And that's something that is hard to know until we really experience the state where it's like okay, yeah, this thing came out and it was trained on 10 times as much data with 400 times as much compute and it's only like 20% better, like okay, well, now I think we're hitting the point where we're at the top of that curve.

Speaker 1:

Yeah, you know it's. It's like that saying right is hindsight is 2020. Right, right. You don't realize how easy things were, right, For instance. You know, when you're a kid, when you're a teenager, all you want to do is become an adult, right, you get to drive, you can travel, you can do whatever you want, you make your own schedule, whatever it might be right, and you become an adult and you're like man. I wish I was a kid again, you know, like, like I didn't have a mortgage, I didn't have to worry about paying a car note or anything like that. You know, student loans were not a thing. It wasn't even a thought in my mind.

Speaker 2:

Trade places with my seven-year-old. I would do it in a heartbeat. You know you can work, you can do all the things. You can. Go to the grocery store, I'll just like hang out, go to school, have recess, play on my ipad. That's yeah, I'm about that. I would absolutely yeah, yeah, absolutely.

Speaker 1:

You know, predicting where ai is going is extremely difficult. What I, you know, what I, what I, I guess, what I still kind of fail to understand. I mean, this podcast, right, with this question alone, could probably go for days, right, but what I, what I failed to understand is the evolution of the AI chip, right? So that's really interesting to me because it's like I guess one of my questions would be at what point in time did someone determine whether it's NVIDIA, apple, whoever right? I don't know who determined it, but at what point did someone determine, okay, we need a dedicated chip, a dedicated hardware module just for AI, dedicated hardware module just for AI. And oh, we're going to build this whole you know ecosystem like NVIDIA, for instance, as CPUs that are just for AI. You know, I'm not trying to. This podcast isn't sponsored by NVIDIA in any way. You only work at NVIDIA. But again, your thoughts aren't NVIDIA's thoughts or anything like that. We're just speculating on information that's out there.

Speaker 2:

Yeah, so I mean okay. So I'll say the original usage of GPUs for deep learning, right? Because, again, when we talk about AI, like AI is this whole big thing, but I think it has become shorthand for deep learning, right? Because, again, when we talk about AI, like AI is this whole big thing, but I think it has become shorthand for deep learning, right? Obviously, ai includes things like expert systems, traditional statistical machine learning.

Speaker 2:

you know, you know like logic programming, which is my personal favorite because it's a form of generative AI that's based on good, old-fashioned AI, not the modern deep learning paradigm.

Speaker 1:

I think it's really cool. I think it's neat.

Speaker 2:

But anyway, I'll go back to this. You go back and you see that CUDA, the Compute Unified and Fuzz architecture on NVIDIA GPUs, has always been this massively parallel processing framework. You have your CPUs, your general-purpose CPUs, you have your RISC CPUs, you have all of these different CPU architectures and instruction sets and whatever, and what CUDA kind of did was look at computer graphics as this massively parallel computation and specifically computer vision has always dealt with matrices, right, it deals with sensors, really, right, the RGB channels, and then you have your image which is like n, by, and so you have an n by c, right, your length, width and then the depth of the tensor. Those are how you produce computer graphics. It turns out that in deep learning you're dealing with the same sort of thing, where you have these matrices, right, you're doing these massively parallel matrix multiplications across these different channels and then balancing them forward.

Speaker 2:

Gtus were just kind of well-suited to the task it is, honestly, producing computer graphics and doing deep learning is under the hood. The math behind it is a very similar type of math, right, it's massively parallel matrix multiplication. You know, the element of CUDA led to CUDNN, which all kinds of frameworks have been, to CUDNN, which all kinds of frameworks have been built on top of that. Right, you have your Torch and your Kerrix and your TensorFlow and Theano if you're an old head or Chainer, if you've been doing this for a hundred years and more modern frameworks like Jaxx, right, and they just take advantage of that hardware capitalism. So that's how these sort of chips have become so ubiquitous in AI.

Speaker 2:

And then, when it comes to the more AI-specific hardware, right, this high performance computing that's not really a consumer-grade graphics processing unit that happens to be good at this thing, but are more specifically designed for it. It's just kind of looking at okay, well, where are those differences between the parallel matrix multiplications you're doing in computer graphics and the ones that you're doing in deep learning? And then how can we enhance the chip to be better at deep learning than it is at computer graphics? Right, they'll still render images, right? If you want to plug into your you know Cortex A6000's display port, you can. I don't know why you would do that, but you certainly could. It's just that it's going to be more performant tasks like deep learning, because that's kind of what it's designed for.

Speaker 1:

That's interesting. So can we touch on what deep learning is? Right? Because, from an uneducated perspective of AI right, like you know, I'm getting my PhD in securing satellites and preparing them for quantum communications right, and it's so very close to ai that I actually have to read the white papers on ai and all this other stuff, right. So I'm still under the impression that deep learning is like hook it up to the internet and let it go learn for a couple years, right. What is? What is deep learning?

Speaker 2:

so at level right, like the simplest version of deep learning is. You think about like, put yourself in like I don't know 10th grade, right, go all the way back to grade school and remember you're, y equals nx plus b. So you've got your y and you've got your x and you're trying to find the slope of the y-intercept. Now you have to do that with matrices, right, where, instead of finding just the number or the fraction for the slope and the number or the fraction for the y-intercept, you have to do this with a matrix, right? And for anybody who doesn't know what a matrix is, it's like think about an exercise sheet, right. You have a certain number of cells and each one of those cells has the weights in it. What you're trying to do is take your collection of Xs and Ys and find the best and B, so that you have the least amount of error, so that if you plug in an arbitrary X, which is going to be a vector, since you're vacuuming the trustees, you're going to get a y that's close to it. And then the other thing that it does on top of that x plus b, it wraps it in a non-linearity. So non-linearity is sort of what it implies it's a function that isn't linear. So some common ones are like the value, which is just the max of the output, and zero. So if it's zero or less than zero, if it's not a positive output, then you get zero. Otherwise you get the output of the function. Your mx plus b Sigmoid is another one, which is basically an exponential function that maps it between zero and one. Right, there's a whole bunch of different non-linearities. If you google activation function, uh, you'll find thousands of them, right, like there's. There's a whole literature just on activation functions, uh, but at the end of the day, that is what kind of gives it this spiking. All of this is inspired by human biology, right? So the neuron activation on or off right is one or zero, and so you get that spiking through these activation functions. So that's a single neuron, right. It's that y equals mx plus b and a non-linearity on top of it.

Speaker 2:

In a neural network in deep learning, you have many of these neurons that are interconnected. What makes it deep is that you have multiple layers. So the input to the next layer of the network is the output of the previous layer and you basically stack these on top of each other and then get your output of the previous layer and you basically stack these on top of each other and then get your output. There's all kinds of fun stuff in there, right when, like you know, that describes that. It's a good old-fashioned multilayer perceptron or, like you know, dense neural layer, but there are transformer blocks, there's recurrent blocks, there's all kinds of fun stuff you can do inside of those neurons, but essentially, like it all boils down to that interprets language and provides responses based on the broad knowledge that it has.

Speaker 1:

Are there limitations to what you're able to teach it in terms of? I've already given it all the information that I have available, right? What's the next evolution? To teach it more, to make it smarter, right? What does that look like?

Speaker 2:

uh, if I had the answer to that question, I would, uh, I don't. I don't know if I would have time to be on a podcast at all. I wish I had the answer to that question, but that is. That's a real problem that we're running into right is like we've basically exhausted the world's greatest repository of knowledge. The people training these large-scale models, whether it's OpenAI or Anthropic or whoever, they've collected basically everything that's out there, and that is a huge limitation. Once you've trained it on all the stuff, how do you get it to be better? And there's some applications of synthetic data. Right, where you have, these models generate new data. That carries its something called mode collapse. So basically it starts. But let me back up what these models are really doing.

Speaker 2:

Right, whether they're the instruction-tuned GPT-4 that you're using with chat GPT or they're more of a base LLM, they're essentially doing next token prediction. And think of a token as a word. Right, it's not exactly, but close enough analogy. It's trying to predict the next word. So the trouble with synthetic data is that you give it this input, you get this output. That's synthetic and it's generating the next word by picking the most probable, subject to some stochasticity, some randomness. Right, it's picking the most probable next output. So if it starts generating, do you want to go to the next first value and be like store? Do you want to go to the mall? Do you want to go right somewhere you might want to go? And as you continue to train this thing on data that it generates, you're raising the probability because it's seeing these things more often. You're raising the probability of each one of these tokens and so, essentially, you're taking this beautiful heterogeneous probability distribution uh, this distribution over distributions and you're slowly collapsing it into a single point, right, where, if you keep outputting the most probable tokens and then training on the most probable tokens, it's just going to continue reinforcing itself until it can't do all that general stuff right? There's a new paper on this that came out a couple of years ago about training on synthetic data causes.

Speaker 2:

Catastrophic forgetting is what they call it, and you know this is something that, again, we take. Now for language, which is the hot thing right now. You know that's what I work in, right, it's like securing large language models. But if you look back at computer vision like again if you've been doing this a while you look at generative adversarial networks, gans. This was a well-known phenomenon in GANs. Right, this is something that people who trained GANs were super familiar with is you had to watch out for mode collapse because you're generating something, you're getting an error and then you're casting that back in as your next step in your training phase and if you're not careful, your gam will just collapse to this single best possible output and it loses its ability to generalize wow, that is really fascinating how it would basically self-reaffirm and push more importance on a certain topic or language or whatever it might be, and then it just works out all the other stuff from its system.

Speaker 1:

It's like, well, all of that stuff isn't important. I'm going to put all my processing power towards this thing, right. That seems to be really important right now. It seems like you know that could be really helpful for some use cases. You know, like thinking of like medicine, for instance. Or you know, engineering, a new type of structure, whatever it might be right, like you could think of a lot of critical services where reinforcement would benefit in some ways to focus it, you know, on a certain topic or a certain situation. So I never thought of how reinforcement of these things would force it to, you know, collapse in on that one topic. That's really interesting.

Speaker 2:

Yeah, it is really interesting and, like, synthetic data is not all bad. It's just that if you use too much of it, it can be a problem.

Speaker 2:

But it's also like super important to use synthetic data in certain cases where the data is rare, right, where you don't have a lot of data, because, again, these models are trained on huge, huge amounts of data. It's just like incongruent, hensible amounts of data. So if you're trying to train something in, say, let the cyber security go in, right? Uh, you want to train on logs? Okay, well, you have your logs, but what if you need more logs? Like, do you just wait? But what if you need more logs? Do you just wait a year to get more logs?

Speaker 2:

Well, maybe you can feed your logs into a thing that allows you to generate things that look like your logs and now you can say, okay, these are normal logs, this is what our system looks like. We're assuming we're uncompromised, which is a heavy assumption, but put that to the side, right? So this is what the system looks like under normal operation. Now you have enough data to start training your thing, to do that anomaly detection right, because maybe otherwise you don't it's easy to come across malicious traffic, right, you can generate basically infinite amounts of malicious stuff. You just pop it in a sandbox and dump the logs, right, but it's a lot harder to generate things that are both benign and look normal right. So not just benign and nobody's using it, but benign and people are interacting with the system.

Speaker 1:

Benign and people are interacting with the system. Do you think? I mean, this is obviously purely, you know, hypothetical. Do you think that cloud providers are doing that somewhere somehow? Because I mean, think of the vast amount of the logs, just the logs, not even talking about the data itself or anything else like that, the attacks that you know are launched at them and everything else like that. Right, you know, if you're training it on data like that, it wouldn't be very intelligent to not use that data and train an ai with it. Now you have you know in security terms. Now you have you know. Know in security terms. Now you have, you know, the greatest SIM that ever existed. Right, there's nothing that would compete with that. Like ever, do you think that that is a route that cloud providers could take?

Speaker 2:

Two things. One is is it a route that cloud providers could take? Maybe? So I'm extremely not a lawyer. I'm as much not a lawyer as you can possibly be, but you know, the agreements between the CSP and their customers would kind of predicate the level of access to logs that they have. Right, they might have access to infrastructure logs, but they may not have authorization to access to infrastructure logs. But they may not have authorization to access your NetFlow logs or your WAF logs.

Speaker 2:

They may not be able to see that legitimately. I'm sure that if somebody were determined to, they'd probably have the capability, probably have capabilities, but like that would be kind of I think that is the sort of initiative that would be hence from more by lawyers than by technical capability. Right, because the CSPs you think about Microsoft, google, amazon, the three big ones they're also companies that are training and putting out huge language models and doing really incredible research in artificial intelligence. Right, microsoft and Google in particular? Right, have been doing phenomenal AI research and security research for quite some time and, like I said, people occasionally cuckoo on Microsoft's security posture. Right, like they have gotten a lot better about being forthcoming with security information and publishing security research. You know, shout out to my Microsoft friends Now, would that lead to a sim to end all sims? Right, the one sim to rule them all? I think it depends on how optimistic you are about AI alone. Right, because you could use that for incredible you know sim rule development and you could definitely have a world-class sim based off of that data. But a lot of, like the really good sim rules, handwritten rules that are written by, you know, security experts right, they're things that are known to be bad, because one of the things with security data is that there's a lot of noise, there's a lot of weirdness and there's a lot of things that, to an ignorant party, might seem really important, that don't end up actually mattering and should not be used as a factor.

Speaker 2:

I'll give you a concrete example where, once upon a time, I was working with a team and we were trying to train a malware classifier. So we had this giant corpus of malware, huge corpus of malware, and we took all of these features, we featurized the thing and we trained this machine learning system. It was a tree-based architecture, a decision tree-based architecture. This is just a few years ago and it had this great performance. It did a great job detecting malicious samples False algebra wasn't too bad.

Speaker 2:

And then we looked at the components that were considered most important, determining whether or not something was malicious. Right, the most highly weighted components, kind of the top of the tree. And item number one was the compiler. It was a compiler. If it was compiled with Borland Delphi, it's malicious. I don't know if that's true. I don't know if that's right. Item number two the second most important feature. So if both of these things were true, it was malware. All the time was if it was compiled in Russian or Chinese. So the language of the XTUAL right and what's the mistake? As a malware analyst, those two things are functionally meaningless in determining whether or not something's malicious and it was just an artifact of where our samples came from, right. So you know, when you're optimizing for those things and you're doing it in a way that doesn't account for expert knowledge, where you don't have people who are reverse engineers, who are malware analysts.

Speaker 2:

Looking at this, you might think, if you're, you know, a lay person like well, look, the accuracy is good, the F1 score is good, like we're happy with this thing. And then you look at the features that matter to it and it's like you know, does it, does it import? Uh, wsoc2 right? Does it do? Networking is like the seventh most important feature on that list. It's like I don't know man. To me that is like one of the more important features for me to look at. Does it modify or break keys? These sorts of things are going to matter. Whether it's Chinese or Russian doesn't matter. These are the sorts of things that would make it hard to just take a bunch of data trade the world's best sin and call it a day. You still need that security expertise to write the good rules to correlate things.

Speaker 1:

That is man. That's really fascinating. I feel like we could literally talk on that for hours. You know, in security, like you said, the attributes what the malware is actually doing. What the code is actually doing is far more important than its origin. Once you figure out what it's doing and you kind of attribute okay, this is typically from China, this is typically from Russia, this is typically from America, right, that's a possibility as well, and I wonder what kind of result and I mean, this is again purely hypothetical, right I wonder what kind of result you would have if you trained a model on only written malware and then you had a separate one that was trained on other, you know other nation states that are playing in the space, and then you use that to somehow inform some other higher tier model that when it looks at a piece of malware, it dissects what it's doing, it tells you where it's from all that sort of stuff, rather than you know, because right now it's a manual process, right?

Speaker 1:

You throw it into a reverse engineering tool like ito pro, right, and you start going through it and God forbid, they compile, you know, different modules of the code with different languages and things like that, right, because it just extends your reverse engineering process significantly. But I wonder if that is a very viable thing. And then you could even use that model to write more malware. Right, you could use that model from the malicious side to to use it to create like stuxnet version 5. Right that that that is undetectable?

Speaker 2:

well, that's the. That's the thing with a lot of the machine learning systems, right, a lot of these, these ai systems is that as a side effect of training a model that's good at defense, you kind of get a model that's good at offense. Right, you can invert the output. That's something that we're seeing. You know. You look at more and more models and like one of the big things that people are talking about and concerned about is phishing, and I think that's valid about and concerned about is phishing, and I think that's valid.

Speaker 2:

The thing about phishing is I don't freak out about how LLMs enable phishing, because I have seen people fall for some of the most obvious phishing emails ever written, right. I've seen people respond to the literal. I am from IT. We're resetting passwords. Can you please email me the password? But I have seen that in my real life. Things like do LLMs enhance the ability to scale? Does it let attackers be lazier? For sure, but when you train a large language model that's designed to help anybody write an email, you know if I go to chat to you after this your perplexity or whatever and say, like, write an email to Joe Self thanking for having me as a guest on his podcast. Well, it's not as an ability to write a phishing email, because a well-written email is going to be a well-written email regardless. It's just a matter of whether you know you include it or not. Right, and so I mean that's why a lot of the user education around phishing, I think, is kind of dated right as I look for misspellings, look for you know these things. And I mean you know there are phishing dated right. It's like look for misspellings, look for you know these things right. And I mean you know there are phishing templates, right, you can go on whatever the you know current version of exploitin is breach forum or whatever, right, who cares? Go on one of those forums and there are phishing templates. Those are well-written, legible, you can have them in any language you want, great, good.

Speaker 2:

So I mean, does a large language model really increase the threat land state? Not necessarily. But does that mean that it's a dual-use technology that we need to be aware of? Yes, right, and so when you have coding assistants that can help you write any code, can those be used to be aware of? Yes, right, and so when you have coding assistants that can help you write any code, can those be used to write malicious code, probably.

Speaker 2:

And I think this gets into the big conversation around model alignment how do we put safety into models? How do we put safeguards into models? And I think that that's a good first layer. But I don't think that alignment in and of itself is a solvable problem. I don't think it's a tractable problem. Right To encode these things into the model weights.

Speaker 2:

At the end of the day, these are just aisles of linear algebra. They don't know anything, they don't think, they don't have values. Right, we can't necessarily encode that. But having systems around it that validate the input with respect to some policy and the output of the model with respect to some policy, I think that's where we can start making meaningful gains. Start making meaningful gains Because with these, these water's language models, one of the big things that a lot of people don't realize about them is a key difference between these AI systems and the old-fashioned computing systems is in the old-fashioned computing systems, we have a data plane and a control plane, right, and they're separated.

Speaker 2:

And so if you can do things like see something happening on the data plane and a control plane, right and they're separated, and so you can do things like see something happening on the data plane and say, okay, don't opt in this IP address anymore, right, you're not allowed to go there. And you can manage that at the control plane layer to affect what happens on the data plane. In LLMs, the data plane and the control plane are the same thing. There's only one place to put anyone. So your system instruction right, your thing that says, like you are an unbiased, friendly, helpful language model who does X, great, that goes into the exact same input space as your arbitrary user input. And there's very few mechanisms and certainly no provably mathematics, no provably secure way to differentiate them and to have the model treat them differently. It just treats it all as one big input. And those are the sorts of things that make the problem of securing these systems really hard. Yeah, you know, makes the problem of securing these systems really hard.

Speaker 1:

Yeah, so do you think that this is going to turn into a situation with cloud, like what cloud security is currently evolving into? Right, you have cloud security at such a scale now, right Across so many different domains, that it can't be handled by one person. Right, it can't be handled by, you know, one person per cloud. Even right, you really need to start having, like a cloud IAM team. You really need to have, like a cloud data security team. You know people that specialize in it. Do you think that that is also the route that AI will be going and, if so, what do you think those specialties would be?

Speaker 2:

So I don't think it will go the route that cloud is going and how it gets managed, I think depends on how the community right, like the broad security community and, even more broadly, the risk community, the broad security community and, even more broadly, the risk community. If we look bigger than InfoSec risks, a lot of AI risks. A lot of the trouble that you're going to get into with AI today is falling all the way to your PCer and your IR teams and those poor people who have to deal with the way since your PCER and your IR teams, and like those poor people who have to deal with the fact that your job said something kind of nasty right, or said something that wasn't truly part of your corporate policy, right. You have the Air Canada instance where it said you know something about like a refund policy, offered somebody a refund or whatever, and that wasn't their policy, and a court made them uphold what their model said right, like is that something that your PCER should be dealing with? Probably not. Probably not right.

Speaker 2:

There are certainly potential security issues, but what I've found in AI security, in the security of AI systems, right, rather than applications of AI to general security problems, is a lot of it just comes back to like AppSec. We have a peripheral elemented generation system, a RAT. It's a language model that talks to a database and maybe we did a bad job of seeing what should be in the database or we let the language model read anything in that database and it should have user base user. You should have role-based access controls but, like that's annoying to do. You have to make the user authenticate a lot and then you have to make it, you know, take a token and pass that token into the database. That's annoying, so we're not going to do that. Like, okay, well, that's, your problem is like it's bad app set practices right. A lot of it comes back to that and I think you know if AI's surety ends up being AI risk management and it's that big broad umbrella where, like, bias becomes part of it or, you know, hallucination or whatever general risk, then then yeah, there's no way one person could possibly do that.

Speaker 2:

If you're publishing, you know, if you're a model provider, if you're hosting a model that end users interact with in ways you can't predict, but if we stop it down to security, you know, I think it's maybe not quite as complex, because a lot of this is just architecture and it's like you have this thing that takes English language input and generates SQL queries. It's like, okay, well, maybe you should do some output validation. Don't send arbitrary SQL to your database. We know this. You know a lot of these things are familiar to us. Things are familiar to us, but when they get abstracted away through this large language model, we kind of have trouble seeing the trees for the forest, as it were. We get stuck on just seeing okay, this is some big thing that has an English language interface and talks to our database. Who's talking to it? Do we pass their user role through as a habitability to read the whole database? Who's talking to it? Do we pass their user role through? Does it have the ability to read the whole database? And you can't really trust the language models output.

Speaker 2:

I think it goes back to that sort of stuff right, taint tracking, right Stuff that we've noted about, which is like if you talk to an AI system and it has the capability to go retrieve data from the internet and then process that data from the internet, then it shouldn't be allowed to take its output from the internet and interact with privileged systems, right? You wouldn't let a random internet server do things on your systems right. These are the sort of things that we need to be cognizant of, so I don't know that it's necessarily going to get to that point, but we have our baby steps, which is like good app, stack practices, and then we can start getting into the broader. How do we do AI risk management for organizations? And I think that's a much bigger and harder problem that can't be solved by a single person.

Speaker 2:

So, it's, it's uh it's really fascinating.

Speaker 1:

You know this entire new, new area right that just kind of, you know, spurred up into everyone's, you know, forefront of their mind right now. Do you know roughly how large the lom is like, how large the biggest lM is like, how large the biggest LLM is right now?

Speaker 2:

In terms of like data size.

Speaker 1:

I mean, I don't even know how to.

Speaker 2:

So so most people believe that the largest LLMs on the planet are either TP4 or Clon 3, opus Like those seem to be the largest, most capable models, or Cloud 3.

Speaker 1:

Opus.

Speaker 2:

Those seem to be the largest, most capable models. Cloud 3 Opus is, by the way, really cool. I am not paid by Anthropic, I just think it's cool. It's the first model in a while. That's really surprised me with its capabilities. But they have not neither Anthropic or OpenAI has said how many parameters it has, how many tokens it's been trained on, right, All of that is information that they haven't released to the public. So we have some pretty large open weight models, right, some people say open source, but if the data is not public I'll get off my set of thoughts, right, these open weight models. And so we do have some pretty large ones out there, right? I mean, croc is quite a big one. Mixtrol is a pretty big mixture of experts, but all that sounds good. You have a couple of like 340 billion parameter models out there. I think there are even some meaningfully larger ones that have been other weights released, right?

Speaker 1:

Yeah, some meaningfully larger ones that have been other ways for least right, yeah, so that's I ask because the parameters is a hard thing, I think, for people to understand the size of the data. Do you know what that roughly would translate into in like petabytes or even exabytes at this point?

Speaker 2:

So I think the parameters of the model and the training data are kind of distinct things, right? So the parameters of the model basically just say how much RAM it's going to use, right? How much VRAM or how much whatever, right? If you want to run it on CPU because you're an insane person, that's fine, I've had it if you want a 40 billion parameter model on your CPU with, you know, a terabyte of RAM, I guess.

Speaker 2:

Right, but you know these models. The parameter count basis is how much compute is used, right, how much RAM and GPU if you run GPU, cpu, whatever, right, the training data. On the other hand, there's a paper from a couple of years ago, the Chinchilla paper on Google, that actually suggests that there's like an optimal number of tokens to train a model on, based on the number of parameters. Right, like most of these are following that chinchilla model. Um, because you don't want a model that's over trained or under trained, right? I mean, hypothetically, you can train a 500 billion parameter model on no data. It's not going to be any good, but you could do it right now. Looking at that, I mean we're talking gosh. I mean, if you have like eight trillion tokens, yeah, you're talking like several petabytes of data it's an insane amount of data, right?

Speaker 1:

where do you even start with securing it? What's what's? I feel like it like it's. You know, obvious what the best practices are for normal security, even cloud security, right? Are those best practices translating into LLMs or is there something else you know that you're starting to potentially even adjust the best practices for?

Speaker 2:

Yeah. So I would say that, like your security best practices, all your security best practices still yeah. So I would say that your security best practices, all your security best practices, still hold right, and you still want your defense in death. You still want your role-based access controls. You still want to make sure you're doing input validation, all this stuff Right, no question about that. Now, I think, if we separate the open AIs and anthropics and Googles and metas of the world and let them worry about how they protect their training data and model ways from being manipulated, set that to the side for the moment and think about somebody whose organization is trying to deploy an LLM powered application. Right, they want to have some customer facing chatbot, or they want to integrate some natural language interface into their product.

Speaker 2:

Whatever I will say, you inadvertently set me up to plug the two open source projects I work on Right. One of those is Garak. So Garak is like Metasploit for LLMs, right? So it's got a collection of a bunch of different known ways to get library models to say bad things and you can kind of pick and choose what you're concerned about, right? So if you're concerned that your chatbot is full of jinkies and tossing output right, you're concerned about that sort of thing Then you can send these prompts it's like a command line interface, right, and it'll do all the work for you to take these prompts, fang them against your model and give you a report.

Speaker 2:

So then you can get kind of a sense of like, okay, well, where are the risks associated with this, right? Or if you're concerned about, you know, package hallucinations, which is a thing that is of a non-zero amount of concern to people, right, like with all the NPM and PyPy package hijacking that, you see, right, if this thing makes up a package name and you, okay, sure, import this thing and somebody has seen that hallucination and put that package into PyPy and it's a malicious package, well, that fessed off as bad as bad day for you. So you know, you can test for that sort of stuff. There are a couple of different ways to deal with that. If you're framing or fine-tuning your own language model, you can try and fine-tune that out or props tune it out or modify your system prompt or whatever right Like that might work.

Speaker 2:

There's also these LLM firewalls right, I think Cloud Flarecoat went out. There's a bunch of companies that have them. Right, I think Cloud Flarecoat went out. There's a bunch of companies that have them right, and those will kind of look for and detect known quote-unquote jailbreaks, right, so like known attacks against these models, the sort of stuff that we have in Dirac. Right, that's fine and that stuff is done. Some of it's quite good. The other thing is the other open source project that I work on, which is called Nemo Guardrails, and so Guardrails is more of like a dialogue management framework, so you can write these really specific configs or these you know, essentially write through3L files.

Speaker 2:

But this colang is the language, this abstraction that we deal with. It's like a domain-specific language that lets you say, okay, well, you should only talk about, you know, scheduling, vehicle maintenance, and if you're asked about anything other than vehicle maintenance, don't do it Right. And it provides this external check on the model so that when somebody asks a question that's not about vehicle maintenance, you're not depending on the model, which has no separation between the control plane and the data plane. To parse that. It's kind of saying, okay, well, this isn't about that, so we're going to tell the model to do something, right, we're going to tell the model to do something. We're going to get in between you and the model and you can make those as complicated as you want.

Speaker 2:

There's a bunch of other stuff in there. It's a really powerful framework. But these are the sorts of things that you can also put around your models, whether they are something you're deploying yourself, right, whether you've downloaded Lama 3 and you're using that as your basis and you're self-hosting it in some cloud service provider, or if you're using, like, an open AI or a Coheer or Anthropic on the backend. Yeah, there are a couple of different ways to both test these things Garag, pyrit, p-y-r-i-t, because Microsoft is another framework like this. And then on the other side, you have your LLM firewalls. You have your Nemo guardrails. You have there are alternatives to Harpreils, I think they're all called guardrails in some form or fashion. Amazon has their own guardrails, but you know, nemo guardrails is the one that I know best because it's the one that I work on. So there are definitely ways to start thinking about the AI-specific stuff.

Speaker 2:

It just gets back to like, what do you care about? What is what is the? What is the threat model? You have to figure out that, right? So you don't care that your customer service chat sometimes, like, says awful things to people. Uh, you don't care that it's toxic, right, like it's funny, it's like the dick's last resort chat, greatbot, great, cool, fine. But maybe your legal team doesn't want to expose themselves to the liability that comes with having that happen, and in that case, you have ways to test it and ways to try and mitigate it. And, of course, there's no fancy for you, right, like a sufficiently determined attacker will always find a bypass. These things aren't foolproof, but they can certainly make it a lot better.

Speaker 1:

Yeah, you can make it, you know, reasonably difficult to where it's like okay, if and or when they still get in, it's like I couldn't have done any more. You know what do you expect, right? Well, you know, eric, honestly, you know same thing as the first time around, right? I feel like this conversation would go another two, three hours, right? So I guess that just means I got to have you back on a couple more times.

Speaker 2:

It's always a pleasure, as always.

Speaker 1:

Yeah, it's such a fascinating topic and of course you know it's at the forefront of everyone's minds now. You know, I think my wife uses ChatJPT to write more emails than she actually writes at this point. But you know, eric, before I let you go, how about you tell my audience? You know where they? Before I let you go, how about you tell my audience? You know where they could find you if they wanted to reach out to you, and maybe even what those open source projects are that you're working on?

Speaker 2:

Yeah, sure, so I am on almost no social media. You can feel free to find me on LinkedIn. I'm the only Eric Galenkin, if assuming my name is in show notes, you can just copy and paste that and it's probably me wherever you find me. Yeah, feel free to drop me a message there. You can email me Whatever. I'm an easy person to find. It's a perk of being one of one. And then those two open source projects are both on GitHub. Please feel free to check them out and feel free to join the direct Discord. That is actually a place where you can find me and pester me directly, and that's githubcom slash, nvidia. Slash nemo, n-e-m-o. Dash guardrails, all one word. And then githubcom, slash. It's currently under liangdz, hasn't been moved to an nvidia repo yet. Slash, garak, g-a-r-a-k. Yeah, by all means, feel free to reach out. Love to chat.

Speaker 1:

Awesome. Well, thanks, eric, and I hope everyone listening enjoyed this episode. Bye everyone.

The Evolution of Artificial Intelligence
Massively Parallel Processing in Computer Vision
Machine Learning in Cybersecurity
AI Security Challenges and Solutions
Securing Language Models