Security Market Watch

AI and Human Interplay with Rich Owen

December 05, 2023 Josh Bruyning Season 1 Episode 21

Get ready to have your assumptions about cybersecurity turned on their head! Cybersecurity veteran and acclaimed author, Rich Owen, joins us in a riveting conversation exploring the surprising intersection of artificial intelligence (AI) and the human element in cybersecurity. Drawing from his 50 years of experience, including his pioneering work on the first local area network at the Mission Control Center at Johnson Space Center, Rich shares intriguing real-life anecdotes that solidify his belief: it's the people, not the technology, that are the linchpin of security.

In the back half of our chat, Rich delves into the untapped significance of human factors in cybersecurity. He astutely points out that the code we utilize is the product of human creators, complete with their own biases and potentially with a lack of regard for security. He navigates us through the complex maze of code reviews, code reusability, and the inherent risks involved. Rich's profound insights into these involved topics transform this discussion into an eye-opening episode you simply can't afford to skip!

Transcript

Speaker 1:

Rich Owen, thank you so much for dropping in to Security Market Watch. Everybody who is listening or watching: these drop-ins are a steady flow of information. We meet with security leaders and bring you information that you might find useful, and hopefully you'll find this pretty useful here in the next, let's say, five minutes or so. Rich, you and I have talked before, and you are... I call you the druid of cybersecurity. You've written so many books; you're one of the most prolific authors within cybersecurity, and it's just a pleasure to talk to you today.

Speaker 1:

We connected briefly before this call, and we talked about AI and the human element in security. I know this is a topic that's been on your mind, and it's on the minds of a lot of people, especially the security leaders watching. I'm pretty sure that you've thought about AI and you've thought about the human element in cybersecurity, but you see an interesting connection between those two. Could you give us your perspective on the connection between the human element in cybersecurity and how it relates to AI, and also vice versa, how AI relates to the human element in cybersecurity?

Speaker 2:

Sure, Josh, thank you very much. First, I should probably give you a little bit of background about myself. I'm a past international president of ISSA, a member of the Information Security Hall of Fame, and a 2021 (ISC)² Lifetime Achievement Award recipient. My latest book, The Alchemy of Information Protection, is being used in a university program to create a certificate for information security management. It's also being used by the EU Academy of Sciences in Ukraine; they're translating it to Russian. The whole focus of that book is based on my over 50 years of experience in cybersecurity, and the punchline of that book is: it's the human factors. Why is this so important? My background is very technical. I started off in the Army Security Agency. I've repaired a computer by replacing a single transistor. I've built and repaired computers based on chips. I've written my own operating system. In fact, I was on the team that designed and installed the first local area network in the Mission Control Center at Johnson Space Center. So I come from a very technical background, and the end result of all this is: it's not the technology, it's not the laws, it's the people. It's the human factor that's so important in security, especially with AI today. I'll share with you a couple of war stories. First of all, when I was on the team designing the network at the Mission Control Center, I also participated as a regional judge in the science fair, and one of the students had an inductive reasoning machine he was showing me. He said, give me some information and then ask it a question. So I gave it some information, then I asked if Socrates is alive, and it came back and said yes, 100%. I'll share how that relates at the end of the story.
But then another story: when I was in the Army, I was teaching electronics, and sometimes on my tests I would put the question "2 plus 2 equals", and many of my students would pull out their calculators and punch that in before they'd put the right answer down. I'll tie that into the end of the discussion as well. So, as I mentioned, I was building the network for mission operations at Johnson Space Center. They liked what I did, they ended up hiring me as an employee, and I ended up creating the security program for mission operations at Johnson Space Center. That sounds like a lot of gobbledygook, but if you saw the movie Apollo 13, the real guy in the white vest was my boss, one of the best leaders I ever had, and he was very fundamental in my decision about how important human factors are.

Speaker 2:

So what do I mean by human factors? First of all, the code that we use is created by people, with all their flaws and biases and everything else, and maybe a lack of understanding of, or caring about, security. Now, how do we know this code is good? Well, we have to trust the programmer, or other code reviews. In the early days, you'd have code reviews where one person would review somebody else's code. We got to the point where we wrote programs to review code, but then we evolved to the point of reusing code. Some of this code that we reuse, we don't know its pedigree; we don't know what's hidden inside of it. I point that out because that's the code we're using today. The other thing is, of course, people: the user. The user is still the number one attack vector for all the hackers. In fact, I'm a fellow with the Ponemon Institute, which tracks all this stuff, and the human factor is so real there because we're the target. Then you combine that with executive management, who may or may not care about information as an asset, and you have a lackluster support effort to attack this. So this is what I call the human element and why I'm concerned about it. That just gets me up to a couple of years ago.

Speaker 2:

Now that we're all talking about AI... well, I guess one of the war stories is from when I was there at NASA. I actually wrote a speed program, a race game, for my boys on the computer, and they could never beat me. You'd design a car by engine size, wheel size, transmission, and no matter what they did, they could never beat me. Well, I confessed later that I had a special code that I entered for my engine size, which always guaranteed that I had one more foot of distance for every integration step, whatever they put in. So I always beat them, but beat them barely. This is back to: how can you trust your program? You have to trust the programmer, to trust the integrity of your machine. Great, that leads me into AI.
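Rich's race-game cheat can be sketched in a few lines. The magic engine size, the toy physics, and the one-foot bonus per integration step are all invented for illustration; this is not his original code, just the shape of the trick:

```python
# Toy re-creation of the race-game anecdote: a hidden "cheat" constant
# in the simulation gives one player an extra foot per integration step.
CHEAT_ENGINE_SIZE = 427  # hypothetical magic value only the author knows

def race_distance(engine_size: int, steps: int) -> float:
    """Distance covered after `steps` integration steps (toy physics)."""
    speed_per_step = engine_size / 100.0   # simplistic model
    distance = speed_per_step * steps
    if engine_size == CHEAT_ENGINE_SIZE:   # the hidden backdoor
        distance += 1.0 * steps            # +1 foot per step
    return distance

# Two nearly identical cars: the cheater's nominally smaller engine wins anyway.
print(race_distance(427, 100))  # 527.0 - cheater
print(race_distance(428, 100))  # 428.0 - honest player, bigger engine
```

Nothing in the game's visible interface hints at the extra term, which is exactly why "trust the programmer" is the only defense.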

Speaker 2:

AI. There are two aspects of AI. There's the engine, the code, which is written by these flawed and imperfect people who may have their biases and so on. Don't get me wrong, I love AI and machine learning when it comes to a SIEM: having machine learning monitoring all the attacks coming through my firewall and automatically detecting that one particular URL, one particular IP address, is attacking all my different accounts. I automatically have code that shuts that IP address off.
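A minimal sketch of the kind of automated response Rich describes: count authentication failures per source IP and block any address that crosses a threshold. The log format, threshold, and addresses are made up for the example:

```python
from collections import Counter

BLOCK_THRESHOLD = 10  # hypothetical cutoff for failed logins

def ips_to_block(log_lines):
    """Return the set of source IPs with too many failed logins."""
    failures = Counter()
    for line in log_lines:
        status, ip = line.split()      # e.g. "FAIL 198.51.100.7"
        if status == "FAIL":
            failures[ip] += 1
    return {ip for ip, n in failures.items() if n >= BLOCK_THRESHOLD}

logs = ["FAIL 198.51.100.7"] * 12 + ["OK 192.0.2.10", "FAIL 192.0.2.10"]
print(ips_to_block(logs))  # {'198.51.100.7'}
```

A real SIEM would feed a rule like this from live firewall and auth logs, but the decision logic is the same: observed behavior in, block decision out.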

Speaker 2:

This is wonderful stuff, but what happens if the code that I'm using came from some foreign outlet, or some company that maybe wasn't totally trustworthy, and they have embedded in this code: stop all attacks, except any attack from one specific IP address? Well, now I'm screwed and I don't know it. I'm trusting that it's all good, but now I have a backdoor that's open to any hacker who knows about it. So the real key of AI is two elements: it's the engine, which I just mentioned, and it's the data that this engine performs on. It gives you results based on the data that's input to it. A really good example I like: you ask AI a religious question, and depending on whether you're using the Old Testament or the New Testament as your data, you may get an answer that's hellfire and damnation, or love and forgiveness.
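The hidden exception Rich warns about could look as innocuous as this. The allowlisted address and threshold are hypothetical; the point is how little code it takes to bury a backdoor inside an otherwise sensible rule:

```python
FAILED_LOGIN_THRESHOLD = 5            # hypothetical blocking cutoff
HIDDEN_ALLOWLIST = {"203.0.113.66"}   # the buried backdoor (example address)

def should_block(ip: str, failed_attempts: int) -> bool:
    """Block rule as shipped by the untrustworthy supplier."""
    if ip in HIDDEN_ALLOWLIST:        # silently never blocked
        return False
    return failed_attempts >= FAILED_LOGIN_THRESHOLD

print(should_block("198.51.100.7", 20))   # True  - ordinary attacker
print(should_block("203.0.113.66", 20))   # False - backdoor IP sails through
```

Every test you run against ordinary traffic passes, which is why pedigree and code review matter even for code that appears to work.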

Speaker 1:

Seems kind of fitting, though. This gets philosophical real quick.

Speaker 2:

Yeah, it does. The whole purpose of what I'm trying to explain here is: trust, but verify. Think. It goes back to that thing I said about my students when I was teaching electronics. "2 plus 2 equals 5" may be an acceptable response if my data is using a Julian database where 2 plus 2 equals 5, and I'm saying, I accept whatever you tell me. The interesting thing here is that I actually asked an AI engine, I said, how can I trust that what you tell me is true? And I actually liked its response.

Speaker 2:

It said, you need to be able to verify what we tell you, which means you need to be able to look at the algorithm, and you need to look at the decisions it made. It reminds me of an attorney who used AI to create a brief, and the judge threw the brief out because, as it happens, all the case law it referenced was bogus. So you need to be able to trust this. It's the integrity of the engine, and it's the integrity of your data. And then you complicate all this with the cloud. Where is your engine running? Where is your data from? Is it your data? You're throwing your data into the cloud, oh good, so now somebody else has all of your data, unless you can guarantee that the source in the cloud will protect it. So the whole purpose of what I like to talk about with AI is back to Reagan: trust, but verify.

Speaker 1:

Yes, yes. You know, whenever I think about AI, people often talk about AI in terms of death and destruction: the Terminator, ARPANET... no, not ARPANET. What was it? Somebody remind me... yeah, Skynet. Skynet. Everybody thinks about it that way, and I go: AI is not going to do it by itself. There's somebody who will code the AI to destroy humanity, if that's what AI winds up doing.

Speaker 1:

It's going to be some human element that's introduced into the AI, whether intentionally or, probably, unintentionally. If anything will lead to the disaster that is Skynet and the Terminator, it'll be something like that: the human factor. So everything you said reminds me of that, and I think a lot of people will resonate with it. But we have to wrap it up there, Rich. I really want to talk to you for a longer length of time about this subject, because I see you as a philosopher in technology, and I'm kind of a garage philosopher myself, so I would love to dive down the rabbit hole with this sometime, and I hope you'll come back.

Speaker 2:

Okay, I just want to plant one more seed, and that is: you can actually poison the data, so that any decision you get out of your engine is wrong because somebody poisoned your data. So I thank you for the opportunity, and I look forward to talking to you in the future. And how can people find you and your books? JohnnySecuritySeed.com. You can go there, and in fact, there you'll find a code, JSS1, where you can get all my books and ebooks 10% off if you order through BookBaby. They're also available on Amazon and other places. Awesome, Rich Owen.
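Rich's parting point about data poisoning can be shown with a toy majority-vote classifier: the code is identical in both runs, but flipping a few labels in the training set flips the answer. All of the data, labels, and the 1-D feature here are invented for the demonstration:

```python
from collections import Counter

def majority_label(training, query, k=3):
    """Label the query point by majority vote of its k nearest neighbours."""
    neighbours = sorted(training, key=lambda p: abs(p[0] - query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (9, "malicious"), (10, "malicious")]
# An attacker who can write to the training set flips labels near the query.
poisoned = [(1, "malicious"), (2, "malicious"), (3, "benign"),
            (9, "malicious"), (10, "malicious")]

print(majority_label(clean, 2))     # benign
print(majority_label(poisoned, 2))  # malicious
```

The engine never changed; only the data did, which is exactly why verifying the data's integrity matters as much as verifying the code's.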

Speaker 1:

Thank you so much, and thank you for listening to this drop-in at Security Market Watch. Thanks, everybody. Bye.