Cyber Crime Junkies

AI Ethics Unveiled: Navigating Cybercrime and a Broken Industry

August 01, 2024. Season 5, Episode 16. Host: David Mauro.

We interview Steve Orrin, the Federal CTO at Intel, whose company makes the chips (the brains) in most computers on the planet. We discuss how to control AI ethically, the ethical considerations of AI, the use of AI in cybersecurity and in cybercrime, deep fakes, zero trust, and more.

There is a special discussion inside, so don't miss it!

Send us a text

Get peace of mind. Get competitive. Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com
 
Imagine setting yourself apart from the competition because your organization is always secure, always available, and always ahead of the curve. That’s NetGain Technologies – your total one source for cybersecurity, IT support, and technology planning.

Have a Guest idea or Story for us to Cover? You can now text our Podcast Studio direct. Text direct (904) 867-4466.

A word from our sponsor, Kiteworks. Accelerate your CMMC 2.0 compliance and address federal zero-trust requirements with Kiteworks' universal, secure file-sharing platform, built for every organization and especially helpful to defense contractors.

Visit kiteworks.com to get started. 

🎧 Subscribe now http://www.youtube.com/@cybercrimejunkiespodcast and never miss an episode!

Follow Us:
πŸ”— Website: https://cybercrimejunkies.com
πŸ“± X/Twitter: https://x.com/CybercrimeJunky
πŸ“Έ Instagram: https://www.instagram.com/cybercrimejunkies/

Want to help us out? Leave us a 5-star review on Apple Podcasts.
Listen to Our Podcast:
πŸŽ™οΈ Apple Podcasts: https://podcasts.apple.com/us/podcast/cyber-crime-junkies/id1633932941
πŸŽ™οΈ Spotify: https://open.spotify.com/show/5y4U2v51gztlenr8TJ2LJs?si=537680ec262545b3
πŸŽ™οΈ Google Podcasts: http://www.youtube.com/@cybercrimejunkiespodcast

Join the Conversation: 💬 Leave your comments and questions, or text the studio at the number above. We'd love to hear your thoughts and suggestions for future episodes!


Controlling AI Ethically: Navigating the Future of Cyber Crime

How To Control AI Ethically


TOPICS: controlling AI ethically, ethical considerations of AI, use of AI in cybersecurity, use of AI in cybercrime, AI industry guidelines and policies, AI risks and adoption, deep fakes, latest cybersecurity concerns, global cybersecurity strategy, zero trust and national security, zero trust in critical infrastructure, zero trust in the supply chain, zero trust for small business, how US cybersecurity strategy affects allies, driving change in cybersecurity today, most effective cybersecurity strategies, effective ways to reach zero trust today, understanding zero trust, real-life zero trust examples.

Takeaways

  • Trust and transparency are crucial in the ethical use of AI.
  • Diverse and representative data sets are necessary for accurate and fair AI outcomes.
  • Government regulations and industry guidelines play a role in shaping AI practices.
  • Organizations should have clear policies on AI usage to mitigate risks.
  • AI hallucination and data poisoning are potential risks that need to be addressed.
  • The AI industry is rapidly evolving, and organizations need to keep up with the changes.
  • AI is augmenting human jobs rather than replacing them, and organizations should focus on leveraging AI to make workers more efficient and effective.
  • Cybercriminals are adopting AI tools to enhance their attacks, and detecting deep fakes remains a challenge.
  • Organizations should have AI policies, provide continuous user education, and foster collaboration between business units and AI teams.
  • The future of AI includes practical guidance for ethical use and the optimization of AI implementations.

Chapters 

  • 00:00 Introduction and Guest Introduction
  • 03:06 The Importance of Trust and Transparency in AI
  • 09:32 The Need for Diverse and Representative Data Sets
  • 24:26 Creating Clear Policies on AI Usage
  • 26:23 Addressing the Risks of AI Hallucination and Data Poisoning
  • 28:06 Augmenting, Not Replacing: The Role of AI in Jobs
  • 35:26 The Use of AI by Cybercriminals and the Challenge of Deep Fake Detection
  • 46:09 Guidance for Organizations: AI Policies, User Education, and Collaboration
  • 51:24 The Future of AI: Practical Guidance and Optimization

 D. Mauro (00:03.022)
All right. All right. Well, welcome, everybody, to Cyber Crime Junkies. I am your host, David Mauro, and in the studio today we have my always positive sidekick partner, the Mark Mosher, alongside me. Mark, how are you, sir?

Mark Mosher (00:22.726)
Wonderful, David. I'm really excited about this episode. It's like somebody just handed me the keys to the Technology Resource Library, having this guest on. Isn't that right? Right? Tell the listeners who's on the episode with us today, David.

D. Mauro (00:32.782)
It is only only you never you've yeah.

Yeah, I'm excited. But I do want to say, you were never in a library when you were in school. So I don't know, I think I told you about it, maybe. But that's great. So we are joined by Steve Orrin, who's the federal CTO for Intel Federal, which is a wholly owned subsidiary of Intel, which makes the chips in almost every computer on the planet, basically.

Mark Mosher (00:44.646)
No, I just heard about him growing up. That's where I got it from.

Mark Mosher (01:07.322)
I'm going to go ahead and close the video.

D. Mauro (01:08.366)
Steve was named one of InfoWorld's Top 25 CTOs, received Executive Mosaic's Top CTO Executives Award, is one of WashingtonExec's top chief technology officers to watch, a guest researcher at the NIST National Cybersecurity Center of Excellence, a fellow at the Center for Advanced Defense Studies, and chair of the INSA Cyber Committee. Steve, welcome to the studio, sir. Very nice. And I'm sure there's a lot of things that I missed on that, but yeah.

Mark Mosher (01:40.838)
And an all around nice guy too. We got to put that in his intro. That's part of his bio from now on every time he comes on. And he's a really nice guy.

Steve Orrin (01:42.992)
Well, thank you, David and Mark and.

D. Mauro (01:51.534)
He really is.

Steve Orrin (01:52.08)
Well, thank you, Dave and Mark. It's a pleasure to be here.

D. Mauro (01:55.246)
Well, we're excited about having you. So what is new recently? Last time that we met, it was close to a year ago. You gave us your background, how you got into it. We talked about threat canaries, which was really interesting. Learned all about that, all these different types of ways that cybersecurity practitioners can leverage technology to kind of detect things and trap threat actors. Since then,

Mark Mosher (02:09.51)
Very.

D. Mauro (02:24.334)
a little thing called generative AI, which has been around for a while, but it really made a splash and has hit the mainstream, you know, all over the world. And it's dividing a lot of people. There are a lot of opinions on it. And, you know, when you see things in your executive role from the federal perspective, you know, one of the big concerns that we see is

How are we going to put some types of guardrails on it? And what are you seeing being debated right now? And what insight can you share with us?

Steve Orrin (03:05.072)
Sure. And there's a lot to go into there. I think every industry and every market has caught the AI bug and sees it as the new shiny object. And everyone's trying to figure out how best to leverage it, how to deploy it, how to get the most out of it, how to jump onto the hype cycle, if you will. The federal government's no different. AI in general, and the more recent generative AI and large language models, can be transformative

D. Mauro (03:08.014)
There is.

Steve Orrin (03:32.848)
for mission and enterprise applications and operations. It's a major advantage if we can figure out how best to leverage it. And so the government is spending a lot of time and energy focused on both practical applications of AI and research into developing AIs that work for their mission needs. And, as in a lot of the conversation and debate, how do we ethically use AI? And really, in the government space, one of the other key tenets is trustworthy AI.

How can I trust the AI? Because it's not like I'm making a recommendation about what presents to buy my kids for their birthday. It's making life or death decisions. It's giving information to operators, to the mission. It's being used in national security environments, being used for citizen services. And so one of the fundamental questions that a lot of people are asking are what are the ways that we can build trust into or at least have a way of assessing the trust? And that's...

a lot of times mixed together with the ethical use. And there is a connection there. Because if you're able to define how you're ethically using it, you're providing transparency into the way that you developed the AI, which is a foundation for how you then figure out how to trust it. But trustworthy goes a step further. Because it's not just, well, did I build it securely, or with trust in mind and with ethical controls, but how have I deployed it? How am I maintaining it? Am I constantly verifying that it isn't going astray?

And so trustworthy is a bigger nut to crack, if you will. And it's definitely part of the conversations as many of these really cool AI projects start to try to figure out how do they transition to practice and get deployed. The question that ultimately gets asked by the programs, by the mission operators, and by the non -AI developers is, okay, well, great. It just told me something. Can I trust it? When it presents me information, tells me there's something coming my way.

D. Mauro (05:24.878)
Right.

Steve Orrin (05:29.584)
Can I trust that? Do I act on that information? And so that opens the door to a variety of conversations around, how do I secure the AI, the development lifecycle? How do I know that the data that I got was secure and that was trustworthy? So it opens up a Pandora's box, if you will, of additional insights and questions that need to be answered as part of the process.

D. Mauro (05:42.062)
Right.

D. Mauro (05:50.51)
Yeah, it really does. So from a, you know, a mid market business executive, business owner or a leader in a larger organization who heads up a division, let's say, from their perspective, you know, and they want to leverage AI and they want to implement it for their employees. The risk of AI hallucinating or the risk of AI, the data being poisoned.

is, you know, that's one concern, right? But then there's also the responsible and ethical use of it. So is there two meaning, meaning on the one side, it could be injected with bad data, right? Which could lead to inaccurate results, but also how do you use it? Meaning, let's say you're not going to, you're not going to use it in a format that is poisoning the data or giving it

Mark Mosher (06:37.702)
Right.

D. Mauro (06:49.102)
purposeful, like intending to give it purposeful, incorrect data, but you just want to know how to use it so that you're not disclosing trade secrets, you're not using it, you know, to harm your organization's brand. Like when you were talking about the guardrails and the ethical use, kind of, can you explain to us what those were?

Mark Mosher (07:00.038)
Thank you.

Steve Orrin (07:03.92)
So the...

Mark Mosher (07:09.638)
Yeah.

Steve Orrin (07:11.12)
Sure. And there are a couple of ways to look at it, because those two questions actually get at fundamentally different parts of the lifecycle. But they're connected, because ultimately it's how you're using the AI. But at its core, you have to remember: AI is built from data. It is the ultimate data consumer. And your AI is subject to the old logic adage, garbage in, garbage out. And so knowing what data

D. Mauro (07:18.894)
Got it.

D. Mauro (07:35.374)
Right.

Mark Mosher (07:35.398)
All right.

Steve Orrin (07:38.064)
went into the AI, the development and the model creation, the training, is absolutely critical for both of those answers. On the ethical use, part of it is knowing what your end use case is, so you have the right data to drive those inferences, to drive those decisions, to drive those outcomes. Oftentimes it doesn't even have to be poisoning, a malicious act. Just having too limited a data set will give you bad results when you start applying it to a broader group.

There have been some unfortunate but very well documented examples in the healthcare space, where the data came from standard scientific and academic libraries of data sets that had been generated or produced over years, and then was applied to clinical use cases. And the problem, in the examples that are often used, was that much of the original data sets were collected from college students coming in, getting their bodies scanned, or submitting to various tests.

D. Mauro (08:26.382)
Mm.

Steve Orrin (08:35.344)
And when you look at the data, what you find is the demographic: it was 18-to-30-year-old white males. And so your data set is very limited. And yet when we use it in a clinical world, there are females, there are minorities, there are African-Americans, there are Europeans, a very diverse set that you're actually trying to apply that AI to. And so your fundamental flaw is that your data set was not representative. It wasn't diverse enough to actually be able to deal with the diversity of where you're trying to use it.

D. Mauro (08:40.846)
Right.

D. Mauro (08:48.942)
Right.

Steve Orrin (09:02.768)
And just saying, well, I'll use some synthetic data to try to generate some other populations really doesn't cut it. And so fundamental to both the ethical use and, like you said, preventing bad outcomes, is knowing your data, knowing that you have a diverse enough data set. And by the way, it doesn't mean that you have to break it, throw everything out the window; it also can help guide, from the ethical-use side, where you apply the resulting AI. If you've only trained on one demographic, it may be really good

D. Mauro (09:08.43)
No.

Steve Orrin (09:32.048)
for that demographic, but it will not translate well to others. And so what we're finding with a lot of the ethical and governance, and that's one of the terms we use in data science and AI is the governance controls need to be established way early in the cycle so that you can enforce or at least provide guidance on diversity of data across multiple populations, data types, so they get a richness in the data, because that will help you get to better outcomes.
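
To make the data-diversity point concrete, here is a minimal sketch of the kind of training-set representation audit Steve is describing. The field name, reference shares, and tolerance are illustrative assumptions, not anything Intel or any agency prescribes:

```python
# Illustrative sketch of a training-data representation audit.
# Field names, reference shares, and the tolerance are hypothetical.
from collections import Counter

def audit_representation(records, field, reference_shares, tolerance=0.10):
    """Flag groups whose share of the training set deviates from the
    target-population share by more than `tolerance` (absolute)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    findings = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            findings[group] = (observed, expected)
    return findings  # empty dict means no group is badly under/over-represented

# Example: a scan data set heavily skewed toward one demographic.
records = [{"demographic": "male_18_30"}] * 90 + [{"demographic": "female_18_30"}] * 10
print(audit_representation(records, "demographic",
                           {"male_18_30": 0.5, "female_18_30": 0.5}))
```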

D. Mauro (09:33.838)
Right.

D. Mauro (09:41.294)
Mm-hmm.

Steve Orrin (10:00.208)
And one of the things that ethical use and governance provides is that attestation. So now when I get to the other end, I say, OK, I want to trust this AI. I have the provenance. I have the evidence to say, OK, here are the data sets that were used to train. Here's the optimization tools I use to tune it so that when I make my decision, do I want to trust this or can I apply this to this different population or this different use case, I have at least transparency.

D. Mauro (10:07.662)
Exactly. Right.

Steve Orrin (10:26.992)
to know, am I going to be successful or do I need to do some augmented training, which is where a lot of the excitement now is, is taking these large language models that have been trained on a vast variety of data across multiple domains and then adapt them to very specific domain data sets to be able to help them hone in on the thing you actually care about. A great example is you may have an AI that was trained on every kind of vehicle, ship, airplane known to man across the past hundred years.

D. Mauro (10:33.358)
Right.

Steve Orrin (10:55.856)
But your use case may be trying to track airplanes coming into the airport and be able to report on the flying times and delays and things like that. So doing a spot training or an adaption training on the specific domain, modern aircraft that fly in and out of the United States will help give you better results for that specific domain. And we see this approach of how do we take existing AIs or existing work and get them to be

better suited for the mission area that you're applying them to. These are just some of the kinds of controls that folks are looking at, ways to wrangle in, if you will, the AI. The other thing to keep in mind, to your other point, though, is that ultimately, once you deploy AI, your job isn't done. And this goes to your point about hallucination and other problems that could happen. Some of it requires making sure you're securing the environment to protect against things like probing attacks and poisoning,

D. Mauro (11:41.07)
Right.

Hmm.

Mark Mosher (11:44.454)
Perfect.

Steve Orrin (11:53.392)
and some of those coercive attacks, especially when you open up to the broad internet: the prompt injection being able to try to skew the data. Because many of these generative AIs learn from every query. And so if I'm.

D. Mauro (12:04.974)
Right. So no matter who's inputting it, meaning even somebody with bad intent could be putting data in there, knowing what it's going to do to it.

Steve Orrin (12:13.296)
Exactly. And so part of it is securing the front end or the inferencing systems themselves to protect them and protect your model weights. The other is monitoring. And so one best practice we've seen is having continuous query monitoring. So basically checking the AI, is it giving me accurate results? And there's going to be a range. And you have that acceptable range of what we call good answers. But if you're constantly monitoring, you can then plot across time.

Are you seeing a skew in the confidence levels or skew in the results to be able to check? Hey, is it starting to hallucinate? Yeah.
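
A minimal sketch of that continuous query monitoring idea, assuming the inference system exposes a per-answer confidence score; the window size and acceptable band here are made-up values:

```python
# Track model confidence over a rolling window and flag drift outside an
# acceptable band of "good answers". Window size and band are illustrative.
from collections import deque

class DriftMonitor:
    def __init__(self, window=500, low=0.70, high=0.99):
        self.scores = deque(maxlen=window)
        self.low, self.high = low, high  # acceptable range of confidence

    def record(self, confidence):
        self.scores.append(confidence)

    def is_drifting(self):
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough history yet
        mean = sum(self.scores) / len(self.scores)
        return not (self.low <= mean <= self.high)

monitor = DriftMonitor(window=3)
for answer_confidence in [0.95, 0.93, 0.60, 0.40]:  # scores from inference
    monitor.record(answer_confidence)
    if monitor.is_drifting():
        print("confidence skew detected: investigate possible poisoning or hallucination")
```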

D. Mauro (12:46.67)
Mmm.

the validity on the level of the output that's coming in. So are organizations able to decide which AI, which machine data set they can work with, or are they all reliant or majority of them, unless you're Oracle or Intel or whatever, like are the majority of businesses, you know, leveraging open AIs?

Mark Mosher (12:51.11)
Right. Yep.

Steve Orrin (12:54.416)
Yeah, and then.

Steve Orrin (13:18.96)
So I would say the vast majority of both commercial and in many cases federal are leveraging the open source or commercial AI because they've done the heavy lift. I mean, open AI has trained for years on billions and trillions of parameters. And so they've done the initial investment to get you to that large language model.

Mark Mosher (13:25.606)
All right.

D. Mauro (13:28.11)
Right.

D. Mauro (13:37.614)
But once it's poisoned, is it poisoned for everyone or is it just poisoned?

Mark Mosher (13:40.614)
good question.

Steve Orrin (13:42.48)
So it's a good question. It depends on the poisoning. We find, and there have been some great papers on this, that poisoning can happen at any phase. And so for some of the large language models, the OpenAIs, the Bards, the Llamas, there have been targeted poisoning attacks, but they have teams that are actively monitoring and looking at ways where they can either undo the poisoning. So there are techniques of how you...

D. Mauro (13:54.51)
Right.

D. Mauro (13:59.278)
Mm-hmm.

D. Mauro (14:04.142)
I would think they could, yeah, I would think they could use the same optimization flow to eradicate the poison, right? Like if they see the poison, maybe, okay, maybe reduce it, water it down a little, dilute it perhaps, okay.

Steve Orrin (14:12.112)
Eradicate is a strong word there, David. Reduce it. That's better. Yes. Because one of the things, it was a Google paper put out a couple of years ago, that said once a large language model was poisoned, you can never completely eradicate that, because it's built into the new model, the new weighting, but you can reduce its impact.

Mark Mosher (14:14.042)
He's like, I wouldn't go that far.

D. Mauro (14:31.79)
Ugh.

Steve Orrin (14:38.224)
And one of the things that the controllers of those AIs can do, if they see that an AI is going too far off track, is actually take it back in time and start it again from a point in time pre-poisoning. And again, it's a question of when is the good time to do that. But in the worst-case scenario, they can always roll back. Exactly. Now, the flip side.
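
A tiny sketch of that roll-back control, assuming models are checkpointed to timestamped files; the directory layout and naming scheme are invented for illustration:

```python
# Keep timestamped model checkpoints so, if poisoning is detected, you can
# restore the newest checkpoint from before the suspect window.
# Paths and the file-name scheme are hypothetical.
from pathlib import Path
from datetime import datetime, timezone

CHECKPOINT_DIR = Path("checkpoints")  # files like model-20240301T120000.bin

def restore_before(cutoff: datetime) -> Path | None:
    """Return the newest checkpoint strictly older than `cutoff`."""
    candidates = []
    for f in CHECKPOINT_DIR.glob("model-*.bin"):
        stamp = datetime.strptime(f.stem.split("-", 1)[1], "%Y%m%dT%H%M%S")
        stamp = stamp.replace(tzinfo=timezone.utc)
        if stamp < cutoff:
            candidates.append((stamp, f))
    return max(candidates)[1] if candidates else None
```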

D. Mauro (14:49.614)
really?

D. Mauro (14:54.67)
Right.

Roll back before Mark started using it, probably, is what I'm thinking. Yeah, not even intentionally. Sorry.

Mark Mosher (15:00.102)
Right before I poisoned it.

Steve Orrin (15:05.488)
The flip side is, for a lot of your, whether it be a small business or even a large business or the federal government, the vast majority of the valuable or interesting use cases aren't just going to ChatGPT and typing in, give me an essay based on Shakespeare for an eighth-grade paper. It's for it to solve a particular problem within your domain. And so what we're seeing is this shift from sort of the massive open AI systems, not the company, but the general idea, to domain-

D. Mauro (15:11.534)
Yeah.

D. Mauro (15:22.862)
Right.

Mm-hmm.

D. Mauro (15:33.358)
Right.

Steve Orrin (15:34.736)
specific AI. And what they're doing is taking the work that was done by those large language models and then doing a very focused training on either their data set or a data set that's relevant for that domain. And healthcare is a great example. Finance, where there are data sets of health records across every hospital system, where I can take the large language model and train it on that. The big challenge goes back to your ethical use. I can't just take all these electronic medical records and ship them

Mark Mosher (16:00.326)
Mm-hmm.

Steve Orrin (16:03.792)
up to Facebook or Microsoft or OpenAI and say, hey, train on this too, because it's proprietary, it's personally identifiable information, it's healthcare data. But what they can do is build their own instance of a smaller large language model. They don't need the trillions of parameters.

D. Mauro (16:08.91)
Right.

D. Mauro (16:12.462)
Right, it's PHI, yeah.

D. Mauro (16:19.63)
Yeah, like slice the data and kind of put guardrails around it. Right.

Mark Mosher (16:21.958)
Yep.

Steve Orrin (16:22.16)
or slice the model: take the model, bring it local, and apply it to the data set in a controlled environment, to train it on the data that they care about, with the proper controls so you don't get exfiltration of the private information back to the mothership. And so we're seeing that approach. And there are techniques like RAG, which is very exciting now as well: how do I get these large language models focused in on my particular domain?

D. Mauro (16:28.046)
Mm-hmm.

D. Mauro (16:37.038)
Got it.

Steve Orrin (16:47.376)
And because I'm not trying to train for every possible query (you know, a doctor's not going to ask for the Shakespeare paper for an eighth grader), I only have to focus on the kinds of questions that a doctor would ask, around, you know, is this spot cancer? That's a much smaller set that I have to deal with, which means I can build better controls in on what are allowable prompts, I can reduce my poisoning, and I can also more easily verify my results are still in line with what I'm doing. And so we're seeing this massive shift
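
For readers who want to see the RAG pattern Steve mentions in concrete form, here is a rough sketch. The document embeddings are assumed to come from whatever embedding model you run offline; nothing here is specific to any one vendor:

```python
# The RAG pattern: retrieve the most relevant in-house documents and prepend
# them to the prompt, so a general model answers from your domain data.
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, indexed_docs, k=3):
    """indexed_docs: list of (embedding, text) pairs built offline."""
    ranked = sorted(indexed_docs, key=lambda d: cosine(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(question, context_docs):
    # Constrain the model to the retrieved domain context.
    context = "\n".join(f"- {d}" for d in context_docs)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")
```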

D. Mauro (17:01.198)
Right.

Steve Orrin (17:16.464)
across the industry, from just, you know, everything's going to happen in the big language models, to more domain-specific today. And where we project things going in the industry is to organization-specific. So you think about it: a healthcare organization will train on all the electronic medical records they have access to, and insurance data, today, as the current process. In the future, it will be your local pet veterinarian training on their medical records, for their patients, with their procedures,

D. Mauro (17:42.254)
Right.

Mark Mosher (17:43.526)
Huh?

Steve Orrin (17:45.328)
and their IP and their doctors' information, all contained on their cloud instance or on their on-prem instance, to get the maximum value out of the AI. Of course, it takes more work, because you have to collect and manage that data set. But we're seeing this trend toward moving in that direction. And what you're finding the big AI companies are doing is helping to provide the tools to help organizations transition, to get the most out of the AI systems they're playing with.

D. Mauro (18:14.094)
Let me ask you about government regulations. The congressional hearings that we've seen, right, have demonstrated to a lot of observers that several people in Congress might not be, like, advanced technology leaders, right? Are they coordinating with groups of advanced technology resources that understand this, to formulate regulations that make sense? You know.

Mark Mosher (18:44.486)
Yeah, good question. Yeah, they struggled with TikTok hearings.

Steve Orrin (18:51.568)
So the best way to answer that is with a two -pronged answer. One is the staffers from any of those congressional folks are really smart. Technically, they've got their domain. And they're the ones that interface with a lot of the tech companies.

D. Mauro (19:00.142)
Yes.

Mark Mosher (19:00.742)
Mmm.

D. Mauro (19:02.894)
I completely agree. I know many of them, and you're absolutely right. Yeah, but they're the ones who are in our ear. Yeah, they're the ones who are in our ear going, they don't know what they're doing, we have to guide them, right? Because they're very bright at what they do, but this is something brand new, and it's not something that maybe, you know, an elder statesman or stateswoman has grown up with, right? So...

Mark Mosher (19:06.406)
Yep.

Mark Mosher (19:14.374)
Hahaha.

Steve Orrin (19:26.544)
Exactly. And then the other areas where is in the government industry collaboration. So there's been both congressional mandated as well as DOD led, IC led, even civilian agencies are working with the industry associations, with collaborations, with think tanks, with labs. A lot of the research labs are working with academia and with industry to collaborate on helping to guide the adoption and use of AI.

D. Mauro (19:31.47)
Right.

Steve Orrin (19:55.76)
for both the federal government as well as how best to provide guidance. And so you'll see in like the executive orders and things buried down underneath there besides, you know, we have to have AI officers and proper ethical controls is, you know, NIST and NSF and other organizations are tasked with reaching out to and engaging academia, the industry and other governments to collaborate on building better guidance, building controls.

demonstrating the best practices. And so we're seeing that filter down. NIST has taken up the charter. There's the National AI Research, I forget what the extra R stands for, NAIRR, but it's basically an NSF- and government-funded engagement with academia and industry, to collaborate together on both advancing AI, but also on building in some of that guidance around ethical and proper use

D. Mauro (20:29.87)
Yeah, I saw that.

Steve Orrin (20:52.176)
of the AI and of the data sets that drive them.

D. Mauro (20:54.766)
Do you think there will be an overall federal regulation governing all Americans ever, or is it really going to be broken down to individual verticals or individual industries, most likely?

Steve Orrin (21:07.76)
So, David, it's a really good question. I'm not a policy person per se. I can tell you that, in general, the typical way the US government works with industry is that the government provides guidance and requirements. And you'll find vertical, industry-specific regulations: the finance industry will come up with its guidelines, healthcare will come up with its guidelines, and the federal government will have guidance for its individual agencies. So the DOD was tasked with coming up with ethical

D. Mauro (21:27.374)
Right.

Steve Orrin (21:36.944)
and operational use guidelines for AI. And so they did that. They developed it, and one of the cool things is, they published it. So you can go to the DoD and look up their AI use policy, their AI guidance, because it's a great representation that any large organization should look at and say, I think we can do about 60% of this, because they already figured out some of the gotchas there. We are seeing in some other governments, the UK, for example, where they have a little bit different structure in the relationship between

D. Mauro (21:42.734)
Mm hmm. Yep.

D. Mauro (21:54.03)
Right.

Exactly.

Steve Orrin (22:05.008)
the government and the private sector, where in certain cases they are dictating, this is what you will do. I don't think that happens here; honestly, government in the U.S. is more advisory, and then for its own consumption will stand up controls as a representation of what different industries will do. But I think you'll see different verticals come up with their own, and there may be a lot of commonality. Eventually, we're going to want commonality, because the big tech companies don't want to build 50 different flavors for 50 different industries.

D. Mauro (22:32.174)
right yeah

Steve Orrin (22:33.424)
But I think you'll see it driven more by the vertical itself: finance, healthcare, critical infrastructure, energy, things along those lines, which will drive their adoption and requirements for how they're going to ethically use AI. I think what you're seeing a lot of the focus on in the government is to get the ball rolling on, what are those guidelines? What are some ethical use primitives? And NIST, like I said, will come up with guidance documents to help industry have a baseline to begin with.

D. Mauro (23:02.542)
Excellent. Excellent. So what do you say to organizations that have not yet created policies on what their employees should be doing with AI?

I mean, shouldn't they have some? There are some that I know of that are essentially saying to their employees, until people figure this out, do not use it on company devices. And to me, I think that puts them at a competitive disadvantage. Is there a middle ground? Is there a responsible framework where we could just tell everybody, don't put

Mark Mosher (23:32.422)
Right. Yep. Yep.

D. Mauro (23:49.166)
personal information or any brand names in there, or any code for our products, that type of thing. Any guidance there? Any thoughts on that?

Mark Mosher (23:50.342)
I'm going to go ahead and close the video.

Steve Orrin (24:00.048)
So there's a couple of ways to answer that. I'll start with your first comment: any company that doesn't yet have a policy better get one quick, because odds are, just like we had with cloud and wireless and every other technology advancement, they've got shadow IT, they've got shadow AI happening. In lieu of a policy, it's going to happen regardless at those organizations, because at the end of the day, business units have to be competitive, have to get business done.

D. Mauro (24:06.51)
Okay, good. Yes.

Mm -hmm.

D. Mauro (24:15.374)
Yes.

Mark Mosher (24:15.398)
Yeah.

D. Mauro (24:19.342)
Right. Absolutely.

Steve Orrin (24:25.456)
and they're going to use whatever tools there are. So without guidance and without policy, they're going to go use it. So it's already happening. So they need to get a policy in place. The thing that we found, and I saw this when ChatGPT first came out, about a month or two after the major push: there were a couple of examples where corporate IP from various companies ended up in the ChatGPT engine. And it was exposed, and there was some media about it. And there was a knee-jerk reaction: we're shutting off ChatGPT across all these big companies

D. Mauro (24:45.422)
Correct. Yep.

Mark Mosher (24:48.774)
huh.

D. Mauro (24:52.686)
Right. Yeah.

Steve Orrin (24:53.968)
and all these schools, and everyone just said, no, no, no. And it's like putting your head in the sand; it actually doesn't solve the problem. And so what we saw very quickly thereafter is a lot of large organizations drafted and put out a policy about how to properly use it. So it wasn't blocked by default. Now it was things like, don't put in personal information, don't put in corporate data, don't put in anything that you would have to sign an NDA for. So they created clear guidelines on how best to interact with this tool,

D. Mauro (25:01.997)
Right.

Steve Orrin (25:24.368)
similar to how you would interact with social media. You don't put your corporate IP into your Facebook account, nor do you publish your social security number. So it's similar, sort of basic guidance of how to leverage a tool, but not how to give away the kingdom. And they enforce that policy both with Word documents that tell you here's the policy and training, as well as with monitors that are looking at some of the connections and doing spot checks. The same way we do things like DLP or data loss prevention.

D. Mauro (25:26.958)
Mm-hmm.

Right.

D. Mauro (25:51.63)
Right.

Steve Orrin (25:51.792)
being able to look for social security numbers exiting, or keywords like product code names or source code being transferred over these connections. And those are the kinds of things where you can do your monitoring from an IT perspective. But ultimately, having the guidance and training your employees goes a long way, because most employees are going to do the right thing if they know what the right thing is. And as you said, in this modern day you really can't not be using AI somewhere.
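
A bare-bones sketch of that DLP-style check on outbound AI prompts; the regex and the internal code names are example placeholders, not a real product's rule set:

```python
# Scan outbound AI prompts for Social Security numbers and internal code
# names before they leave the network. Patterns and names are examples only.
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CODE_NAMES = {"project_falcon", "widget_v2_source"}  # hypothetical internal terms

def violations(prompt: str) -> list[str]:
    found = []
    if SSN.search(prompt):
        found.append("possible SSN")
    lowered = prompt.lower()
    found += [f"code name: {name}" for name in CODE_NAMES if name in lowered]
    return found

print(violations("Summarize project_falcon roadmap for 123-45-6789"))
# -> ['possible SSN', 'code name: project_falcon']
```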

D. Mauro (26:05.966)
Right.

Mark Mosher (26:15.334)
Right.

D. Mauro (26:15.63)
Mm-hmm.

Steve Orrin (26:22.352)
It's absolutely transforming every vertical and every industry that's out there. What you'll find is that there's a full gamut of where you are on the maturity roadmap, from cool little lab experiments on one side to full-scale adoption on the other; everyone is somewhere on that spectrum. And what you'll find, even inside an organization, is they'll have one team that's way ahead and one team that's way behind. So we're all learning.

D. Mauro (26:33.294)
Absolutely.

Steve Orrin (26:46.8)
And of course, at the same time, the AI industry itself is changing rapidly. So as soon as we get comfortable with one thing, there's a new thing coming. And we got to go try that thing too. So it's a constant thrash that everyone's trying to keep up with. And the technology is advancing both from the hardware, from our side, as well as from the software that's available to help people get better at developing these solutions.

Mark Mosher (26:51.814)
Okay.

D. Mauro (27:08.526)
It's gotten so much better just in the last six months. It's unbelievable. I've always used it for ideating and for content creation and coming up with ideas, because you can go and Google something and then, with all the blue links that come back, go and look at each one. Or you could ask an AI that's already done that, and it's going to summarize it all for you. Right. And it's really helped speed things up. So I think if it's just done in a

Mark Mosher (27:12.742)
Mm -hmm.

Mark Mosher (27:28.326)
Yeah, right.

D. Mauro (27:36.878)
good way, it's really powerful, because it just speeds up so many manual tasks. I mean, to me, it's always, tell me what you think of this, because I wanted to ask you this. I hear it all the time: people are worried that AI is going to take their job. It's a knee-jerk, fear-based reaction, right? But to me, it's a generational technology. It's like the

automobile, right? When the automobile came out, I believe we lost some horse-drawn carriage mechanics, people who were really good at that. But it spawned an entire industry of automobile engineers, right? And designers and factory workers and everything else. And so, the positive impact compared to that small group... I still think it was generational.

Mark Mosher (28:16.07)
Right.

D. Mauro (28:36.494)
Do you feel something like that here? I mean, is it very similar, or no?

Steve Orrin (28:41.808)
There's definitely similarities, you know, like someone who was doing sort of data entry or sort of menial tasks are going to maybe need to become AI developers so that they can operate in the new world. But what I find a lot of the practical implementations of AI, it's really about augmenting my job, not replacing my job. There are many examples and in all domains of where leveraging the AI is going to do a couple of things for you. Number one, it's going to make you more efficient with the work you're doing.

D. Mauro (28:50.638)
Right.

D. Mauro (28:58.862)
Yes. Correct.

Mark Mosher (28:59.622)
Yeah, yeah, exactly.

D. Mauro (29:11.63)
Mm-hmm.

Steve Orrin (29:11.728)
It's also going to eliminate a lot of the manual, menial work and let you focus on the cool stuff or the interesting problems. Exactly that, and more. It's going to reduce a lot of the manual tasks, so you'll be freed up. And the key thing is, like we said, there's a technology revolution; there are also going to be organizational and cultural revolutions that come along with it. It's not just the people that are impacted.

D. Mauro (29:19.182)
Right, right. Less custodial, more strategic. Yeah.

Steve Orrin (29:37.872)
The managers of teams need to understand how best to leverage the AI so that they can give their workforce the right tools, but also then measure them on what they're actually good at. One example I've talked about many times is the application of AI for cybersecurity professionals, and how it's never going to replace my advanced malware hunter or even my firewall administrator. But if I could take 80% of their job, which is, you know,

D. Mauro (30:04.654)
Right.

Steve Orrin (30:08.016)
fighting fires on a daily basis, doing patches, rolling out firewall updates, checking on the latest status of vulnerabilities, and automate that with an AI and a machine learning engine, I can then allow them to focus on the 20% that no AI will detect, because it's the one-time new APT, or it's the ransomware campaign that we haven't seen before. So we can actually focus that underpaid, overworked, and very tired cyber team on the 20% of hard problems

D. Mauro (30:33.902)
Mm-hmm.

D. Mauro (30:38.286)
Right.

Steve Orrin (30:38.32)
and let the AI deal with the daily firefighting, which is not a good use of their time anyway. And you actually will find in the organizations that have done that, their people are in better moods because they actually feel like they're contributing and they're working on hard problems. And they're not constantly fighting the vulnerability of the day. And so that's, I think it's going to be one of the key ROIs of using AI and cybersecurity. A lot of folks are talking about, well, I'm going to use my AI to detect that one time.

D. Mauro (30:43.15)
Right.

D. Mauro (30:48.206)
Mm-hmm.

Steve Orrin (31:06.352)
really cool malware that no one has ever seen before. Maybe, but I think that's not actually the best use of AI. So you find that one; what about the next 40 that come down the line? However, if I use the AI to deal with the 80% of what I call the stupid stuff, the stuff that just happens every day, the thousand hits against the firewall that you can deal with in a more automated fashion, you actually get a better return and a better team, because they feel like they're doing something that really leverages their expertise,

D. Mauro (31:16.206)
Right.

Steve Orrin (31:35.984)
and letting the machine learning deal with the more mundane, the more daily routine. And the other cool thing is, those daily routines, we've done them so many times, we have a rich data set of what happens there to train the AI on in the first place. You don't have good data sets on that esoteric, one-off, nation-state APT; that's where you need the human. And so it's actually a really good application of AI. And you see the same thing in contract management or document management or any of the other more mundane manual processes.

Mark Mosher (31:48.774)
Good point.

D. Mauro (31:54.094)
Right. Exactly.

D. Mauro (32:01.326)
Mm-hmm.

Steve Orrin (32:04.784)
If I can use the AI to help automate, or to reduce the time spent on that, the compliance person can then verify, okay, here are the deviations, as opposed to looking through the 4,000 things that were correct. And so you're really optimizing your team to be more efficient and to then scale, by using AI in the right ways. That's going to take some cultural changes on the management side and on the leadership side, of knowing the best place to apply that AI.

And oftentimes we get caught up in the shiny object. Well, AI can solve all my business problems. No, it's not going to make your company a better company because it's going to sell more widgets, or because your sales team is going to get replaced; it's going to make your sales team much more efficient. And that's the way to think about where AI is ultimately going to get us: it's really that augmentation. It's going to help the worker, not replace the worker, in most cases.

D. Mauro (32:36.11)
All right, now.

D. Mauro (32:55.662)
Well, yeah, I mean, in the sales environment, we've already identified ways that it helps. When you have all these meetings throughout the week, you have to manually review your notes to determine next steps, send off the emails, do all that. AI can do all that. It can transcribe the meetings. You can have AI review that and spit out exactly what the next steps were that were discussed in the meeting, specifically what the client said. So that way you're not

Mark Mosher (33:20.678)
Uh-huh.

D. Mauro (33:25.518)
relying on your own memory of what they said, it's actually what they said. And then you can fire off and take concrete next steps. And it's, you know, 40 meetings with all of this different, you know, note taking and things like that can be done in an hour. It's really quite...

Steve Orrin (33:45.296)
It is. And think about where that goes next. So now, first it gives you the information, but imagine the next phase of that, where it schedules the next meeting, it knows who to invite, it knows the tools you need and can already deploy them to your system. We're getting to the point of that intelligence, as it learns our workflows, where it can help us be more efficient at the job that we're trying to do. At the same time, it also shows why security is going to be very critical.

D. Mauro (33:48.014)
Yeah.

Steve Orrin (34:12.592)
Because we're starting to rely on these AI technologies to make decisions, to inform the way we think about things. And so in that example, when it gives you the meeting notes, you're going to make decisions based on that. But you have to know, a human didn't generate those notes. And so how do you trust that you got the right action items? That's where the trust comes in, and having built-in controls and monitoring to be able to trust that the AI is leading you down the right path.

D. Mauro (34:29.23)
Right.

Mark Mosher (34:30.502)
Good point.

D. Mauro (34:32.494)
Yep.

Steve Orrin (34:40.047)
Or is it sending you off to down a rabbit hole that actually isn't going to be beneficial to your sales team?

D. Mauro (34:45.07)
Yep, that's exactly right. And that's where the putting those guardrails in really becomes so important.

Mark Mosher (34:55.142)
Well, with those being some of the positive outcomes and uses of AI... you know, although I'll admit, I thought cybercrime was a passing fad about 10 years ago, and apparently I was completely incorrect on that one. It was a bad call, Mosher, bad call. But what about the early adoption and use of AI by these cybercriminal gangs, or nefarious uses for AI? What are you seeing out there? I mean, it's already been going on, but it seems they're leveraging these tools and those

D. Mauro (35:05.582)
Ha ha.

Crime's always up, man.

Mark Mosher (35:24.934)
datasets more than maybe some of these verticals in the consumer industry are.

Steve Orrin (35:29.904)
So Mark, you're right on there. The adversaries, the cybercriminals, have absolutely adopted the technology, because any edge they can get helps in generating revenue for them. And that's the thing to remember: they are revenue-generating organizations, or they're trying to accomplish their nation-state goals. And they're not held to the same requirements of reliability and trust and ethical use that legitimate corporations are.

So they're much quicker to adopt new technologies, to try them out, and their returns don't have to be, well, we need to have 85% confidence on any AI before we deploy it. If it works 10% of the time, that's 10% of the time more than they had before; that's a win. And we're seeing them use all sorts of different AI tools across the vast array of their attack vectors, everything from really advanced phishing campaigns. If you remember, 10, 15 years ago, you'd get a broken-English email that you could definitely tell was written by a foreigner,

Mark Mosher (36:02.726)
Right. Yeah.

D. Mauro (36:04.43)
Right.

Mark Mosher (36:09.19)
Yeah.

D. Mauro (36:19.374)
yeah.

Steve Orrin (36:24.176)
and you knew that wasn't your bank. Now, with AI tools, they can not only craft a really well-written message to you, but if they're targeting you, Mark, they can do some reconnaissance on your social presence, on your emails, and on other things, to be able to tailor that phishing campaign, because ultimately what they want you to do is click the link, so that you will say, yes, that's my bank, I'm going to click the link.

I think one of the scariest and yet most elegant examples of the combination of AI tools being used for cybercrime was the story from a few months ago of a company's financial person who was phished through a deep fake video and a live chat bot into transferring $25 million out of the company. And so the video was of the CFO, yeah, the CFO, with an interactive chat and a video

D. Mauro (37:09.966)
yeah.

Mark Mosher (37:12.486)
Yep.

D. Mauro (37:12.654)
yeah, now we've we've we talk about it. Yep.

Mark Mosher (37:15.878)
Yep.

Steve Orrin (37:20.272)
that got this person to basically send the money. And so you think about it: they combined a chat bot and deep fake video with a phishing campaign targeting a particular organization, and they were successful to the tune of $25 million. It shows you the level of sophistication that's already being achieved by the cybercriminals. And the reality is, no antivirus product today would have detected that.

Mark Mosher (37:31.526)
Wow.

Mark Mosher (37:44.902)
No, no.

D. Mauro (37:45.966)
No. And that's what I wanted to ask you. Where are we at in terms of deep fake detection? I believe Microsoft and a couple of companies have detection that can watermark a video, or look at it after it's already recorded, and they can play it and say, either by looking at the metadata or by evaluating it, that it

Mark Mosher (38:08.87)
Mm -hmm.

D. Mauro (38:15.278)
was altered, right? But what about live video? Is there anything out there that can do that?

Steve Orrin (38:23.088)
So we've seen there are a couple of ways to deal with this. I'll start with a really simple thing: if this happened today, if a deep fake tried to call you, Mark, and get you to transfer all the money out of your company by telling you, I'm the CFO, there's a little trick that a couple of people have talked about online that would help you at least verify. And that's in a live video: have the person wave. Because the odds are they've got really good video of your face that they're doing the deep fake on. But.

D. Mauro (38:40.878)
Yeah, have them turn. Yep. Yep.

Mark Mosher (38:49.606)
Yeah.

D. Mauro (38:49.934)
Mm-hmm.

Steve Orrin (38:51.568)
Responding to, hey, I want to see you wave hello, or, you know, give me a thumbs up, is something they wouldn't have trained on. And so it would be a quick way to see if the AI could respond to that. Again, that's a short-term fix for a bigger problem. Where the research is going, we're seeing technologies; one is in understanding the techniques that the deep fake generators are using. And so Microsoft, Facebook, and many others are looking at ways to detect if a video looks like it was crafted by AI.

D. Mauro (39:02.03)
Great point.

Mark Mosher (39:05.03)
Yeah.

Steve Orrin (39:19.696)
But honestly, that's only going to catch the sort of commercial or open source tools that everyone can get access to. The nation-state adversary or well-funded cyber gang is going to defeat those. And so the other area of research, and there's a tool out there called FakeCatcher, one example of some research that Intel and some universities have put out, is looking at biomarkers. So right now I'm looking at both of you, you're looking at me, but over the course of our video, my eyes have gone this way, my eyes have gone that way.

D. Mauro (39:19.726)
Mm-hmm.

D. Mauro (39:24.942)
Right.

Right.

Steve Orrin (39:48.592)
And so when the deep fake creates my face, it creates my image as an amalgamation of all that video. There are detectors; the two main ones are eye movement (my eyes go like this, or one's going this way and one's staying straight; that's an easy detect) and the blood flow in my veins. The actual cameras we're using have much better capacity than even what we use in our computer conversations. And so embedded in these videos, there's actually enough definition to be able to detect

D. Mauro (39:58.35)
Mm-hmm.

Mark Mosher (40:08.102)
Wow.

Steve Orrin (40:18.512)
the blood flow in my veins. And it's really, you know, with the right bio algorithms, you can detect when it's not the regular thump, thump, thump of a real pulse; a deep fake isn't going to give you that steady rhythm. So that's one of the stronger detections we have today
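
An illustrative toy version of the blood-flow idea behind detectors like FakeCatcher: real faces show a faint periodic color change at heart-rate frequencies, so one crude check is whether the dominant frequency of the face region's mean green-channel signal falls in a plausible pulse band. This is a sketch of the concept only, not Intel's actual algorithm:

```python
# Crude remote-pulse check: does the face's mean green-channel signal have a
# dominant frequency in the human heart-rate band (roughly 0.7-4 Hz)?
import numpy as np

def has_pulse_like_signal(green_means, fps=30.0, band=(0.7, 4.0)):
    """green_means: per-frame mean green value over the face region."""
    signal = np.asarray(green_means, dtype=float)
    signal -= signal.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    dominant = freqs[spectrum[1:].argmax() + 1]  # skip the zero-frequency bin
    return band[0] <= dominant <= band[1]

# Synthetic example: a 1.2 Hz (72 bpm) flicker reads as pulse-like.
t = np.arange(0, 10, 1 / 30.0)
print(has_pulse_like_signal(100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)))  # True
```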

Mark Mosher (40:26.278)
Yeah.

Mark Mosher (40:38.726)
Mm-hmm.

Steve Orrin (40:48.016)
from a facial or human perspective. It gets much harder when we look at, sort of, surveillance video, where it's a grainy image to begin with. And those are some of the things where we're going to need better tools. And we're already starting to see some monitoring of the tools. Another thing that companies like Facebook and Microsoft have access to is the vast majority of the real videos. You think YouTube and Facebook: they actually have all the videos that have been posted out there. So when someone trains their deep fake generator

Mark Mosher (40:55.334)
yeah.

D. Mauro (41:09.582)
Right.

Steve Orrin (41:17.904)
on those videos and puts together an image, at the end of the day, a Microsoft or a Google or these others can look at it. I'm looking at Mark's picture right now; you're sitting there in that room. There's an original video of you doing that, and I can detect what was going on. And so they can do a compare and say, is this video the same as the one that was posted on the site 12 years ago? And are there changes? So there are techniques that you can use. It's not going to catch the real-time case, when someone's live, but it's a way to sort of go into the repository and say,

Mark Mosher (41:39.046)
No.

Steve Orrin (41:47.696)
is this deep fake video being published? And we've seen this being done on Twitter posts and things like that, where someone grabs a video from Bosnia from 12 years ago and says, this is happening now in Sri Lanka. And they're actually able to say, no, we saw this; everything but the humans in the picture existed before, and we have video evidence of it. And they can do those correlations. So we're seeing a lot of innovation. It's still going to be a while before we get it practically implemented

D. Mauro (41:58.606)
Right.

Steve Orrin (42:16.56)
into our viewers and into our cameras, so that we can do this kind of real-time detection. But this is a growing area. And in the US, with the upcoming elections and other things that are going on, misinformation is a big deal. And so we're seeing a lot of investment in, how do I detect whether the information I'm seeing is real? Because the reality is, very few people physically see anything today. We are all seeing videos of something that was seen. And so there's a real question about how do I trust

D. Mauro (42:26.318)
yeah, misinformation is a big concern.

Mark Mosher (42:28.294)
yeah.

Mark Mosher (42:43.174)
Right.

Steve Orrin (42:45.968)
the information. And deep fakes are going to be an ongoing problem until we as an industry come together and better understand how to detect them.

D. Mauro (42:52.942)
Yeah. Yeah, I mean, and you have these groups on Telegram, the Yahoo Boys and these groups from Nigeria, where they are posting live captures of how they're using deepfake generators to socially engineer the elderly in these romance scams and these sextortion scams and things like that. And they're openly posting this stuff.

They're like, here you can do, you can buy this package and you can go and commit this crime for X amount. You can make this amount. And it's really disturbing. I mean, it's a major cyber crime concern.

Steve Orrin (43:35.824)
And David, to that point, you have to remember it's a trillion-dollar industry. And the only way you get there is by having the same kind of efficiencies you'd find in a legitimate trillion-dollar industry: customer support, how-to videos, licensing. I mean, you know, they showed this with some of the ransomware and APT products that you could buy on the dark web. There's a number to call. And what was fascinating is when you get hit by a ransomware attack and they pop up on screen, you know, pay us 12,000 Bitcoin to get your key.

D. Mauro (43:39.598)
Yeah.

D. Mauro (43:45.422)
Right. Yep. Yep. Yep.

D. Mauro (43:55.022)
Yeah. Yes.

Steve Orrin (44:05.104)
There's a little number there. You can call somebody to help you get Bitcoin so that you can pay them, because they know that's the only way they're going to get paid.

D. Mauro (44:10.03)
And they're really helpful. They're really good. It's like the best customer service you can get.

Mark Mosher (44:13.83)
Yeah.

Steve Orrin (44:14.224)
It is. They've got an incentive to get you to purchase the Bitcoin.

D. Mauro (44:17.326)
Yeah, that's hilarious. Unbelievable. Yeah, that's so true. We're very familiar with the ransomware-as-a-service model. There have been so many changes this year with the takedown of LockBit and the BlackCat fiasco. So it's quite interesting. So what is coming up next? What are we

to expect, and what can people do from an organizational perspective? Have an AI policy. If you don't have one, you need one quickly, right? Continuous user education. What else can organizations make sure is on their agenda?

Steve Orrin (45:09.776)
So I would say there are three things that would be guidance for the business units and the CIOs in these organizations, small and large alike, besides the bare bones of policy and governance. One is: get the tools in the hands of your developer communities. Help them be successful. Give them access to the right tools. Give them the training they need. Because one of the things we're seeing is that everyone has a role to play in AI. The

D. Mauro (45:20.878)
Yes.

Mark Mosher (45:27.142)
Uh-huh.

D. Mauro (45:38.03)
Hmm.

Steve Orrin (45:38.448)
most successful implementations and the most successful organizations recognize it's a team sport. So while you have your data scientists and your AI model tuners, and they're really smart in their domain, they're only as good as their understanding of what they're trying to do. And so marrying them with the business people and the legal folks, having those people work with them to provide the requirements, to say which data sets are appropriate, to give them the information. And what we see is that this teaming early in the development lifecycle, to get the right controls, the right

D. Mauro (45:54.99)
Great advice.

Steve Orrin (46:08.24)
business outputs and the workflows identified, gives you much better outcomes on the other end. Plus those teams now have a vested interest. They've been involved with the development, so they're much more likely to adopt it once it comes out the other end. And so getting your teams, culturally, everyone working together with the AI teams has been much more successful than the organizations that create a new division, where it's "that's our AI group, they're special, they're off here, we're not going to let anyone talk to them." That is a huge mistake.

D. Mauro (46:21.87)
Absolutely.

D. Mauro (46:34.574)
Right. It is a huge mistake.

Steve Orrin (46:36.816)
So getting those teams working together. The next one is helping the transition. There are a lot of really cool things happening in little labs and little demonstrators at every company. And they get out there, they show their project, and then what? Help, from an organizational standpoint, to figure out: how do I take it from the lab to the real world? Some of that is having the right funding in place to scale it once it's completed. The other is on the team that's building it: make sure you go after a project that actually is meaningful.

Mark Mosher (46:43.622)
True.

Steve Orrin (47:06.704)
So having a cool AI that really gives you better odds in your football pool is not going to be the win. At the same time, you don't want to put the AI project on the most mission-critical thing in your organization, because when you fail, and you will a couple of times along the way, you don't want to crater the business. So I like to call it finding the Goldilocks project: find the one that's meaningful to the business, but that's not going to crater the business if you screw it up. And use that as your demonstrator, because once you're done, someone's going to find value in deploying it, because it's meaningful to the organization.

Mark Mosher (47:11.878)
Thank you.

D. Mauro (47:12.974)
Right.

Steve Orrin (47:36.4)
And then it helps you figure out how to scale it to the more mission-critical or enterprise-wide applications, once you've shown the benefit on something that does have impact. And so that's the second thing. And then the last thing is knowing that it's not a one-and-done. It's not like, I built my AI, I throw it out there and I'm done. It's a continuous process. And that's both from enhancing and making the AI better, but it's also about the security and governance controls. And that's what I talked about in the beginning around monitoring, both real-time monitoring

Mark Mosher (47:52.198)
That's right.

D. Mauro (47:52.462)
Right.

Steve Orrin (48:03.952)
of the output and the results in the wild, as well as checking to see, you know, is it hallucinating? Or have we taken it too far? One example we've seen is they'll take a big language model and apply it to a very specific domain, and they constrain it in that domain. And one of the things we find is this great term I was taught a number of months ago, called catastrophic forgetting. It's actually a really cool term. And it's basically when you optimize and you miniaturize an AI too much,

it will forget the things it learned. And I actually want to get a t-shirt that says, my AI and I catastrophically forget, because it's a great term. But it often happens, we see, when you're taking a large language model that's trained on such a large data set and you try to miniaturize it too much to fit a specific domain: if you don't account for that miniaturization and aren't able to enhance the richness

Mark Mosher (48:35.206)
Really?

D. Mauro (48:41.166)
That's fantastic.

Mark Mosher (48:52.326)
Mm-hmm.

Steve Orrin (48:59.728)
of the data to compensate for the fact that you're reducing some of those connections, it will end up forgetting the key things it learned in the first place.
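
Catastrophic forgetting is easier to believe once you've seen it. In the toy sketch below, a tiny logistic-regression classifier learns task A, is then fine-tuned on a task B whose rule deliberately conflicts with A, with no replay of A's data, and its accuracy on A collapses. The blob tasks, model, and hyperparameters are all invented for illustration; the mechanism, new gradients overwriting the weights the old task relied on, is the same one Steve describes at LLM scale.

```python
import numpy as np

rng = np.random.default_rng(42)

def blobs(center1, center0, n=200):
    """A linearly separable task: class-1 and class-0 Gaussian blobs."""
    x1 = rng.normal(center1, 0.5, (n, 2))
    x0 = rng.normal(center0, 0.5, (n, 2))
    return np.vstack([x1, x0]), np.hstack([np.ones(n), np.zeros(n)])

# Task A: class 1 on the right. Task B reuses the same feature with the
# opposite rule, so fine-tuning on B overwrites the weights A relied on.
XA, yA = blobs([+2.0, 0.0], [-2.0, 0.0])
XB, yB = blobs([-2.0, 0.0], [+2.0, 0.0])

def train(w, b, X, y, lr=0.5, epochs=300):
    """Full-batch logistic-regression gradient descent, no rehearsal."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(class 1)
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * float((p - y).mean())
    return w, b

def acc(w, b, X, y):
    return float((((X @ w + b) > 0).astype(float) == y).mean())

w, b = train(np.zeros(2), 0.0, XA, yA)
print("after A, accuracy on A:", acc(w, b, XA, yA))   # ~1.0
w, b = train(w, b, XB, yB)        # fine-tune on B; A's data never replayed
print("after B, accuracy on B:", acc(w, b, XB, yB))   # ~1.0
print("after B, accuracy on A:", acc(w, b, XA, yA))   # collapses: forgetting
```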
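
On the continuous-monitoring point Steve raises just before this, one simple practical form is a rolling alert on a per-output quality check. A minimal sketch, with hypothetical names throughout; the `flagged` signal could come from a groundedness checker, a policy filter, or sampled human review.

```python
import random
from collections import deque

class OutputMonitor:
    """Rolling alert on the rate of flagged model outputs."""

    def __init__(self, window: int = 200, alert_rate: float = 0.05):
        self.results = deque(maxlen=window)   # recent pass/fail checks
        self.alert_rate = alert_rate

    def record(self, flagged: bool) -> bool:
        """Record one output; return True once the rolling rate is alarming."""
        self.results.append(flagged)
        rate = sum(self.results) / len(self.results)
        return len(self.results) == self.results.maxlen and rate > self.alert_rate

if __name__ == "__main__":
    random.seed(7)
    mon = OutputMonitor(window=100, alert_rate=0.05)
    # Simulate drift: the model starts hallucinating more after step 500.
    for step in range(1000):
        p_bad = 0.02 if step < 500 else 0.15
        if mon.record(random.random() < p_bad):
            print(f"alert at step {step}: flagged-output rate above threshold")
            break
```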

Mark Mosher (49:06.726)
Wow, it makes sense though, right? Yeah.

D. Mauro (49:06.766)
my gosh.

Steve Orrin (49:09.648)
Yeah, but they came up with a cool term for it.

D. Mauro (49:11.758)
So that's what happened, Mark. You catastrophically forgot. That is great. Yeah, we need those shirts. That's fantastic. Steve, thank you so much. Steve Orrin, Intel Federal: remarkable insight. What you're seeing and your efforts are just bar none. It is just phenomenal.

Mark Mosher (49:13.446)
Yeah, I know. I like the idea of the t-shirt too, Steve. I like that. I'll get you one for Christmas, buddy.

Steve Orrin (49:18.927)
Excellent, excellent.

D. Mauro (49:40.814)
What's on the horizon for you? What are you doing in the near future? Any presentations? Any speaking engagements? Where can people find you?

Steve Orrin (49:54.64)
So the best place to find me and what I'm doing is on LinkedIn. It's sorrin: S-O-R-R-I-N. That's where I put out where I'm going and what I've done. There are some really cool talks coming up this summer in the AI space and the security space, so keep an eye on my LinkedIn to see what's coming next. I will say that for those of us in the security industry, we're all looking forward to the hacker cons, which is, you know, DEF CON, Black Hat, BSides and all the related activities in August.

D. Mauro (50:21.23)
Yes.

Steve Orrin (50:24.336)
In the AI space, CDOIQ is coming up this fall, which is the major chief data officer event. And then there's also some really good content coming out this fall from Spark AI, which is a new consortium of industry, government, and academia coming together to try to do things like, you know, how do we provide some common ground on governance and proper use, as well as from other organizations and think tanks that are really starting to think about

D. Mauro (50:48.43)
Excellent.

Steve Orrin (50:53.232)
what is the right guidance for ethical use that can be practically implemented. So I think over the next six months we're going to see a move away from the "you should do good" to actual practical guidance on how to do good, and where to spend your time and energy, to actually help organizations get there. We've had the last year of sort of, "we need something, we need to have a mandate." Now we're seeing the implementation: NIST has put out guidance, we're seeing

Mark Mosher (51:05.766)
Right.

D. Mauro (51:06.398)
Right. Exactly.

Steve Orrin (51:23.152)
NSF investing in organizations. So I think towards the end of this year we're going to start to see a lot of content being published to help organizations get on the right track or, you know, to optimize what they're doing today.

D. Mauro (51:26.254)
Yeah.

D. Mauro (51:35.854)
That is fantastic. We will have a link to your LinkedIn right there in the show notes so that people can stay up to date and informed on all of your guidance and insight. And we thank you so much, sir. Thank you so much for your time. Always a pleasure.

Mark Mosher (51:54.918)
Yeah, thanks.

Steve Orrin (51:55.312)
Thank you, David. Thank you, Mark. It was a pleasure. Thank you.

D. Mauro (51:56.59)
Thank you so much. See you guys.


Meet the INTEL CTO
The Importance of Trust and Transparency in AI
The Need for Diverse and Representative Data Sets
Creating Clear Policies on AI Usage
Addressing the Risks of AI Hallucination and Data Poisoning
Augmenting, Not Replacing: The Role of AI in Jobs
The Use of AI by Cybercriminals and the Challenge of Deep Fake Detection
Guidance for Organizations: AI Policies, User Education, and Collaboration
The Future of AI: Practical Guidance and Optimization