Artificial Intelligence Podcast: ChatGPT, Claude, Midjourney and all other AI Tools

The Crucial Role of Cybersecurity in the Age of AI With Terry Ziemniak

Jonathan Green: Artificial Intelligence Expert and Author of ChatGPT Profits, Episode 315

Welcome to the Artificial Intelligence Podcast with Jonathan Green! In this episode, we explore the critical intersection of AI and cybersecurity with our special guest, Terry Ziemniak, a cybersecurity expert.

Terry delves into the importance of understanding and managing the risks associated with AI technologies. He discusses the shared security model, which underscores the responsibilities of both AI providers and users. Terry emphasizes that while AI tools can bring significant benefits, it’s crucial to implement proper data governance and risk management strategies.

Notable Quotes:

  • “It’s easier to see the value in the gold rush than the potential risks coming behind it.” - [Terry Ziemniak]
  • “In this very quickly evolving world of AI, the vulnerabilities may be a big unknown.” - [Terry Ziemniak]
  • “You are not gonna stop the stealing of Jonathan Green’s book… The question is how do you stop the misuse on the backend?” - [Terry Ziemniak]
  • “How secure does security have to be? Nobody really needs to hit that highest level… What’s the right level of security for us?” - [Terry Ziemniak]

Terry also shares insights on how small and mid-sized companies can approach cybersecurity by classifying their data and understanding their specific risks. He provides practical advice on balancing security measures with operational efficiency and the importance of staying adaptable in a rapidly changing technological landscape.

Connect with Terry Ziemniak:

 • Website: https://www.techcxo.com/

 • LinkedIn: https://www.linkedin.com/in/terryziemniak/

Connect with Jonathan Green

Jonathan Green 2024: [00:00:00] Cybersecurity is just as important as ever before in the world of artificial intelligence, and that's the topic for today's expert guest, Terry Ziemniak.

Today's episode is brought to you by the bestseller ChatGPT Profits. This book is the missing instruction manual to get you up and running with ChatGPT in a matter of minutes. As a special gift, you can get it absolutely free at artificialintelligencepod.com/gift, or at the link right below this episode.

Make sure to grab your copy before it goes back up to full price.

Are you tired of dealing with your boss? Do you feel underpaid and underappreciated? If you wanna make it online, fire your boss, and start living your retirement dreams now, then you've come to the right place. Welcome to the Artificial Intelligence Podcast. You will learn how to use artificial intelligence to open new revenue streams and make money while you sleep.

Presented live from a tropical island in the South Pacific by bestselling author Jonathan Green. Now here's your host.

Now a lot of people are so excited about new technologies that they never think about the risks, and [00:01:00] we get really excited and think, oh, everything will figure itself out. And a lot of people have this mindset of, OpenAI is gonna figure it all out. They'll handle security. If I get in trouble, they'll cover me.

And we've already seen that's not really the case. They do say they'll cover you if you get in trouble for something like copyright infringement, if you accidentally write a blog post that's plagiarized, but that's the only thing they've talked about. We already saw a lawyer get sanctioned last year for bad data. Why do you think people get so excited by technology while the concern follows so far behind?

Terry Ziemniak: Morning, by the way, Jonathan. I appreciate you inviting me on your podcast. People are always chasing what's new, the latest, the trendy thing. And AI is certainly all of those things. It's easier to see the value in the gold rush than the potential risks coming behind it.

But it was interesting, you mentioned early on, Jonathan, you were talking about AI, ChatGPT, [00:02:00] OpenAI is gonna protect us. They're gonna build the security controls in, and the users can just run willy-nilly over this thing, and the technology will protect itself. Let me take you to an analogy.

If you remember when cloud technology got real popular 10 years ago, everyone was jumping on AWS, Google, Azure, and whatnot. They introduced a concept which I think applies here in the OpenAI space, and that concept is called the shared security model. So when you're in the Azure space and you're using the Azure managed database server, for example, Microsoft manages the power, the basic network stuff.

They do patching. They do three quarters of what you'd consider the technical security space, but they don't do it all. As the consumer of that solution, I take partial responsibility for security. So particular to that example of the managed database: as the consumer, I'm responsible for people.

I'm responsible for the code on top of it. So in that giant stack of security, Microsoft does a good [00:03:00] chunk of it for me, but I still own part of it. That same concept applies in the AI space. As these folks with these really great AI tools are evolving, they're gonna start building more and more controls and protections in, and they're gonna get you

half, two thirds, 80% of the way to where you'd expect to be from a security perspective. But as the owners and users of the technology, we're always gonna have responsibility. Right now, that responsibility is probably in the user access space. Can Jonathan run as administrator?

Can Terry run as a user? Does Terry have a password? Does Terry have multifactor? Those sorts of things. Also, today we're dealing with the data governance concept. Should Terry be posting data up there? Can Terry send data this direction? So the answer is that it's always gonna be a shared security model; be aware that you, as the owner and consumer of that technology, know your responsibilities and make sure you're accounting for those.
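
For readers who want to see that split laid out, here's a minimal sketch of a shared-responsibility matrix; the exact line items are illustrative, not any provider's official split, so check your vendor's own shared-responsibility documentation:

```python
# Sketch of the shared security model Terry describes, as a simple
# responsibility matrix. The split below is illustrative only -- the real
# division depends on the provider and the service tier you buy.
RESPONSIBILITY = {
    "physical power and network":       "provider",
    "hypervisor and OS patching":       "provider",
    "database engine patching":         "provider",   # managed database example
    "user accounts and MFA":            "customer",
    "application code on top":          "customer",
    "data governance (what goes in)":   "customer",
}

for control, owner in RESPONSIBILITY.items():
    print(f"{control:35s} -> {owner}")
```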

Jonathan Green 2024: Let me give you another [00:04:00] example that I'm thinking about. The big AIs are built by Google. They're built by OpenAI, and OpenAI is 49% owned by Microsoft. Now, they say they don't keep your data. What do you think the odds are that that's a hundred percent true? Because Google's entire business model is built on putting ads in your email, reading your emails, basically gathering as much as they can.

And Microsoft was founded on it too. The very first version of MS-DOS was stolen code. It was built by another guy who was away on a trip, so they took credit for it. So that's their foundation at a core level, right? The very first thing they ever did that started their company, MS-DOS, stolen code. What are the odds that they're gonna go, you know what, we promise? Because they say they promise.

They're not touching your data. They're not looking at it. That if a competitor who is building a product they're interested in uploads their data, they're not gonna take a little peek under the skirt?

Terry Ziemniak: That's a tough one. I would say it falls in the realm of trust but verify. The big players like Microsoft, Google, and [00:05:00] AWS, they've got regulators all over 'em, and there's contractual protections, and Amazon doesn't wanna go bankrupt, and they get dinged by Europe all the time.

So trust but verify, in the sense that if it's my personal vacation diary I wanna post up, that's one thing. If it's my corporate intellectual property, you may be leery about it. So I think it's risk management as well, of what you put up there. And the risk management really is a corporate concept more than a personal Jonathan concept.

That goes back to the responsibilities of the companies. The company should be thinking through and setting those governance models of, yeah, you can send, I don't know, new marketing information up to ChatGPT and do the whiz-bang stuff it's gonna do, but don't send up our financial records and don't put business plans up there.

There's always a risk and you gotta protect yourself as a business. So it's, yeah, trust but verify would be my take on that. [00:06:00] 

Jonathan Green 2024: That's what I wanted to get into next, which is that the government has levels of security, like secret, top secret, eyes only. And larger companies have that, right?

Coke's highest level is their secret recipe, right? Hidden in the vault. That's their highest level of security. Same thing with KFC, right? We're not gonna say what the 11 herbs and spices are; we'll just tell you there's 11. A lot of smaller companies, I think, don't have this idea of this is our secret sauce, this is our middle tier, this we really don't care about, separating it that way.

And how would you advise when you come in? Because you're a cybersecurity expert, you come in as a fractional expert. If people don't have any policy at that starting point, how do you say, here's how to divide into categories? Is three categories the right number, or should it be five?

What's a good starting point?

Terry Ziemniak: Yeah, I tell you, that's a great point, because when I work with small, mid-size companies, almost all of them struggle with the most foundational part of cybersecurity, which is really: what [00:07:00] is your stuff? Your data; your people, employees, contractors. What are your contracts, your vendors, your partnerships?

What are your devices? What are your networks? Just tracking all your stuff is difficult, and you gotta put effort in to make it work, but it's foundational for your cybersecurity. One particular aspect, especially with AI becoming so popular so quickly, is your data. Where is your data?

What is your data? Generally that's called data governance. And the idea being, to your point, you're gonna classify your data. What's our secret sauce? What's really important and what's not that important? And then based on that, what are the expectations? Maybe it's medical records; it's governed by HIPAA and we must meet those requirements.

Maybe it's credit card information, and that has, not regulations, but contractual protections. Maybe it's just employee information, and it has state law expectations around personal identity theft. Maybe it's business plans with no outside regulations or expectations, but we gotta protect our business stuff.

Yeah, absolutely. You wanna think through the [00:08:00] process of identifying and classifying your data. Three tiers is not a bad number. For my smaller clients, honestly, I actually go with just two: things we care about and things we don't really care about. And the things you care about would be, again, regulatory.

So your credit cards, your medical information, your consumer information, the stuff that you have expectations to protect. And what I do, Jonathan, I always make that a single tier, because for small companies it's hard to manage a really high level of security expectation alongside a separate, moderately high level of security expectation.

So I just bump it all up. Everything is gonna be fully encrypted, the backups are the same, and encryption in transit is gonna be the same. So honestly, I split it into two tiers. But that being said, it does take time to think through those tiers, and you gotta go find the stuff. So again, maybe a word about business data.

Where is it? Is it on Jonathan's phone? Is it [00:09:00] in Terry's personal Google Drive? Is it in email? Is it in Dropbox, on backup tapes, all over the place? Identifying the stuff and finding it, those are really the foundational concepts that make your overall cybersecurity program work, and they're definitely required to manage this AI risk.
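
To make Terry's two-tier approach concrete, here's a minimal sketch of a classification check in Python; the category names and control flags are illustrative assumptions, not rules Terry prescribes:

```python
# Minimal sketch of a two-tier data classification check. The categories
# and control flags are hypothetical -- adapt them to your own inventory.

# Tier 1: anything with regulatory or contractual expectations gets the
# full set of protections; everything else falls into Tier 2.
TIER_1_CATEGORIES = {"medical", "credit_card", "employee_pii", "business_plan"}

TIER_1_CONTROLS = {
    "encrypt_at_rest": True,
    "encrypt_in_transit": True,
    "encrypted_backups": True,
    "allow_external_ai_tools": False,  # e.g. don't paste into public chatbots
}

TIER_2_CONTROLS = {
    "encrypt_at_rest": False,
    "encrypt_in_transit": True,
    "encrypted_backups": False,
    "allow_external_ai_tools": True,   # public marketing copy, etc.
}

def controls_for(category: str) -> dict:
    """Return the security controls expected for a piece of data."""
    if category in TIER_1_CATEGORIES:
        return TIER_1_CONTROLS
    return TIER_2_CONTROLS

print(controls_for("medical"))         # full protections
print(controls_for("marketing_copy"))  # relaxed
```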

Jonathan Green 2024: I think that's really good: first figure out, okay, what do I actually need to protect? Now I have a sense of it. A lot of companies say, I'm too small. A lot of small operators go, I'm a one-person operation, I'm a three-person operation, why would someone hack me? Why would someone attack me? Especially because it seems like computer viruses have disappeared.

My very first job was in IT many years ago, when I was 18 in the late nineties, and that was when the ILOVEYOU virus came out. You'd open that email. I can't imagine anyone opening it now, but probably they still would. And we had one guy who did open it twice, which kind of blew my mind.

I was like, nobody loves you. Just stop. [00:10:00] But that was what we were all worried about, and we would all have antivirus stuff on our computers. Then I switched to Mac and everyone's like, they don't write viruses for Mac. I was like, are you sure about that? I don't know. And you wonder, do I need a firewall or not?

And it was really the sense of personal attack, mostly the prankster style of attack, which was, let's see if we can crash a computer, let's see if we can destroy someone's data. Now it's switched. There's a lot more of the hold-your-data-for-ransom version of that, and more attacking of companies, and people have figured, why hack one individual when I can hack a company? And this was never more obvious to me than when people talk about website security. When I first started putting up barriers on my website, I thought, come on, I have a tiny blog, who's gonna attack it? Then the reports started coming. I had to turn off the reports 'cause I was getting so many; it was attack after attack.

I recently had a major attack on one of my sub-websites, just because someone figured out the username. As a basic security precaution, since a standard blog gives everyone the username admin [00:11:00] or user, I always change it to randomized characters, so it's effectively two passwords. But there was a mistake in the settings that meant you could see the username.

So I went back in, had to set up a new username, and set up 2FA. They're still attacking the old username, which means I found that security hole, so at least there's that, but the attacks come all the time, like constant lockouts. Even a small site, not even my main site, a subsite that I'm transitioning away from, I still don't want it to get hacked.

There's a whole bunch of processes for website security that I've had to evolve. I take a backup once a week. I keep a series of backups. I have a second backup system. All of these elements are in place because we know that sites get attacked, but people say, I'm too small, why would anyone attack me?
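
For anyone who wants to copy that backup habit, here's a rough sketch of a weekly backup rotation; the paths and retention count are hypothetical placeholders:

```python
# Rough sketch of a weekly backup rotation like the one described above.
# The paths and retention count are hypothetical -- adjust for your site.
import shutil
import time
from pathlib import Path

SITE_DIR = Path("/var/www/mysite")    # what to back up (assumed path)
BACKUP_DIR = Path("/backups/mysite")  # primary backup location (assumed)
KEEP = 8                              # keep a series of recent backups

def weekly_backup() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y-%m-%d")
    # Create a compressed archive of the whole site directory.
    archive = shutil.make_archive(str(BACKUP_DIR / f"site-{stamp}"),
                                  "gztar", root_dir=SITE_DIR)
    # Prune the oldest archives beyond the retention window; ISO date
    # stamps mean lexical sort order is also chronological order.
    for old in sorted(BACKUP_DIR.glob("site-*.tar.gz"))[:-KEEP]:
        old.unlink()
    return Path(archive)

if __name__ == "__main__":
    print("Backup written to", weekly_backup())
    # The second, independent backup system the host mentions (an
    # off-site copy) should run separately -- not shown here.
```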

And what I see as the problem with AI is that it's a new vector that we haven't really thought about. There are a lot of AI companies, for example, that pretend they're an AI when they're just a front end for ChatGPT. So you don't know what you're putting between you and ChatGPT, not really, 'cause they're just using ChatGPT's [00:12:00] API and they don't have

a governance policy. They're not checking how you're using the API, not really. So you might be running through a man in the middle pulling all the data from people along the way. We recently saw there was a Facebook page that pretended it was associated with a big AI company, and they put a virus on about a million computers.

That Facebook page just came out in the news today. I think it was a fake Midjourney or fake ChatGPT page. It's happening very quickly. This new vector is new territory, and when there's new territory, some people think of good ways to use it and a lot of people think of bad ways to use it. What are some basic ways that you tell your clients, or tell people, here's how to approach a new technology and here's a way to put up some barriers?

I'll give you my example first. When I talk to a client or someone who's asking about security, I say, here's how you can have a secure AI. You run it on a laptop. It's air-gapped; you set up a local AI. At the end of the day, you run a magnet over it and then you blow it up. That's the most secure version, because the AI's never connected to the internet and [00:13:00] then you destroy it at the end of the day. Nobody ever wants to do that.

Nobody ever wants to destroy a laptop at the end of every day, even though that way you can't have someone break into your building or do social engineering and steal the laptop, 'cause that would be a vulnerability. We've all seen the person who accidentally left secret material in the car, went to Starbucks, and it got stolen.

So nobody wants to do the highest level of security. So what is the right level on that spectrum between not paying attention to security and destroy-your-laptop-every-day security?

Terry Ziemniak: Yeah, it was interesting you used the phrase that this is the most secure model. And that's interesting because, to me, it triggers the conversation of:

how secure does security have to be? Nobody really needs to hit that highest level. If you're the government or something, you may have really high requirements for cybersecurity. Most people don't need that level of security. So when I talk to my clients, I use the phrase, what's the right level of security for us?

Then, how do we get there? So particular to the use of [00:14:00] AI technology, how secure you have to be goes back to what we talked about: data governance. If you are just putting marketing copy from a website into ChatGPT, do it all day long. Nobody cares; it's data you're not worried about. If you're putting medical data up there, shame on you.

You shouldn't be doing that anyway. There's different ways to use it. So it's really based on, in this case, the data classification. If it's data we care about, you gotta consider the security within the generative tools that you're using. So again, I hate to keep saying it, but it really comes down to governance.

You think through: what data are you using? What are you trying to get out of it? The problem is, with most small companies, I don't know that they (a) ask the right questions and (b) really have the technical expertise to do anything beyond ChatGPT. And AI is percolating down into all these consumer-level pieces of software, Copilot in Microsoft Office, for example.

It's just built in now. You can license it, and away you go, and you're using ChatGPT [00:15:00] within Microsoft Office, which is great, a lot of value. But if Jonathan the analyst isn't thinking about it, and he's putting his financial stuff into Excel and hits the button, boom, all that stuff goes up to the ChatGPT space.

So I would suggest companies take a couple of hours and have a session with the right folks to think about: what are our rules around AI? Where can we use it, where can we not use it? What's the data we're gonna use? What are the risks we're worried about? In a couple-hour session, you can think through and manage this risk, because it is a risk.

This AI risk is similar to your cyber risk and your financial risk and your competitive risk. You're not really gonna drive these down to zero. These are business-level conversations you have to have. These are great tools and can do great things, but you gotta manage the risk associated with them.
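
One way to capture the output of that couple-hour session is as an explicit allow-list that defaults to deny. A hedged sketch, with made-up data categories and tool names:

```python
# Illustrative sketch of the rules an "AI usage" session might produce,
# written as a simple policy check. The rule set below is hypothetical.
ALLOWED = {
    ("marketing_copy", "public_chatbot"): True,
    ("financial_records", "public_chatbot"): False,
    ("business_plan", "public_chatbot"): False,
    ("financial_records", "local_model"): True,  # never leaves the machine
}

def may_use(data_category: str, ai_tool: str) -> bool:
    # Default deny: if the session never discussed this combination,
    # treat it as out of bounds until someone decides.
    return ALLOWED.get((data_category, ai_tool), False)

assert may_use("marketing_copy", "public_chatbot")
assert not may_use("financial_records", "public_chatbot")
assert not may_use("medical_records", "public_chatbot")  # never discussed
```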

Jonathan Green 2024: Let me ask about a different type of technical mistake. I'll give you an example. I recently posted a question on LinkedIn, which was: [00:16:00] if I post a tweet that's offensive, but then I say, hey, ChatGPT wrote it, I posted it without reading it, does that mean I'm not responsible anymore? 99% of the people said, no, it's still your fault.

One person said, yeah, it's not your fault. And I was like, oh boy, that guy's gonna be in trouble when he makes that mistake. It's the new version of the dog ate my homework, right? There's a lot of people posting content without reading it, which I think is another form of risk. You can accidentally create content you didn't check;

this is what happened to the lawyer who published a paper with content he didn't double-check. There's different versions of this. We've also seen a lot of celebrities get in trouble for a tweet, and I'm like, probably their friend or a VA is the one running their Twitter. It's probably not them.

I have met a lot of people who don't realize that politicians don't write their own books. I've been ghostwriting for 10 years. I was like, you think someone running for president has time to write a 600-page book? It's definitely someone else. That's an entire job. So sometimes we don't know about that.

But it doesn't matter if someone in between did it, 'cause the person we blame gets the blame. So there's that possibility. Now [00:17:00] another security concern is publishing content where you say, oh, I have a superpower. I get really worried when people say, I can replace myself, because that means they're not doing oversight.

That means, oh, this does the work and I trust it to output it. And we can see in all of our social media feeds that there's a lot of content where the quality's dropped. That's why everyone talks so badly about the LinkedIn feed. It's so hard for me to find anything worth commenting on on LinkedIn, and I try to be active for that reason.

It's all AI-generated content. What are some ways people should start thinking about the content they're generating and the risk it could cause? There was recently a company in England that had to turn off their chatbot because it was saying, yeah, we're the worst company in England.

The chatbot turned against them when it was talking to a customer. And that's another thing: whenever you take your eyes off an AI, it's just like when you take your eyes off an employee, right? If you just go, oh, you're my employee, but I'll never check your work, what's gonna happen? So how should people approach that part of security,

the part where a mistake can also damage your brand or company in a major way? [00:18:00]

Terry Ziemniak: Yeah, that goes back to my previous answer. It's a risk consideration. So say you have a new use for AI: we're gonna start spitting out social media content. Okay,

viable solution to a business problem. We can drive the cost down, we can do all sorts of neat things here. What's the risk behind using AI to automatically generate our social media stuff? The risk is it could go south and make inappropriate comments, whatever it may be. That's a risk. How are we gonna manage that risk?

Whatever the solution may be, the point is you gotta learn the lessons other people are tripping on. See what things you have to be worried about, and build what they call a risk register: think through what are the top five or ten things that could bite us in this space.

Yeah, I'm seeing that more and more often. I know there was an issue with an airline a couple weeks ago; their [00:19:00] chatbot was making incorrect summaries of some policies. So this guy said, hey, your chatbot said I can get a full refund, I want my money back. I would think it goes kinda along the same lines as with employees.

You really want the idea of empowering your staff, and I work with a good number of smaller companies, especially in the cyber space. I wanna empower this chatbot. I wanna empower my developer, Jonathan, to do his work. I'm gonna set the guidelines, and if Jonathan or the chatbot is running inside that space, go crazy.

If you're outside the space, we gotta put the brakes on. We gotta double-check what's going on. So I would frankly deal with it with a model like that. ChatGPT can spit out the social media, but maybe an intern reviews it, or maybe you have another tool read it for, I don't know, inappropriate comments and whatnot.

Tools like that are starting to pop up. But establish the guidelines once you understand the risks, and let it work within those guidelines.
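
Here's a hedged sketch of that guardrail pattern: AI-generated drafts publish only if a separate moderation check passes. The checker shown is a toy stand-in for whatever review step you choose (a moderation API, a keyword list, or a human queue):

```python
# Sketch of the "second tool reads it first" guardrail Terry describes:
# AI-generated posts only publish if a separate moderation check passes.
# `moderation_flags` is a stand-in for whatever checker you actually use.
from typing import Callable

def publish_with_gate(draft: str,
                      moderation_flags: Callable[[str], list[str]],
                      publish: Callable[[str], None],
                      escalate: Callable[[str, list[str]], None]) -> None:
    flags = moderation_flags(draft)
    if flags:
        # Outside the guidelines: stop and put a human in the loop.
        escalate(draft, flags)
    else:
        # Inside the space Terry describes: let it run.
        publish(draft)

# Toy checker: flag drafts mentioning terms the policy session banned.
BANNED = {"full refund guarantee", "worst company"}
toy_checker = lambda text: [t for t in BANNED if t in text.lower()]

publish_with_gate("Big spring sale this week!", toy_checker,
                  publish=print,
                  escalate=lambda d, f: print("HELD for review:", f))
```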

Jonathan Green 2024: I'm glad you brought up customer support, because that's another area I'm thinking about. For a lot of people, the first thing they say to me is, I wanna fire my customer support team and replace 'em with a chatbot.[00:20:00]

And I always say, have you ever had a positive experience with someone else's customer support chatbot? No one's ever said yes. And I say, when someone reaches out to customer support, they're usually already at least a little bit upset. So you're taking someone in a negative situation and you're putting a bot in place.

That's risky, because sometimes it goes off track. And then I ask my second question: will you give the chatbot the ability to authorize a refund and actually process a refund? No way. So that means whoever I'm talking to doesn't have the ability to solve my problem. And that's why everyone hates chatbots in customer support.

And I always say, listen, you can put a chatbot in the sales process. That's fine, 'cause that's a happy person and you're building up rapport. If someone's already upset, it's different. I used to work at a Fortune 50 company, and I know the reason they moved customer support to India was 'cause 10% of people, when they hear an Indian accent, will hang up, and the company gets to keep all that refund money.

And I go, wow, that's dark. [00:21:00] And I only found that out 'cause my friend was head of one of those departments. He's like, yeah, it's crazy up here. So companies sometimes design strategies specifically to push people off. That's why the phone tree exists, 'cause people will hang up, and it's always about saving money.

But I always say, and this is my thought, I wanna hear yours, that there's a balance between giving the customer a good experience and cutting costs as much as you can. When you get rid of customer support, there's a risk of a bad thing happening on the other side of that, which is an unsatisfied customer.

Instead of a refund, it turns into a chargeback, which is a different problem, and all these other things can happen, because the customer, by the time they get to you, is a lot more upset. It's like saving pennies to lose dollars. What do you think about this part of the process, that when you're making a decision calculus, it's not always about saving money?

I know it's not specifically a cybersecurity risk, but it is a technological risk.

Terry Ziemniak: I think it comes down to this: AI is a really neat tool, [00:22:00] and it can do a lot of things, maybe not all of them well, but it certainly has a lot of capabilities, and I know companies are playing with seeing how they can best leverage it to improve their business, whether that's cost or efficiency or whatnot.

But like any other technology, like cloud technology and all the wearables that came out recently and self-driving cars: is it the right tool for our problem? If you start with a tool and then try to slot a problem into it, you're gonna be in trouble. So it's like any other technology. Again, if you're looking for a problem to solve with your technology, you're in trouble.

You really need to know your problems ahead of time, and then see if the technology is the right solution to the problem.

Jonathan Green 2024: So I personally know of seven or eight ways to jailbreak any AI. So if you have any company data that's inside of your AI, I can get it. And I'm [00:23:00] not, that's not even my area of expertise.

I just happen to follow some cybersecurity people in that space. So that's one of the issues with AI, and there's a couple of ways. For example, you can send it a picture that has data encoded in it, and it will immediately break through. That's just one way. I don't wanna get too specific, 'cause I don't wanna give away all these crazy things.

I don't cover these very often, but other people do. So just realize that's the reason you don't wanna give the chatbot the power, 'cause just like you can socially engineer a person, you can socially engineer an AI.

Terry Ziemniak: Yeah.

Jonathan Green 2024: Which is a new attack vector that people haven't even thought about.

When I teach people, I say, before you start using AI for that, I like to use AI for smaller stuff, like organizing, like sorting. There's a lot of studies that have shown employees spend a huge amount of time dealing with email every day. I use an AI to help me organize my email. Really simple, really small.

It's run locally. There's another area of AI that's very interesting. [00:24:00] People in information or knowledge businesses like mine spend 20% of our time looking for files. I learned that last week when I was talking to someone on the last episode, and I thought, you're right, that is true. That's definitely true for me. So an AI to help you sort and organize your files makes a lot of sense.

'Cause it's not customer-facing, it's internal. It doesn't need to have public access, because it can be run locally on your machine. These are the areas where I always tell people to start, rather than jumping in between you and customers, or between you and your bank, or these other areas that are exciting.

There's always a risk with any technology that's touching the internet. I remember in the late nineties, when I was working at larger companies, we had an intranet, meaning no one inside the building could actually access the internet from the company computers. Those days seem to be gone, but it was a really good security measure: there's a firewall, or you would have two computers, one with the internet and one without.

That security measure seems to have faded away, and I'm not sure why, 'cause it seems [00:25:00] like it's probably the smartest version of this, right? Our network is secure, so then we just have a team protecting the fence. What do you think, when people are thinking about implementing these technologies, about these vulnerabilities, the unexpected things?

'Cause it's like the ten biggest losses in Las Vegas were all things that no one had insurance for, right? Siegfried and Roy had insurance for the tiger attacking the crowd, but not for it attacking Siegfried and Roy, so they hadn't insured the one thing that happened. We insure all the things we expect; it's the unexpected, the black swan event.

So how can someone prepare for these new areas when they haven't even thought of them? There's probably half the people listening who didn't even know you could jailbreak ChatGPT, and I've blown their minds by accident.

Terry Ziemniak: That goes back to risk management, and in cases like this, I like to remind folks: what really is risk?

There's formulas out there, and if you ever do your cybersecurity training and certifications, they'll talk about different formulas. But the most fundamental is: risk is [00:26:00] impact, how big is the issue, what is the issue, divided by countermeasures, meaning your protections.

The more protections, the less risk. The more vulnerabilities, the more risk. And the bigger the impact, the more risk. Think about the impact aspect of that. If you had a ChatGPT which was totally hacked, all the data's gone, but you had data you didn't care about, it's very low impact. So in this very quickly evolving world of AI, the vulnerabilities may be a big unknown, 'cause they're gonna pop up all the time.

The amount of protections you can apply is gonna be evolving as well, but you can control the impact aspect: piloting with small-scope data, the data classification we talked about, using data that you've classified as not really important in your pilots. Drive down the risk by using less risky data.
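
As a toy illustration of that intuition, risk scales with impact and vulnerability and shrinks with countermeasures; the 1-to-10 scales and the exact formula below are illustrative assumptions, not a standard from any certification body:

```python
# Toy illustration of the risk intuition Terry sketches: risk grows with
# impact and vulnerability, and shrinks with countermeasures. The scales
# and the exact formula are illustrative assumptions only.
def risk_score(impact: float, vulnerability: float,
               countermeasures: float) -> float:
    """Each input rated 1 (low) to 10 (high)."""
    return impact * vulnerability / countermeasures

# AI pilot using throwaway marketing data: vulnerabilities are a big
# unknown (rated high), but low impact keeps the score small.
print(risk_score(impact=2, vulnerability=8, countermeasures=4))  # 4.0

# The same pilot pointed at medical records: impact dominates.
print(risk_score(impact=9, vulnerability=8, countermeasures=4))  # 18.0
```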

'Cause you are right, there is an unknown [00:27:00] area of this AI and how you're gonna manage it. But stepping back, you mentioned earlier the idea of where do you start with AI. You've got the office users, and then you've got the public-facing stuff. I would intentionally break those into two pieces.

It's relatively easy, and it's consumerized these days, to get AI to your office workers. Again, it's built into Office. There's a bunch of tools out there, good or bad, and you still have to plan, but it's becoming relatively clear how to manage that. You can jump on that immediately once you've established your guidelines and your boundaries for how to do it correctly.

But I would make that distinct from customer-facing, which is a much more business-related sort of application of AI. That again can be riskier, 'cause it may be public-facing, plus it's typically less mature, perhaps, than consumer-oriented AI. Your Copilot built into Office 365,

Microsoft's been banging on it for, let's say, a year, and it's pretty sound. If [00:28:00] you have a public-facing chatbot, it may not be as polished and as mature, so you don't have the maturity level behind that security solution. That's another reason your public-facing stuff may be more risky: it just hasn't baked as long, and all the threat modeling and all the cybersecurity work you can do to drive down the risk is not as mature as it would be in a consumer product.

Jonathan Green 2024: So there's another vector I wanna talk about. This is something that's specific to me: people who are creatives, authors, artists. Their response right now is that there's a lot of people suing ChatGPT and suing Midjourney for stealing their data. Take George R.R. Martin, for example: someone used ChatGPT to finish his series.

They wrote the final two books of A Song of Ice and Fire. And I thought, that's a really good idea. My feeling is, you haven't written a book in 13 years; you're probably [00:29:00] not gonna finish the series, like you've probably stopped. The guy didn't sell it or publish it; he just wrote it for himself and, I guess, talked about it online.

And so that's why they're suing ChatGPT. Now, ChatGPT has read my book without permission. So if you say to ChatGPT, write in the style of Jonathan Green, it will write in my style, which surprised me, because I'm about the fourth or fifth most famous Jonathan Green author. There's the guy who wrote The Fault in Our Stars.

There's a science fiction author. There's a couple of others. But it picked me, and I only found out when I was doing a live demo and I said, this won't sound like me, but watch this. And then it did. And I go, which Jonathan Green are you using? So it's read my book. Why? Easy: it scans the internet. Every book ends up as a PDF on some Russian website.

That's probably how it ran into it. I doubt it stole it from Amazon's servers, 'cause it doesn't need to. I look at this and I think, is that my secret sauce, or is there more to me? And my mindset is, I think about the Napster stuff 30 years ago, when everyone sued Napster and said, no more internet music.

That didn't work. [00:30:00] The internet is where all music is now. Everything's changed. No one buys CDs anymore. The business model had to adapt and change, and the artists who switched and adapted the fastest really succeeded. There were certain musicians who would take their song, rename it as every popular song, and upload it to those servers, so people would download it thinking it was the song they wanted.

That's how Soulja Boy got famous. I was like, that's so genius. He just kept taking his song and naming it U2-something or whatever band was popular, and people would download it and it would just keep being Soulja Boy. And I know people for whom that's how they first heard one of his songs.

So there's people who are fighting against it, and there's people who go, I think there's an opportunity here to get my name out there. And when I think about my own situation: yes, my book is in there. The thought of getting them to pull it out, I know, would be almost impossible for them. They'd have to retrain the entire model.

You're talking hundreds of millions of dollars just for me. They would never do it, and I don't wanna be part of a class action lawsuit. It's the same thing the Hollywood writers are doing: they think the answer [00:31:00] is regulation, or systems where you can stop a technology, and that never works.

So my thought was: I can use this. I had this version of me edit my book, and I edited my book faster than I've ever edited a book before. We did get into a lot of fights, and this is the reason I've never wanted a twin, 'cause it's like talking to yourself, with my personality. I was clashing with it, which is exactly what I expected to happen, because we both have strong-willed personalities. So with this new vector, where people are thinking,

my information, which is my special sauce, my writing style, my painting style, my drawing style, is what makes me special, and now that's getting pulled into these AIs, what do you think people should do? 'Cause my thought is that I'm more than just my style. I'm constantly innovating, constantly doing new things, and thinking, how can I stay ahead of this curve?

Because my data, my image, will constantly get pulled in there, 'cause I'm a public figure.

Terry Ziemniak: Yeah, that's a good thought. As you wrapped up that [00:32:00] story, it got me thinking about conversations we had about consumer identity theft. I was on a group that was giving feedback to the FTC on some stuff they were doing, and we were talking about Social Security numbers.

People are stealing your Social Security numbers. They're making a lot of money off it. It's a pain in the butt if yours gets stolen. And it kind of occurred to me: why is the Social Security number so important? We spend so much time trying to stop people from stealing the Social Security number.

Why don't we just stop the misuse of the Social Security number on the back end? You are not gonna stop the stealing of Jonathan Green's book, 'cause you're right: it's gonna be in Russia and they're gonna publish it. Anything that's publishable is gonna get stolen. I don't think you can stop the theft,

not without putting real serious boundaries around it, which consumers aren't gonna accept. I think for the foreseeable future, anything that's digital, just assume it's stolen. The question is, how do you stop the misuse on the [00:33:00] back end? And I think that has to be worked out.

But from your point of view, again, assume it's gone. What you decide is: can you get value out of it? Can you reduce the risk? Can you reduce the impact? I think the forward-looking people and companies are gonna get a lot of value outta this, 'cause they're gonna learn how to play the game, just like they adjusted to the internet.

They adjusted to the cloud. There's always pivots in our culture, in our technology, and in our business. However dynamic and fluid you are as an individual or as a business, you gotta pivot. It's just the way the world works. The world is not static. It's gonna change.

Learn what's going on and figure out how to make the best of it.

Jonathan Green 2024: So the last question I wanna ask you as we wind down, 'cause this has been amazing and I don't wanna take too much of your time, but I just had so many questions for you, is that right now every company is suddenly getting advice from their board of directors that says, hey, we need to do AI this year.

And then the CEO goes, what do you mean by ai? And they go, we don't know. We just know [00:34:00] we want it. Which is the last thing you wanna hear. It's like the worst possible thing you hear from someone. I want something, but I don't know what it means. This happened about 25 years ago when every company said they needed a website and you'd say, why do you need a website?

They go, I don't know. I just know I need one. And it was like my nephew told me I need one. My cousin told me, anyone, the board of directors, and first it was a MySpace page, then it was my own website, then it was a soc, a Facebook page, and it's, there's always, and then it was a mobile version of your website and now it's every company needs AI and they're not sure what it means.

There's this massive demand in the market for AI expertise. That's the area I'm in, where I tell people what to do and what not to do, what's possible and what's not possible. 90% of my job is telling people what tools not to use, or that tool's not AI. Most of what I do is about what I keep out, not what I keep in; it's what I don't recommend,

'cause there's a lot more stuff I don't recommend than I do. So there's this need, and everyone's talking about the development of the chief AI officer. A lot of companies are talking about it, and the salaries, because there are so few people [00:35:00] in that space, are spiking like crazy. And then when you ask, what does a chief AI officer do, nobody knows.

When you ask what level of expertise: if you talk to a CTO, they think a chief AI officer should basically be a CTO under another name, right? A super technical person who's programming themselves. If you talk to people from other directions, especially people who come from the consulting world, they say, no, a chief AI officer's real job is to tell you which companies to work with and which not to.

Their job is not to implement, but instead to develop strategy and then say, hire this agency, not that agency. Speaking of agencies, I went through an experience where a very large company reached out to me to have me look at their stuff and work on a project with them, and they forgot to send me the NDA and then canceled the project.

At first I was like, oh my gosh, I'm gonna learn all these amazing things. They're spending millions of dollars on R&D; they must know more about AI than I do. And no, they just charge at that level. They were charging millions of dollars, and everything in their secret deck is in my book. I was like, guys, you don't have anything in here that's [00:36:00] different, unique, or whatever, and you're charging a hundred times more than I do.

So what do you think the future is for that space? As someone who works in the fractional space, and who works with a lot of other people in your organization who are different types of fractional experts, what do you think is gonna happen? How is this idea of the fractional chief AI officer gonna shake out, and what does this role fill that's different than a COO and a CTO?

'Cause I think it's in between those two roles; that's where it fits in my mind, but maybe I'm wrong.

Terry Ziemniak: So I've worked in healthcare for a good bit of my career, and over the past 10 years or so, a lot of big healthcare organizations have created positions called the chief data analytics officer.

Healthcare has lots and lots of data, and they're trying to figure out how to get the most value out of it. The data analytics officer and the AI officer are pretty similar roles, and [00:37:00] really the idea is: how do we as a business best leverage this technology to solve our business problems and to continue to execute on our mission statements?

And that's what that executive C-level suite is supposed to do: set strategy, a lot of alignment, a lot of culture, working with the organization to make sure we agree on what we're doing and are heading in the right direction, overseeing roadmaps, those sorts of things. I definitely would not have the AI officer play in the CTO space.

The CTO has a very specific job: architecting, building things, and making it all work together. That's your CTO. You don't want two people fighting in that space, 'cause someone's gotta own how all the pieces fit together. And the CIO operates it all, makes sure it works, makes sure it aligns with the business need.

But frankly, I think the chief AI officer is something that'll spike and then probably disappear in 10 years, 'cause it's just gonna be absorbed into general data management, the CIO [00:38:00] strategy sorts of stuff. So I think it's gonna spike because there's a lot of interest and there's a big rush,

and again, everyone wants to be ahead of the curve, like we talked about. But it's gonna normalize, and it's gonna become part of a general business function, data management, so your CIO or maybe a chief data officer sort of position. Now, the advantage of the fractional model is you can bring in someone who's got that experience.

So if you're a mid-size company and you don't wanna spend 500 grand or more on an AI officer, there are people out there that have that experience doing it. I've been a fractional executive for about six years. Prior to that, I was actually a VP of cybersecurity for a couple of big companies across the US. So there are real fractional executives, and there are

people that just call themselves fractionals. In my mind, the difference is: do you actually have executive experience? People who can act like an executive, give you the strategy, give you the roadmap, give you the alignment, those sorts of things. I would think a great way to spend your money is to [00:39:00] get those fractionals with the experience:

the experience in the AI space, the experience as an executive. Maybe you can learn lessons from their previous clients. Having that experience, especially in such a quick-moving, dynamic space like AI, would be invaluable. So that would be a great use case for the fractional executive role.

Jonathan Green 2024: Because a lot of the CTOs I talk to, fractional CTOs, are trying to

slide over. And the same thing with a lot of programmers: a ton of programmers in the last year have changed their profile from programmer to AI engineer or prompt engineer. And the problem with the programmer mindset is that every nail needs a hammer, and their answer to everything is custom software, custom code.

And I always tell people, listen, the last thing you wanna do right now, when the technology is moving this fast, is build custom code. Because you build something custom against an API, and by the time you finish, it's three or six months out of date; they've updated it. For example, if you built something on Midjourney's API: they made an announcement three weeks ago.

They said, [00:40:00] hey, everything in the past no longer works, we've restructured our entire prompting language. Which means everything you spent six months building no longer works. And I had a small tool I built that went through that problem. And that's exactly why I think the chief AI officer is exactly that: figuring out which tools to use, which tools not to use,

developing the strategy, not the technical work of building your own software. Because why would you do that when companies are spending hundreds of billions of dollars and adapting so fast that by the time you finish something, the tools have changed? Like right now, this week, Claude is the better tool, but as soon as ChatGPT releases their next iteration, then that will be the better tool.

And then Anthropic releases a new version, or maybe Google releases a new version. And I just think this is the older mindset, that everything should be a custom solution. Anytime I've built a custom solution, I've always regretted it later on. It hasn't been necessary for my business, and then I find there's a solution that's 99% as good that's already publicly available,

one that doesn't require me to maintain and update it, because that's another additional [00:41:00] stress on your business. Now you have to have an AI engineer. And a lot of people claim to be real experts at AI programming, at that part of it, building an AI, and it's like, how is that possible? These tools have only been around for a year and a half at this level.

Anything before that was really machine learning. So that's my thought. I just wanted to get your take on that perspective, see if I'm in the right direction or if I'm crazy.

Terry Ziemniak: I do think you bring up a good point: it's hard to know who knows AI, whether you're a strategic AI person or a developer or an integrator or whatever it may be.

Again, things move so quickly, and I think that's where your chief AI officer can work, perhaps with HR, to identify: how do we detect the right people, the right skill set, to work in this space? Yeah, resources are gonna be a problem if you wanna push ahead and be leading edge in AI or anything.

Finding the right resources to support that is not easy, and it's not cheap either. That's a [00:42:00] great point. You gotta make sure you find the right people.

Jonathan Green 2024: This has been amazing. Thank you so much for giving us so much of your time. I have really enjoyed this. Where can people find out more about you?

Do you want them to follow you on LinkedIn or visit your website? Where's the best place for people to connect with you, find out all the things you're working on, and see if maybe you are the right fractional person to help them take their business to the next level and address their security issues?

Terry Ziemniak: Yeah, thank you, Jonathan. You can find me at my company; I'm a partner with TechCXO. It's T-E-C-H-C-X-O. Look up Terry or look up security, and you'll find me in that space. We're a large consulting group, about a hundred fractional executives: cybersecurity, technology, AI. We have quite a stable of executives in there.

So yeah, absolutely reach out to me. Happy to chat with anyone.

Jonathan Green 2024: Thank you so much for being here. I'll put all the links below the video and in the show notes. And thank you guys for listening to another amazing episode of the Artificial Intelligence Podcast. Thanks for listening to today's [00:43:00] episode. Starting with AI can be scary. ChatGPT Profits is not only a bestseller, but also the missing instruction manual to make mastering ChatGPT a breeze. Bypass the hard stuff and get straight to success with ChatGPT Profits. As always, I would love for you to support the show by paying full price on Amazon, but you can get it absolutely free for a limited time at artificialintelligencepod.com/gift.

Thank you for listening to this week's episode of the Artificial Intelligence Podcast. Make sure to subscribe so you never miss another episode. We'll be back next Monday with more tips and tactics on how to leverage AI to escape that rat race. Head over to artificialintelligencepod.com now to see past episodes.

Leave a review and check out all of our socials.