Public Relations Review Podcast

Combatting AI Deepfakes: Strategies for Public Relations Professionals

Peter C Woolfolk, Producer & Host w/ Rebecca Emery Season 5 Episode 153

What do you think of this podcast? I would very much appreciate a review from you! Thank you!

Can AI deepfakes really cause global economic damage exceeding $1 trillion? Join host Peter Woolfolk on the Public Relations Review Podcast as he tackles this critical question with Rebecca Emery, APR, CEO of SeacoastAI.com and a leading expert in the field. Rebecca uncovers the alarming rise of AI-generated misinformation, shedding light on how bad actors are exploiting the accessibility of AI technology to spread falsehoods with devastating impacts. We also explore the double-edged sword of AI applications through examples like Ukraine's AI avatar Victoria Shi, which demonstrates both the positive potential and the inherent risks of AI-driven communication.

In this episode, we arm you with practical steps to detect and combat digital deception. Rebecca dives into tools such as Google Lens, Hive Moderation, and TinEye, empowering communicators to identify image and video alterations effectively. You'll also learn about the SIFT method—Stop, Investigate the source, Find better coverage, and Trace claims to their original context—which journalists use to verify information. We emphasize the broader role of educating the public on AI's ethical use, equipping you with the knowledge to respond to misinformation and deepfakes adeptly.

Finally, we underscore the necessity of education in recognizing and addressing digital deception. Rebecca shares frameworks like the ABC method—Actors, Behavior, and Content—to help identify threats, illustrated by real-world incidents like the Royal Family's altered photograph. We stress the importance of building robust resources for reporting and removing harmful content while staying informed about the evolving threats in AI technology. Don't miss Rebecca's expert insights on AI's transformative yet challenging role in public relations, and learn how to stay ahead in this rapidly evolving landscape.

We proudly announce this podcast is now available on Amazon Alexa. Simply say: "Alexa, play Public Relations Review Podcast" to hear the latest episode. To see a list of ALL our episodes, go to our podcast website: www.publicrelationsreviewpodcast.com, or go to Apple Podcasts and search "Public Relations Review Podcast." Thank you for listening. Please subscribe and leave a review.

Support the show

Announcer:

Welcome. This is the Public Relations Review Podcast, a program to discuss the many facets of public relations with seasoned professionals, educators, authors and others. Now here is your host, Peter Woolfolk.

Peter Woolfolk:

Welcome to the Public Relations Review Podcast and to our listeners all across America and around the world. This podcast is ranked by Apple in the top 1% of podcasts worldwide, so I say thank you to all of our guests and listeners for making this possible, and please leave your review for the podcast. We certainly like to hear from you. Now, as most of you know by now, artificial intelligence is front and center in many cases, including visual presentations. Some are exceedingly realistic. Unfortunately, many will use fake avatars and spread misinformation. So what do you do when you find fake AI presentations? Well, my guest today can help answer that question. She is Rebecca Emery, APR, who currently serves on the board of the Maine Public Relations Council as chair of the Professional Development Committee. Now, in November 2022, when ChatGPT was released, it caused most of us to take immediate notice and try to understand how this new technology could be helpful. Rebecca's fellow APRs, agency heads and PR colleagues kept asking her: what is this new technology? Is it safe to use in my role? Is it plagiarism? It was at that point that Rebecca made a career pivot and founded Seacoast AI in early 2023 to provide AI awareness, training and advisory services to the Seacoast area of New Hampshire and southern Maine. Her tri-state consultancy helps individuals, executives and small to medium-sized businesses embrace and harness AI technology's powerful business acceleration. So let me now welcome Rebecca Emery to the podcast. Thank you for joining me today.

Rebecca Emery:

Thank you for having me today, Peter, I appreciate it.

Peter Woolfolk:

Well, let's just ask the first question: just how pervasive is fake artificial intelligence, and how can we begin to recognize and combat misinformation?

Rebecca Emery:

You bet. So we're seeing that AI is essentially everywhere. It's on our cell phones, it's in our computers. We have the ability to talk to the smartest supercomputer in the world in natural language, and why wouldn't we? But the bad actors also have access to this technology.

Rebecca Emery:

I've seen estimates, Peter, that the global cost of everything from misinformation and disinformation to phishing scams could exceed $1 trillion by the end of this year.

Rebecca Emery:

But when it comes to AI deepfakes, misinformation and disinformation, about half a million deepfakes were shared globally last year, and that number is expected to reach about 8 million by the end of this year and to double every six months thereafter.

Rebecca Emery:

Now you ask yourself, how did we get here? And the interesting thing is this: the AI technologies are becoming more and more sophisticated, to the point where it's very difficult to recognize whether something is real or fake. To distribute a weaponized deepfake, bad actors can reach about 100,000 social users for about $0.07, and the cost to try dozens of these AI tools, the image generators and voice-cloning technologies, is free. So there's a real problem with the dollars and cents here, in the sense that many of these technologies are available in free, limited, basic versions, and it only costs a few cents to weaponize what's made with them. We're going to continue to see this grow over the coming years because, unfortunately, as much as AI has lots of good capabilities and positive benefits, the bad actors also have access to it.

Peter Woolfolk:

You know, just for contextual purposes, this conversation between you and me started a while back, when you showed me a video of, I believe, someone in Ukraine speaking for the government.

Rebecca Emery:

Correct.

Peter Woolfolk:

It was a real person. Tell me a bit about that, and then we can go from that into more real and fake AI presentations.

Rebecca Emery:

You bet, Peter. So I think what you and I spoke about a while ago was that the Ukraine government, basically the Ukrainian Ministry of Foreign Affairs, created a new AI avatar called Victoria Shi, S-H-I, and Shi was based on a known personality, very relatable, I think, to the local people there. But understand that AI can translate into multiple languages in real time, and here's a country that is in a wartime situation. They can't exactly call a press conference and say, we're going to bring all of our top ministry officials and consular officials together, and we're all going to be in this location at this time. I mean, that's just not practical in a wartime setting. So they created an AI avatar. Her name is Victoria Shi, and the idea is that the consulate will be able to put out messages with operational and verified consular information. They'll be able to do this publicly, on the ministry's official website and social media channels, and they can also broadcast information on emergency situations and other news.

Rebecca Emery:

And on one hand, we think about AI and there's a contrast here, right? Because we would expect our government to give us authentic information from a human being. But in this case, by using an AI avatar and a QR code that allows you to go back to the official statement, they're able to provide very timely updates. It doesn't take the officials away from their official business, and using an AI avatar also ensures that there are no missteps or on-camera slip-ups. Right? When the cameras are rolling, we often choose words that we go back to later and think, gosh, I wish I hadn't, you know, said it that way. But using an AI avatar allows the ministry to put out an official public statement, and the avatar can deliver it in multiple languages, from anywhere, at any time.

Rebecca Emery:

So, you know, I think there are some really positive benefits to something like that. However, it also sets them up for possible manipulation. It would be very easy, perhaps, to clone that avatar and put out what sound like seemingly official statements. So we haven't quite seen all of this play out yet. There's a city in Japan that did a similar thing, where they cloned their mayor so that this clone could then deliver timely messages in multiple languages. So I think we're starting to see AI come into the communications role in terms of its ability to deliver timely messages in different languages, but I think the jury's still out on whether this is going to be successful for them or not. It's certainly a bold move forward, though.

Peter Woolfolk:

Well, you know, I think it's a good idea to let people know that it also has other uses, because I would imagine some commercial firms can bring AI up to answer some basic questions, or maybe a wide range of questions based on whatever inventory, programs, services or products they offer, so that it at least lets the caller see that someone can respond to them in a reasonably intelligent fashion.

Rebecca Emery:

Correct. We know that AI needs a lot of data, and companies have a lot of data, so for them to be able to extend that into a more natural, relatable form, in the form of an avatar that can speak their language, is definitely the step forward we're seeing in terms of AI use for communication.

Peter Woolfolk:

Well, you know, it certainly has some benefits, because one of the nice things I like about it is that an individual, an actual person, can become an avatar with the technology. You know, I've done it myself here on some of the software we have: I load my photograph up, it does whatever it needs to do, I add some words to it, and I can see myself speaking. It might not be the correct voice, but the fact is that I am saying what I wrote. So companies can perhaps do the very same thing, and it adds a bit more, I guess, friendly outreach to their customers, rather than just typing something into a computer and having a few words come back to you.

Rebecca Emery:

It does. It adds a level of, I think, dimensionality, and it may get to the point where you could choose the avatar you most relate to, meaning you, the viewer, might be able to choose from multiple avatars and then have that avatar provide you with those messages or that information in a relatable format. I mean, some of the avatars are still a little clunky. The mouth movements are still a bit off. Even Victoria Shi's shoulder and head movements were a little robotic, and of course, part of that is the transparency around using an AI avatar.

Rebecca Emery:

But as the technology gets more sophisticated, I believe these avatars will become more and more realistic. We see them a lot with onboarding at new companies, so new employees might get an onboarding video that has an avatar in the corner that walks them through it. We see that with presenters, too; you're used to looking for that person in small format as they're speaking. And being able to choose different languages simultaneously, to say, well, I speak this language, so I'd like to receive that message in it, and to do that in real time, is pretty powerful. Again, AI needs all of that data, so it's a logical extension for AI in terms of being able to present a company's data in that more relatable format.

Peter Woolfolk:

Well, let's get back to those shifty, underhanded people now, the bad actors. They're there, and I guess we need to maybe help people understand how they can identify some bad actors, or at least begin to ask some questions. Is that information accurate? Yes, I saw it on the TV or on my computer; it came from an avatar. How can I go about determining if it's accurate or not? Let's help them understand the processes they need to go through to get that done.

Rebecca Emery:

Absolutely, Peter. I think it's important for communicators to first understand that when they see an image or a video, or hear audio, and they're not quite certain that it's real, we want to be able to evaluate, analyze and assess that threat. So first we want to use tools to investigate: Is this a deepfake? Is it a cheapfake? Is this coming from humans and trolls, or is this coming from digital bots? Is this disinformation or misinformation? All of those have different distinctions, and as a result, they will inform what we do, how we respond, and whether we publicly respond or not. As a first step, if you see something that appears to be digital deception of some format, the very first thing you can do is simply do a reverse lookup on Google. It's a powerful tool: you can go into the Google search bar and click on the little camera icon, it's called Google Lens, and put an image in there. You could put in a social media post, anything, and it will start to look up whether that has indeed been digitally altered.

Rebecca Emery:

There are more sophisticated tools. One I like is Hive Moderation, H-I-V-E. I think that's really an excellent tool for both images and multimodal content, so audio and video, and it uses multiple models to detect. And then, of course, if we're talking about content, there are ways you can look on Snopes, RumorGuard, Reddit and other sites to see whether this is a known scam. But if you're really unsure, go to AI. You can ask ChatGPT to analyze an image, and it will give you a pretty good idea as to whether something has the digital footprint, if you will, that might signal it was altered in some way.

Peter Woolfolk:

You know, hold on a second. That was interesting. You said that ChatGPT can help? Explain that a little bit more. It's the first time I'm hearing something like that.

Rebecca Emery:

Well, I mean, AI can recognize things, AI signatures. ChatGPT is a very powerful model, and it's multimodal, so you can upload handwritten notes and it will immediately put them into logical form. You can upload an image and ask it for a caption, or to describe the image and give you a Midjourney prompt. But we can also put in an image, for example, a social media image of a person who maybe wasn't at the event they're being depicted at. You could put that into ChatGPT and say, does this appear to be digitally altered? And it will at least analyze it for you and give you an idea as to whether it thinks it is. I think Hive Moderation is a better tool for that because it uses multiple models and cross-checks them. And then there are lots of other tools that can check for plagiarism, like Grammarly and Quillbot and so forth.

Rebecca Emery:

But, yes, you can start by just going to any one of the AI chatbots, putting an image in and saying, hey, does this appear to have been altered at all?

Rebecca Emery:

And some of these will actually show you the history. If you go to a tool called TinEye, that's T-I-N and then E-Y-E, it will actually show you all of the impressions of that image throughout its history. So if it's something that is going viral in the moment and there are, let's say, 100 impressions of it out there, it will show you each one as it finds them, and you can trace back to the very first one. And this is what helps us be sort of AI detectives: you want to try to find the origin of the misinformation or disinformation and understand whether it is being weaponized against you or the brand you represent, or whether it is part of a larger cause. And then you want to find out if this goes back to humans, or whether it goes back to a bot farm, where maybe a hundred cell phones are all being controlled by one master computer.

Rebecca Emery:

So it helps to go back, I think, and trace and then verify when you suspect digital deception.

Peter Woolfolk:

Well, let me just say right now, that is a spectacular amount of information you've provided to our audience on how to detect the fakes, if you will, the bad players in terms of AI presentations.

Peter Woolfolk:

I hadn't heard this before, and I'm taking notes on this. I think it's huge information that our listeners can certainly benefit from. And let me say this now: I'm certainly going to be promoting this episode with some of the AI videos that I produce, to let folks know just how important this is.

Rebecca Emery:

Absolutely. And you know, there's a method that's taught in universities and used by journalists. It's called the SIFT method: Stop, Investigate the source, Find better coverage, and Trace claims to their original context. So if you suspect that there is some kind of altered article out there, you can really just take a minute, investigate the source and trace it back to its original context, especially if it's a cheapfake or an altered image. The context and the image may be real, but part of it was altered. If you can at least go back and find the original, real reference, then you can put them side by side and say, this has been AI-manipulated or digitally altered.

Peter Woolfolk:

Now, I know you do quite a bit of speaking, but in terms of topics, I guess the best question is: are most of your talks about fake AI, or where do they land for your speaking engagements?

Rebecca Emery:

No, most of the time I'm educating and providing awareness and demystifying AI.

Rebecca Emery:

What is it, what is it not, and what can it do for us and what should we never do with it, and what are those best practices and guidelines for responsible and ethical use.

Rebecca Emery:

But as part of that, being a public relations career professional myself, I talk to a lot of marketing communications groups and public relations groups, and part of what we talk about is the reputation management piece, which is the rise of deepfakes being weaponized against corporations and people for money, and to disparage them, perhaps, out in public, and so this topic comes into that discussion. But very often I'm talking about the fun stuff: how do we use it for content creation and ideation and SEO and so forth? It's also very important to me to make sure I educate others on how to start spotting misinformation and disinformation. You have to practice. You have to start to understand how to recognize the signs when we suspect digital deception is at play, and I have some frameworks around that. So I do teach whole groups, whole teams, about AI deepfakes and reputation management.

Peter Woolfolk:

Well, let me say this to our listeners right now: Rebecca hasn't just mentioned how to detect things; she has developed a free, I want to repeat, free, 20-page digital guide that includes links to AI tools, best practices and tips for spotting misinformation, and it's available on her website at Seacoast, that's S-E-A, SeacoastAI.com. I wanted to make sure we got that in, because I was fascinated as you talked about how we, or others, can go about detecting fake AI presentations, because that's going to be huge today and in the foreseeable future.

Rebecca Emery:

Absolutely. And in fact, I'm building out a resource guide around this, because if you suspect digital deception and you have to make a public statement about it, how do you analyze the threat? You can use AI to make that process go much quicker as well. AI is excellent at analyzing situations and giving you guidance on next steps and so forth. I was training a group of amazing communicators this week, and I showed them how to use Claude to create an artifact. It was a decision tree based on a certain scenario: at what point should we consider making a public statement about this?

Rebecca Emery:

And it steps us through a sort of yes/no tree: if it's going to cause us harm, then perhaps, yes, we should make a public statement, and so forth. And then I asked Claude to turn it into a little quizlet. All of a sudden, you have this little snippet of code that looks like a quiz and can step you through all the different scenarios until you get to the answer you need: is it time to put out a public statement, or should we just create a holding statement and continue to research and investigate what is happening?

Peter Woolfolk:

Well, it seems to me that you're doing a lot of work for a lot of people to help them, one, become more efficient in the use of AI, but also, just as important, detect the unsavory players that are out there.

Rebecca Emery:

We have to improve our skills. And we saw a little bit of this, Peter, earlier in the year, when the royal family was trying to provide Kate with some personal space. She was dealing with a personal matter, and the world, you know, influencers and followers and interested people, was asking for an update. Unfortunately, the royals made a bit of a public relations mistake when they put out an altered photograph. Although parts of the photograph were real, that was the family, and they were together, there were also parts that were clearly altered, and that's what we consider misinformation, right? Part of it is real, but somehow it's been altered. And I think that caused the world to get into a bigger panic, because they felt like they were being deceived.

Rebecca Emery:

And so when you look at breaking something like that down, I use a simple framework, an ABC framework, which is Actors, Behavior and Content, and when I say content, I even mean context and contours. You first want to look at what's happening with the person and the behavior, and you want to think about, does this seem far-fetched for this person? You know, is Tom Hanks really trying to promote a dental plan? No. Is Taylor Swift going to be giving away 3,000 sets of Le Creuset, and all you have to do is sign up and give your credit card? No. If she's going to do any kind of promotion, it will be a big, coordinated promotion, and it will be spectacular and fabulous.

Rebecca Emery:

And that was not the case here. You could clearly see that the images were altered, that the deepfake was altered, but unfortunately people were scammed by it and gave their information. So when you're looking at what could be a scam, you have to think about the actors and the behavior. Does this make sense? Is there some kind of opportunity or intent to exploit or harm others? A big indicator is a sense of urgency: call now, put in your credit card now, we need money now. Or someone might call and try to do a ransom play where you need to upload a Walmart card now. Right? That sense of urgency should be a big red flag for all of us.

Rebecca Emery:

And then, when we look at the content, you want to think about not only the content but the context and the contours. Very often, the AI, the avatar technology, the voice swapping and so forth, doesn't quite get all the edge details right. So I always encourage people to look around the contours of the image itself, around the edges, but also around the edges of people's faces: their nose, their mouth, their teeth, their hair, their hands. Those are areas that AI still, at times, struggles with, and those can be very helpful indicators that something is amiss. So, ABC: Actors, Behavior and Content.

Peter Woolfolk:

Well, Rebecca, you've provided some exceptional information for our listeners here today. Is there anything you think we have missed?

Rebecca Emery:

I think there is a lot to this, but I encourage communications professionals and public relations professionals: if you worry about possibly weaponized AI deepfakes or misinformation or disinformation, it helps to start educating yourself. What are the differences between those things? What are the differences between deepfakes and cheapfakes, and trolls and bots? All of that informs how you assess the impact. And then you'd better have your links ready. For example, where do we go to report and remove content? There's Google takedown, there's Bing, there's Facebook. All the different platforms have takedown pages. Build a list of those and have them ready.

Rebecca Emery:

We need to keep in mind that we might have to refer to law enforcement if we're at a school and this deals with minors. Right? We want to make sure we have a good monitoring tool in place, whether that's something like, you know, Sprout Social or whatever, so that you can continue to monitor. And then you have to really think about your response and what that might look like. So there's a lot to it, and I enjoy educating others and really breaking it down, because if you think about it, all those platforms reward engagement, and in some cases you do want to respond, but in other cases you don't want to fuel that engagement, because it's only going to make the misinformation or disinformation go even further on those platforms.

Peter Woolfolk:

Well, let me say that you have provided an awful lot of information. Now, can our listeners get in touch with you by way of your website?

Rebecca Emery:

Yes, there's a contact form right on my website. It's SeacoastAI.com.

Peter Woolfolk:

Okay, let me repeat that again, and write this down, listeners, because I think there's a lot of information here that you might want to get back to. Her website again is Seacoast, that's S-E-A, SeacoastAI.com. My guest today has been Rebecca Emery, an APR with Seacoast AI, and she has provided some exceptional information on AI, legitimate and illegitimate, that I think we can all benefit from.

Peter Woolfolk:

So, Rebecca, I am so happy we've had a chance to have this conversation, and I'm sure all our listeners are going to benefit from it.

Rebecca Emery:

Me too, Peter. I really appreciate it, and I just encourage everyone to increase your awareness about AI, be on the lookout for digital deception, and know what to do when it starts to rear its ugly head, because, unfortunately, AI deepfakes are on the rise. AI technology is here to stay, and so this is part of the new reality for us as communicators and PR professionals.

Peter Woolfolk:

Well, good. Thank you so very much for taking the time to be on the Public Relations Review Podcast. And you know, based on what we've talked about today, I think we might have to find a way to get you back on this show for some other related topic in the near future, if you'll be willing to come on and share it with us.

Rebecca Emery:

Happy to do that, Peter, anytime. It's always a pleasure to talk with you, and I think there's so much happening here, and AI is evolving so rapidly before our eyes, that this is going to be an ongoing discussion point for sure.

Peter Woolfolk:

Great. Well, as I said, my guest today has been Rebecca Emery, an APR who heads Seacoast AI, and you can get to her at SeacoastAI.com.

Peter Woolfolk:

Rebecca, thank you again so much. And to my listeners, thank you. We certainly would like to get a review from you and, of course, have you share this episode with all of your friends. And be sure to tune in to the next edition of the Public Relations Review Podcast.

Announcer:

This podcast is produced by Communication Strategies, an award-winning public relations and public affairs firm headquartered in Nashville, Tennessee. Thank you.
