Cyber Crime Junkies

AI Showdown: Deepfake Dangers. Who's Winning the Race?

Cyber Crime Junkies. Host David Mauro. Season 5 Episode 49

We review the latest dangers of synthetic media, ask who will win the AI race, and identify the risks and currently available solutions.

CHAPTERS
00:00 Introduction
00:32 The Rise of Deepfakes and Their Threats
03:56 Deepfakes in Cybercrime and Social Engineering
06:46 Latest Cyber Risks from Deepfakes
09:49 Strategies for Deepfake Detection and Prevention
13:59 New Dangers of Synthetic Media
22:19 Legal and Ethical Implications of Deepfakes

Send us a text

Have a Guest idea or Story for us to Cover? You can now text our Podcast Studio direct. Text direct (904) 867-446

🎧 Subscribe now http://www.youtube.com/@cybercrimejunkiespodcast and never miss an episode!

Follow Us:
🔗 Website: https://cybercrimejunkies.com
📱 X/Twitter: https://x.com/CybercrimeJunky
📸 Instagram: https://www.instagram.com/cybercrimejunkies/

Want to help us out? Leave us a 5-star review on Apple Podcasts.
Listen to Our Podcast:
🎙️ Apple Podcasts: https://podcasts.apple.com/us/podcast/cyber-crime-junkies/id1633932941
🎙️ Spotify: https://open.spotify.com/show/5y4U2v51gztlenr8TJ2LJs?si=537680ec262545b3
🎙️ YouTube: http://www.youtube.com/@cybercrimejunkiespodcast

Join the Conversation: 💬 Leave your comments and questions, or text the number above. We'd love to hear your thoughts and suggestions for future episodes!

AI Showdown: Deepfake Dangers. Who's Winning the Race?

Key Topics: new dangers of synthetic media, who will win the ai race, latest cyber risks from deep fake, deepfake, AI, artificial intelligence, artificial intelligence risks in cyber security, new dangers from deepfakes, How Deep Fake Videos Increase Security Risks, why deep fake videos are dangerous, new ways ai is used in social engineering, ai risks in social engineering, ai used in social engineering, emerging technologies changing the world, deep fake, social engineering, social engineering risks, new ways to reduce risk of deep fakes, how are deep fakes made, how are deep fake videos made, how are audio deep fakes made, how is ai making it harder to detect deep fakes, why deep fake videos increase security risks, new ways to reduce risks from deep fakes, dangers of synthetic media, new ways to reduce risk from deep fakes, how ai is making it harder to detect deep fakes



Dino Mauro (00:07.756)
Welcome everyone. This is the story of the new dangers of synthetic media. Who will win the AI race? Let's set the stage. An arena, bright lights, the trembling floor, the roar of the crowd, and fire shooting up to the sky, exploding like the 4th of July.

The battle is just beginning. Today we will talk about mastering strategies for deepfake detection. Now here's something that might surprise you. Deepfakes, which are AI-generated images, videos, and audio, pose a growing threat by blurring the lines between reality and fiction. These AI-created videos and images make it appear as though people are saying or doing things they never actually did. The loser in this battle? The truth. There is a war being waged against the truth. Why?

It's simple, really: cookie-cutter political science. It is so that people can be manipulated into action at the will of someone else. As deepfake technology improves, so must our ability to detect and combat these digital fabrications. Here's a closer look at why deepfakes are a threat and the key strategies to identify and counter them. Imagine this.

You get an email request from a supervisor at work asking you to send them some confidential internal information. You recognize the security risk and do not respond. Great job. Check the box. You passed the test and recognized an attempted business email compromise. But wait, there's more, much more. You soon receive a follow-up email. This time it's in the form of a calendar invite, an invite for a video call later today

with your supervisor, along with a member of the HR team and the finance team, all on the invite. You join the live video call, see your supervisor and the others in virtual person, ask questions, and then are persuaded to complete the task after the call. And you do as told and share the private, confidential data as requested. Then reality hits.

Dino Mauro (02:23.682)
Your real supervisor contacts you later that afternoon, upset, questioning what just happened. Reality hits hard as you learn the people live on the video call were not, in fact, the supervisor, HR, and finance people you knew. They were deepfakes: altered video, images, and audio fabricated to appear real. The lawsuits and financial losses soon mount.

All because technology has advanced far beyond our general awareness. The deception, in fact, is virtually undetectable by the human eye unless we exercise vigilance. Advancements in AI and deepfakes for the sake of art and commerce seem, actually, wonderful. It's fantastic that art and science have merged and advanced to a degree where videos can be created to match our imaginations,

but it's also terrifying. If it's not shocking you, then maybe you're simply not paying attention. The risk is yours; we are simply sharing to spread awareness. Now let's pivot and take a closer look at the fact that every time we get online, we enter their world. It does not matter what role you have at work or at home: today requires vigilance. Deepfakes

blur the boundaries between truth and fabrication, making it disturbingly easy to fool even the sharpest of eyes. Deepfake technology deceives, manipulates, and can unleash chaos. Now here's something that might surprise you. The US Department of Homeland Security, the FBI, and the US Department of Defense have launched deep research investigations and are developing protocols to address this.

They are taking this advancement very seriously. In fact, they deployed the same group that developed the internet, yes, the actual internet, to address this. They are taking deepfake risks extremely seriously, and so should we. They have stated that deepfakes pose threats to national security and society at large, and that publicly available techniques are enabling fraud and disinformation

Dino Mauro (04:43.768)
to exploit targeted individuals and organizations. The democratization of these tools has made the list of top risks. Deepfakes need to be seen to be understood. See for yourself as I play you an explanation. I mean, who better to explain what is most important in the United States today than none other than the one and only Taylor Swift.

Dino Mauro (05:15.32)
Ladies and gentlemen, I stand here today deeply concerned about the proliferation of AI-generated deepfake images on Twitter. To the creators of these deepfakes, I urge you to use your talents for positive change, not deception. To the public, be vigilant and critical of the media you consume. And to our policymakers, it's time for regulations that protect individuals from such digital violations.

We now have the ability to make people look like other people. Like, I could be Johnny, or I could be Nick the studio manager. Look, I'm Nick the studio manager right now. I could also be Tom the music composer. And what's crazy is even if you go back and watch what I just showed you frame by frame, you likely won't be able to tell.

I am not Morgan Freeman. And what you see is not real. What is your perception of reality? That is a deepfake. Deepfakes are video, audio, even still images altered using AI, and can be utterly impossible to distinguish from the real deal. Problem is, the tech used to create deepfakes is evolving much faster than the tools used to detect them. On this clip we're running right now, my face is slowly morphing into something else. And it's basically pixel

perfect. Look, it's amazing. I'm not me. I mean, I am me, but I'm not me to you. And that's kind of nuts. The important point here is that it's getting to the point where deepfakes are nearly impossible to decipher as computer-generated, which is super exciting, but also kind of scary. It's a real concern. From the US Department of Homeland Security: DHS says that deepfakes and the misuse of synthetic content

pose a clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains. The Pentagon is using its big research wing, the one that helped invent the internet, that one, to look into deepfakes and how to combat them. Like, they're taking this very seriously. And then of course, deepfakes are being used for good old-fashioned cybercrime. Man, cybercrime, it just sounds so quaint. Cybercrime.

Dino Mauro (07:40.718)
Like this group of fraudsters who were able to clone the voice of a major bank director and then use it to steal $35 million in cold, hard cash. $35 million, just by deepfaking this guy's voice and using it to make a phone call to transfer a bunch of money. So from Taylor Swift and Bank of America to the Department of Homeland Security, protecting your company, you individually, as well as your family from deepfakes, which are the most effective form of social engineering, is a top

priority for continued education and awareness. AI, synthetic media, and deepfakes need all of our attention. After all, this isn't even me.

Dino Mauro (08:33.074)
As artificial intelligence advances, so do the risks it brings. From deepfakes to AI-driven social engineering, these technologies are reshaping cybersecurity. While these threats may seem overwhelming, there are ways to mitigate the risks. Understanding how deepfakes are created, recognizing the new challenges they pose, and learning how AI complicates their detection are crucial steps.

We'll explore why they're a unique threat and the strategies to protect yourself from synthetic media's growing risks. The rise of deepfakes marks a pivotal moment in AI related cybersecurity threats. Are you prepared? Before we move on, let's touch on another important factor, the change in usage of deepfakes from entertainment to exploitation. What started as parlor games and memes has now evolved into serious undetectable security threats.

The term deepfake combines "deep learning" and "fake," reflecting the technology's ability to create hyper-realistic videos of people doing or saying things they never did, based on large data sets of images and footage. Originally used for satire and entertainment, deepfakes have since turned into a significant cybersecurity risk,

fueling political manipulation, social engineering attacks, fraud, and identity theft. One notable case featured a deep fake of former President Obama created by filmmaker Jordan Peele, showing how easily AI can distort reality. Similarly, deep fakes of Facebook CEO Mark Zuckerberg and Elon Musk were used to manipulate public perception and commit fraud.

While deepfakes can be used for positive purposes, such as enhancing film effects or giving digital voices to the voiceless, their potential for harm is vast. The technology reshapes social engineering and cybersecurity risks, making it critical to understand how deepfakes are made, why they pose such threats, and how AI complicates their detection.

Dino Mauro (10:54.584)
Deepfakes are now powerful tools in social engineering, enabling cybercriminals to deceive and manipulate with unprecedented accuracy. Understanding these evolving risks is essential to staying ahead in the fight against synthetic media. One of the most chilling uses of deepfakes in social engineering is in business fraud, where malicious actors impersonate high-level executives to convince employees

to transfer large sums of money or reveal sensitive information. For example, a UK-based energy firm lost $243,000 after cybercriminals used a voice-generating deepfake to mimic the CEO's accent and instruct the transfer of funds. These types of attacks exploit employees' trust in their leadership and can bypass traditional

security measures. And now let's explore a deep fake crime story. This year, a Hong Kong finance employee was tricked into transferring $25 million in a video conference scam where deep fake technology was used to mimic seven supervisors and coworkers. Only the target victim was real on the video call.

The company's chief financial officer and other senior staff members were all fabricated on a live video call. The criminals orchestrated a realistic video meeting, manipulating the employee into believing she was receiving legitimate instructions. Following that convincing deepfake video call, the employee proceeded to make a series of transactions totaling $25 million, all of which wound up

fraudulent. And then there are the very real political dangers. But that's not all. We also need to talk about misinformation and political manipulation aimed at getting your votes. Deepfakes have been weaponized for political manipulation. Synthetic videos of political leaders saying or doing things they never actually did have already been used to sway public opinion and could even destabilize

Dino Mauro (13:15.352)
geopolitical relations. This tactic poses a significant threat to democratic processes and political stability. The rapid accessibility of AI tools, such as generative adversarial networks, or GANs, has made it easier for anyone to create convincing deepfakes, and they are not limited to visual media. Audio deepfakes have gained significant traction,

creating convincing synthetic speech that has been exploited for both phishing and financial scams. By impersonating familiar voices, these attacks can bypass even vigilant security measures.
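To make the GAN idea just mentioned concrete, here is a minimal, illustrative sketch in Python with PyTorch. It is a toy, not any production deepfake pipeline: a generator learns to produce fakes while a discriminator learns to catch them, and each round of that contest makes the fakes more convincing, which is exactly the arms-race dynamic this episode describes.

```python
# Toy generative adversarial network (GAN): the core mechanism behind
# many deepfakes. Illustrative only; real pipelines are far larger.
import torch
import torch.nn as nn

LATENT = 64  # size of the random noise vector the generator starts from

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),   # emits a fake 28x28 "image"
)
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),      # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """real_images: (batch, 784) tensor of flattened real images."""
    batch = real_images.size(0)
    fakes = generator(torch.randn(batch, LATENT))

    # Discriminator: learn to label real as 1 and generated as 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fakes.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to make the discriminator call its fakes real.
    g_loss = bce(discriminator(fakes), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Every improvement in the detector becomes training pressure on the faker, which is why detection alone can never be the whole answer.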

Dino Mauro (14:05.304)
The data shows massive growth in AI deepfake usage. The alarming reality: imagine receiving a voice message from your boss or a loved one, only to find out later that it was completely fabricated, an AI-generated trick. This isn't science fiction; it's happening now, and the rise of deepfake fraud is staggering. According to Security.org,

the proliferation of deepfakes is wreaking havoc across the globe, targeting both individuals and corporations. The statistics are terrifying. One in 20 people report having received a cloned voice message, and 77% of those individuals ended up losing money to scams. Deepfake fraud surged by a whopping 1,740% in North America alone in 2022.

Fraud where deepfakes impersonate top executives now targets 400 companies daily. Over 10% of companies have experienced deepfake fraud attempts, with financial damages that could reach as high as 10% of their annual profits.

Worse still, more than 54% of company leaders admit that their employees lack the training necessary to recognize or respond to deepfake attacks. In a world where deepfakes can be used for everything from social engineering to international sabotage, the need for vigilance is more critical than ever. As AI technology grows more sophisticated, so too must our defenses.

From enhanced AI detection tools to robust employee training, the race to outsmart deepfake fraud is well underway, but it's only just begun. Which reminds me, we should also discuss mastering the art of deepfake detection. The stakes have never been higher. The need for sharp, foolproof deepfake detection strategies is critical.

Dino Mauro (16:17.56)
Falling prey to a deep fake can lead to fraud, misinformation, and even national security risks. But how do you see through the illusion? Here's your arsenal of tools to protect yourself.

Behavioral analysis: spot the flaws in the illusion. Deepfakes, though unnervingly realistic, often trip up in replicating the natural subtleties of human behavior. One of the factors to look for is unnatural movement. Watch closely for anything that feels off. A deepfake may have slight but detectable quirks: awkward facial expressions, weird body posture, or unnatural eye-blinking patterns.
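To show what an automated version of this check can look like, here is a minimal Python sketch of blink analysis using the eye aspect ratio, a common heuristic from the research literature. It assumes you already have per-frame eye landmarks from a face-landmark model (dlib and MediaPipe are typical choices); the landmark extraction itself is omitted.

```python
# Blink analysis via the eye aspect ratio (EAR): natural video shows
# periodic EAR dips (blinks); some deepfakes blink too rarely or too
# regularly. Assumes eye landmarks are already extracted per frame.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks in the standard EAR ordering."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame: list[float],
                 threshold: float = 0.21, min_frames: int = 2) -> int:
    """Count dips below `threshold` lasting at least `min_frames` frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Humans blink roughly 15 to 20 times per minute at rest; a talking
# head with near-zero blinks over a minute of footage is a red flag.
```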

This subtle disharmony between how humans naturally move and how AI-generated people move can give away the deception. Another thing to watch for is audio inconsistencies. Keep an ear out for mismatched lip-syncing or sound that doesn't align with the person's surroundings. Many deepfakes struggle to synchronize audio perfectly,

leading to noticeable lag in speech or abnormal background noise that breaks the illusion. So now let's talk about verification, specifically contextual verification and the rule of trust, but verify. When you encounter a suspicious video, don't take it at face value. Dig deeper. A best practice is to cross-reference information. Does

the video content seem out of character? Verify the details with trustworthy sources like news outlets or official records. For example, a deepfake of a politician making inflammatory statements can be cross-checked against their public speeches. Another best practice is to conduct reverse image and video searches.

Dino Mauro (18:18.386)
Use tools like Google's reverse image search to trace the origins of suspect footage. If the deepfake was created from public media, this method can lead you back to the authentic source and reveal the manipulation.
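For a code-level taste of origin tracing, here is a minimal sketch using the open-source Pillow and ImageHash packages (pip install pillow imagehash). The file names are hypothetical placeholders. The idea: a perceptual hash barely changes under resizing or recompression, so a frame derived from known original footage stays measurably close to it.

```python
# Compare a suspect frame against a known original using a perceptual
# hash (pHash). Small Hamming distance suggests the suspect frame was
# derived from the original; large distance suggests unrelated imagery.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("official_footage_frame.png"))
suspect = imagehash.phash(Image.open("suspect_frame.png"))

distance = original - suspect  # Hamming distance between 64-bit hashes
print(f"hash distance: {distance}")
if distance <= 8:   # rough rule of thumb; tune for your use case
    print("suspect frame likely derived from the original footage")
```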

And then there is education and awareness: knowledge is power. One important best practice, needed now more than ever before, is training programs. For organizations, particularly those handling sensitive information, implementing deepfake detection training is non-negotiable. Law enforcement and media professionals must be equipped to spot the telltale signs of a deepfake. Ongoing education ensures everyone stays one step ahead of this evolving threat.

Dino Mauro (19:16.45)
When thinking through the lens of technical analysis, there now needs to be a digital fight back. So what does that mean? Beyond what the eye can see, there are cutting-edge tools designed to reveal deepfakes in the most subtle ways. One key thing to implement is digital watermarking. This invisible marker can be embedded into media files, acting like a digital fingerprint.

It helps verify the authenticity of content by tracing its origins, and it's resistant to tampering, making it a solid defense against deepfake manipulations.
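As a toy illustration of the embedding principle, here is a minimal Python sketch using Pillow and NumPy, with hypothetical file names. One hedge up front: this simple least-significant-bit scheme demonstrates embed-and-verify, but unlike the tamper-resistant watermarks and provenance standards used in practice, it is trivially destroyed by recompression.

```python
# Hide a short bit string in the least-significant bits of an image,
# then read it back to check provenance. Demonstration only; real
# watermarking schemes are designed to survive edits, this one is not.
import numpy as np
from PIL import Image

def embed(path_in: str, path_out: str, bits: str) -> None:
    pixels = np.array(Image.open(path_in).convert("RGB"))
    flat = pixels.reshape(-1)  # flat view; writes show up in `pixels`
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)  # overwrite the lowest bit
    # Save losslessly (e.g. PNG); JPEG compression would erase the mark.
    Image.fromarray(pixels).save(path_out)

def extract(path: str, n_bits: int) -> str:
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    return "".join(str(flat[i] & 1) for i in range(n_bits))

embed("frame.png", "frame_marked.png", "1011001110001111")
assert extract("frame_marked.png", 16) == "1011001110001111"
```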

There are also available solutions in the form of AI and machine learning tools. The irony: the AI that creates deepfakes is also our greatest weapon against them. Machine learning algorithms can scan for inconsistencies in facial expressions, lighting, or background noise, detecting even the finest discrepancies in deepfake videos. For instance, researchers are developing sophisticated AI systems capable of recognizing the microscopic pixel alterations left by deepfake technology.
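Here is a minimal, illustrative PyTorch sketch of that machine-learning approach: a small convolutional network that scores a single frame as real or fake. Real detectors are far deeper and trained on large labeled corpora (FaceForensics++ is one commonly cited dataset); this untrained toy only shows the shape of the technique.

```python
# Tiny convolutional "deepfake detector": maps a 64x64 RGB frame to a
# probability that it is synthetic. Sketch only; it needs training on
# labeled real/fake frames before its output means anything.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),   # 64x64 input -> 16x16 after two pools
)

loss_fn = nn.BCEWithLogitsLoss()  # train against 0 = real, 1 = fake labels
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

def fake_probability(frame: torch.Tensor) -> float:
    """frame: (3, 64, 64) RGB tensor with values scaled to [0, 1]."""
    detector.eval()
    with torch.no_grad():
        logit = detector(frame.unsqueeze(0))
    return torch.sigmoid(logit).item()
```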

And then again, there is also forensic analysis. Through digital forensics, experts can uncover artifacts left behind by deepfake algorithms: irregular pixel patterns, suspicious blurring, and other anomalies invisible to the naked eye. This in-depth examination is crucial in authenticating video content.
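One classic forensic trick that is easy to demonstrate is error level analysis, or ELA. This minimal Pillow sketch (file names hypothetical) re-compresses a JPEG at a known quality and amplifies where the re-compression error concentrates; regions with a different compression history, such as pasted or regenerated areas, often stand out. Treat the result as a clue for a human analyst, not proof.

```python
# Error level analysis (ELA): re-save the image as JPEG and inspect
# where compression error concentrates. Edited regions often differ.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # controlled re-compression
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)  # per-pixel error level
    # Stretch the (usually faint) differences so they are visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda value: value * (255 // max_diff))

error_level_analysis("suspect_frame.jpg").save("suspect_frame_ela.png")
```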

And then there is the US legal system, where deepfakes have become the ultimate ambulance chase. The Law. One even has to say it aloud in a deep, low voice. Yet the legal profession is far behind other industries in technology adoption. For now, the technology continues to outpace the law, leaving gaps that cybercriminals are quick to exploit. According to the American Bar Association, the Federal Rules of Evidence leave a gaping hole when it comes to deepfakes. There is a proposed evidentiary rule, one that has not won backing as of today. At the April 2024 meeting of the Advisory Committee on Evidence Rules, Judge Paul Grimm (retired) and Dr. Maura Grossman

Dino Mauro (21:40.718)
made a presentation about the evidentiary problems caused by deepfakes and proposed a new Federal Rule of Evidence, 901(c). The proposed rule provides the following, and I quote: "Potentially fabricated or altered electronic evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court

that it is more likely than not either fabricated or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence." Grimm and Grossman

contend that their proposed new rule puts the initial burden on the party challenging the authenticity of computer-generated or electronic evidence as AI-generated fakery: to make a showing to the court that it is more likely than not either fabricated or altered in whole or in part. It requires the challenging party to produce evidence to support the claim

that the proffered exhibit is fabricated or altered. According to Grimm and Grossman, if the challenging party makes the required showing, then the burden shifts to the proponent of the challenged evidence to show that its probative value outweighs its prejudicial effect on the party challenging the evidence. Will this make a difference? We won't know for now.

The Advisory Committee took no action to adopt the Grimm-Grossman proposed new rule. Another dark example: a deepfake used to try to get a workplace rival fired. Authorities in Baltimore County said Dazhon Darien, the athletic director at Pikesville High, cloned the voice of the school principal, Eric Eiswert. The fake recording contained racist and antisemitic comments, police said.

Dino Mauro (23:56.088)
The sound file appeared in an email in some teachers' inboxes before spreading on social media. The motive? The principal had been investigating the athletic director for alleged misuse of funds. Is creating deepfakes illegal? The answer, like deepfakes themselves, is murky. Deepfakes are not illegal, at least here in the US. The misuse, however, like any other weapon, can get you sued or arrested.

In the US, states like California have introduced laws against deepfakes used in elections or to harm individuals.

This year, 40 states are considering deepfake legislation. Building on that thought, let's move into wrapping up. As deepfakes grow more sophisticated, so must our efforts to expose and counter them.

Dino Mauro (24:58.178)
And this is where it all comes to the core of the discussion. This is an arms race: good versus evil, racing against each other to harm and to protect. What's clear is this: the battle is just beginning. As deepfake technology continues to evolve, so will our detection strategies. The arms race between deepfake creators and detectors will likely intensify,

with each side developing more sophisticated techniques. By staying aware and thinking critically, we can stay one step ahead in the fight against digital deception. Stay vigilant everyone, and thanks for listening. This is Cybercrime Junkies.

Dino Mauro (25:54.254)
Well that wraps this up. Thank you for joining us. We hope you enjoyed our episode. The next one is coming right up. We appreciate you making this an award -winning podcast and downloading on Apple and Spotify and subscribing to our YouTube channel. This is Cybercrime Junkies and we thank you for watching.

