Cyber Crime Junkies

The Dark Side - New Ways AI is used in Social Engineering.

August 11, 2024 Cyber Crime Junkies. Host David Mauro. Season 5 Episode 32

Today's Cyber Flash Point is about The Dark Side - New Ways AI is used in Social Engineering. We review AI risks in social engineering, how deepfake videos increase security risks, and new ways to reduce the risk of deepfakes.


Get peace of mind. Get Competitive-Get NetGain. Contact NetGain today at 844-777-6278 or reach out online at www.NETGAINIT.com  
 
Imagine setting yourself apart from the competition because your organization is always secure, always available, and always ahead of the curve. That’s NetGain Technologies – your total one source for cybersecurity, IT support, and technology planning.

Have a Guest idea or Story for us to Cover? You can now text our Podcast Studio direct. Text direct (904) 867-4466.

🎧 Subscribe now http://www.youtube.com/@cybercrimejunkiespodcast and never miss an episode!

Follow Us:
🔗 Website: https://cybercrimejunkies.com
📱 X/Twitter: https://x.com/CybercrimeJunky
📸 Instagram: https://www.instagram.com/cybercrimejunkies/

Want to help us out? Leave us a 5-Star review on Apple Podcast Reviews.
Listen to Our Podcast:
🎙️ Apple Podcasts: https://podcasts.apple.com/us/podcast/cyber-crime-junkies/id1633932941
🎙️ Spotify: https://open.spotify.com/show/5y4U2v51gztlenr8TJ2LJs?si=537680ec262545b3
🎙️ YouTube: http://www.youtube.com/@cybercrimejunkiespodcast

Join the Conversation: 💬 Leave your comments and questions. TEXT THE LINK ABOVE . We'd love to hear your thoughts and suggestions for future episodes!



The Dark Side - New Ways AI is used in Social Engineering

Topics: new ways AI is used in social engineering, AI risks in social engineering, AI used in social engineering, emerging technologies changing the world, AI, artificial intelligence, deepfake, deep fake, social engineering, social engineering risks, new ways to reduce risk of deep fakes, how are deep fakes made, how are deep fake videos made, how are audio deep fakes made, how is AI making it harder to detect deep fakes, GANs, diffusion models, VAEs, impersonation, what are the risks of deep fakes, how deep fake videos increase security risks, artificial intelligence risks in cyber



Chapters
·      00:08    The Silent Threat of AI Deepfakes

·      01:38    Underestimating the Risk: Business Leaders and Deepfakes

·      02:37    The Rise of Deepfake Face Swap Attacks on ID Verification Systems

·      03:32    The Need for Employee Training on Deepfake Attacks

·      04:56    The Unreliable Nature of ID Verification Systems

·      06:10    Adopting Robust Approaches to Combat AI-Generated Deepfakes

·      07:34    Real-Life Examples of Deepfake Fraud

·      10:10    The Terrifying Truth About Synthetic Media

·      14:06    The Challenge of Detecting Deepfakes

·      16:59    The Chilling Reality of Deepfake Fraud

·      19:23    The Techniques Behind Deepfake Creation

·      22:14    Implementing AI-Driven Detection Systems

·      25:39    The Importance of Verification and Vigilance

·      28:34    The Ongoing Fight Against Deepfake Fraud

Dino Mauro (00:08.814)
Today's cyber flash point is about AI deepfakes and why they're considered a silent threat. Let's do some math. Despite AI's kind of massive growing prominence in every sector of society that has technology, which is almost every sector of society, still today about one third of all US company leaders have little to no familiarity with deepfake technology.

A third. So let's think that through. How are they preparing to defend against this when it's not on their whiteboards, when it's not on their radar, and they don't really understand it yet? Shockingly, 31% of business leaders believe deepfakes are not going to increase their fraud risk, or have not increased their fraud risk at all.

Dino Mauro (01:08.558)
I'll let that sink in for a second. The FBI notes that for American businesses, more than 50%, more than half, of business leaders are underestimating the risk and the increased risks from AI-generated deepfakes. In one poll, 42% of business leaders even admitted they had no confidence that their employees would be able to recognize deepfake fraud attempts on their businesses.

32% doubt employees' ability to detect deepfakes at all. 70% of leaders say that their employees haven't had any training whatsoever on identifying or addressing deepfake attacks. Let me say that again. In a study of 3,800 mid-size businesses throughout the United States, 70% of the leaders said employees have not had any training on identifying or addressing deepfake attacks.

Deepfake face swap attacks on ID verification systems are up 704% in 2024. And it's August. We're not even done with the year yet.

30% of organizations that use ID verification systems as part of their business model will no longer consider identity verification and authentication solutions reliable in isolation by 2026, due to the massive rise in AI-generated deepfakes.

Gartner has this analysis that we're going to talk about in today's episode. Gartner's analysis indicates that AI-generated deepfakes in face biometrics are prompting business owners to adopt more robust approaches, which is such a buzzword, robust approaches, which means they have to take it more seriously. They have to do more things to verify what's true and what's not true. 66% of cybersecurity incident response professionals

Dino Mauro (03:32.973)
experienced a security incident involving deepfake use back in 2022. When we first started reporting on it, when the FBI had their first alert on deepfakes back in 2022, 66% of cybersecurity and incident response professionals experienced a security incident involving a deepfake back then,

which made that a 13% increase over the previous year. We're gonna get into some crazy stats and some undeniable examples of deepfakes, because I really think it has to hit home. You have to understand it. This isn't just fear, uncertainty and doubt. This is real. And we're gonna talk about real-life scenarios so that it makes sense to you, and so you understand, hey, when would I be hit by a deepfake? Because I'm telling you, it's when you

don't really expect it. 60% of US consumers have encountered a deepfake video this year. Only 15% said that they have never encountered a deepfake video, ever. And check this stat out, and then we're going to get to the episode. Human detection, human detection,

what we see, what we believe. Human detection of deepfake images and deepfake videos averages only 26% accuracy. So what does that mean? That means close to three quarters of us can't even tell what's real versus what's synthetic. That, ladies and gentlemen, is a top threat today,

not only in cybersecurity, but in our society, for our national security and for our entire country. And that's why AI Deepfakes are today's cyber flash point.

Dino Mauro (06:10.19)
Imagine you receive an email from your boss while at work. You notice it's asking for sensitive information, maybe intellectual property, some W-2s, or asking you to do a financial transfer of funds, something like that. And you study the email and, well, there's something amiss. So you don't do it. You don't send it. You don't do the transaction. Good. You passed. But then you receive a calendar invite from your boss for a

Teams video meeting or a Zoom meeting, and you join the call, and right there, live in person, is your boss and the CFO of your company, who you know, and they answer all of your questions and calm any suspicions you have. You ask everything. They answer everything. It's all legit. So after that video call, you proceed to send the sensitive information documents

or the financial transaction that they explained and that you felt fine about. Everything is fine until it's not. See, afterwards you're contacted by your boss asking what the hell happened, why this was done, and so you explain. Only to learn that it was in reality

not your boss on that video call, nor was it the real CFO on that video call. It's an AI deepfake made from software available to all of us. It's done live in person right before your eyes. This is not some science fiction futuristic occurrence. It's based on a true cybercrime story, one that has really happened

In fact, it's happened several times in the last 12 months. Protecting your company and you financially from AI deepfakes being leveraged in social engineering attempts is now a top priority in the cybersecurity community for continued security awareness. I mean, after all, this isn't even me. So

Dino Mauro (08:29.451)
You don't need to believe me, nor do you need to believe this deepfaked version of me. Who better to explain than Taylor Swift?

Dino Mauro (08:40.673)
Ladies and gentlemen, I stand here today deeply concerned about the proliferation of AI generated deep fake images on Twitter. To the creators of these deep fakes, I urge you to use your talents for positive change, not deception. To the public, be vigilant and critical of the media you consume. And to our policymakers, it's time for regulations that protect individuals from such digital violations.

So from Taylor Swift to Bank of America to the Department of Homeland Security: protecting your company, you individually, and your family from deepfakes, which are the most effective form of social engineering, is a top priority for continued education and awareness. AI synthetic media deepfakes need all of our attention. After all, this isn't even me.

Dino Mauro (09:41.709)
In a recent report issued by the big four consulting firm Deloitte, all about cybercrime and its current state, deepfake fraud is up 700% this year. It's even being deployed by threat actors coupled with traditional social engineering attempts like phishing, business email compromise and vishing. We've discussed this on several prior episodes. Check those out. A link to the Deloitte report on financial crime

is in the show notes. What if the voice you trust is not who you think it is? What if the face you see is a meticulously crafted illusion? And what if, in a matter of seconds, your financial security could be shattered by unseen, unheard enemies? Welcome to the world of deepfake fraud, a world where technology has given birth to a new breed of deception. Organizations across the US,

grappling with evolving threats, now face an unprecedented challenge posed by deepfake technologies. Deepfakes have moved beyond entertainment and into the realm of cybercrime. We've talked about it repeatedly on this show. According to the Deloitte report, deepfake fraud is massively on the rise, increasing exponentially, posing significant risks to

professional service firms, manufacturers, healthcare, legal firms, and more. These artificial intelligence generated deceptions are crafted using techniques like GANs, which are Generative Adversarial Networks, essentially two computer models battling each other, one generating fakes and the other trying to spot what seems fake. And they go back and forth

tens of thousands of times, in microseconds, to the point where it's undetectable by the human eye. They also use diffusion models and VAEs, which are variational autoencoders, making them increasingly difficult, if not impossible, to detect. Deepfakes leverage advanced AI algorithms to create realistic synthetic media by manipulating audio

Dino Mauro (12:06.937)
and video. These techniques can impersonate anyone, leading to a whole host of problems like fraudulent transactions and breaches in security. The scary truth about synthetic media is its potential to undermine trust and propagate fake news, but also to diminish what is true. More importantly, it

creates such cynicism in all of us that when something is true, many people can simply dismiss it and say, that's just a deepfake. Various industries are prime targets due to their reliance on trust and verification. Think about law, think about financial institutions, think about healthcare: that reliance makes them particularly vulnerable. Deepfakes can be used for social engineering,

tricking employees into making unauthorized transactions, and worse. And the sad and tragic part is, we've been talking about it on this show for two years now. This is not just a hypothetical scenario. It's a growing reality. Organizations throughout the US must adapt to this evolving threat. This recent Deloitte report highlights that traditional verification methods

are no longer sufficient. Businesses, local governments, healthcare and financial institutions need to invest now in AI-driven detection tools in order to identify anomalies in audio and video communications, understand how AI is making it harder to detect deepfakes, and establish incident response planning and separate deepfake-centered processes

to address these new hyper-effective social engineering tactics. In January 2024, an employee at a Hong Kong-based firm sent a series of transactions totaling $25 million to cybercriminals after first receiving a business email compromise email, which they didn't respond to, but then receiving a Teams or Zoom calendar invite and

Dino Mauro (14:36.513)
jumping on a video call with what purported to be a CFO and seven other people all involved in the transaction. The only problem was that the target was the only real person. Everyone else was deepfaked, and it was a live call. The target was able to ask questions and get all the concerns rectified. But what they didn't do

was follow a policy, or rather, the organization didn't have a policy, to simply verify independently through a verified channel that the request to exchange those funds was actually legitimate. It turns out she wasn't on a call with any of those people. The fraudsters had created a deepfake that replicated their likenesses to trick her into sending the money.

In another case, a CEO's voice was convincingly mimicked to authorize a large transfer of funds. We've talked about this. It was a parent company, and they had mimicked the CEO's voice to such a degree that the wholly owned subsidiary president, who had worked with that CEO for over 15 years, could not tell the voice was fake.

$250,000 was transferred fraudulently and landed in the hands of cybercriminals. The frightening truth of synthetic media is that it can seamlessly bypass human scrutiny. Our challenge is to stay one step ahead of the cybercriminals.

Mike, a seasoned cybercrime investigator, had seen his share of sophisticated scams, but nothing had prepared him for the chilling reality of this deepfake fraud case. It began like any other day in the bustling financial district. The bank's employees were busy with their routines, unaware that a sinister plot was unfolding behind the scenes.

Dino Mauro (16:59.787)
The bank CEO, Robert Thompson, was known for his meticulous attention to detail. He picked up the phone, hearing what he believed to be the familiar voice of his trusted financial officer, Sarah. The voice was perfect: intonation, pitch, even the subtle pauses Sarah often took when she spoke. Robert didn't hesitate. He trusted Sarah implicitly and knew the importance of swift action in financial matters. What Robert didn't know

was that the real Sarah was at a conference halfway across the country, completely unaware of the chaos that was about to ensue. The voice on the phone was a meticulously crafted deepfake, built with cutting-edge AI technology undetectable by the human ear. It was made using GANs and VAEs, and the perpetrator had studied Sarah's voice from public speeches and internal calls,

using this data to train their deep fake model. The criminals behind the plot were not ordinary hackers. They're part of a sophisticated cyber crime syndicate well versed in the latest advancements in artificial intelligence. They knew how deep fake videos increased security risks and exploited this knowledge to perfection.

"We need to trace this transfer immediately," Mike responded. "Lock down all the outgoing transactions and start an internal audit. We've been compromised," he said. As Mike and his team scrambled to contain the damage, the gravity of the situation became clear. The transfer had been routed through multiple international accounts, each one designed to obscure the trail of the one before.

It was a textbook case of financial obfuscation, layered in the terrifying truth about synthetic media. "This isn't just a financial loss," Mike said later upon reflection. "This is a breach of trust. We need to understand how audio and video deepfakes are made and leverage that in bolstering our defenses."
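Since the episode keeps returning to GANs, here is what that "two networks battling each other" loop looks like in miniature. This is a toy sketch under big simplifications (scalar models and made-up one-dimensional data, nothing like production deepfake tooling):

```python
import numpy as np

# Toy GAN: a generator G learns to mimic "real" data (samples from N(4, 1))
# while a discriminator D learns to tell real from generated. Both models are
# deliberately tiny: G is affine in its noise input, D is logistic regression.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator: fake = a * z + b
w, c = 0.1, 0.0   # discriminator: P(real) = sigmoid(w * x + c)
lr, batch = 0.05, 32

for _ in range(600):
    z = rng.normal(size=batch)                # generator noise
    real = rng.normal(4.0, 1.0, size=batch)   # "real" samples
    fake = a * z + b

    # Discriminator step: push P(real samples) toward 1 and P(fakes) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        g = sigmoid(w * x + c) - label        # dBCE/dlogit, per sample
        w -= lr * np.mean(g * x)
        c -= lr * np.mean(g)

    # Generator step: update a, b so the discriminator calls the fakes real.
    g = (sigmoid(w * fake + c) - 1.0) * w     # chain rule back into `fake`
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

# The generator's offset b drifts toward the real data mean of 4 as the two
# models push against each other.
print(round(float(b), 1))
```

Real generators and discriminators are deep networks over images or audio rather than scalars, and diffusion models and VAEs, also mentioned in the episode, reach similar realism by different routes, but the adversarial push-and-pull is the same.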

Dino Mauro (19:23.991)
Traditional security measures are not enough anymore. Mike's investigation revealed the intricate steps taken by the criminals. They'd used publicly available information, open source intelligence, OSINT. They sent phishing emails. When the CEO hadn't fallen for those, they gathered up voice samples of Sarah. By employing GANs and diffusion models, they created a synthetic voice that could fool

even the most discerning ears. If you think this is just a dark hacker trick, think again. This tech is available right here online. I will show you. Look, all you have to do is simply create a script of what you want the person to say. Copy it over here, paste it, and then hit generate,

and the words come out. And that voice can be trained on various cadences, pauses, emotions, professional tone and age group, a cultural, you know, a cultural stance, various types of articulation, different geographies, different slang words. It's completely realistic.

Deepfake technology can be honed to such a degree that it is indistinguishable from the real thing. This is not futuristic. This is happening right now, available for 19 bucks a month right on the regular web. Deepfake technology relies on advanced AI models that learn and mimic patterns in data.

Using techniques like GANs, which have two neural networks that compete to improve the output, and VAEs, which we talked about, which encode and then decode the data to generate new information, these models create eerily realistic videos and voices. But we don't have to know any of that. We don't have to understand it. All we have to do is find prior audio or video,

use one of the platforms, and insert the script. It's literally become that easy. So if you're at an organization and you don't have a deepfake defense policy, make one today. If you don't have a template or don't know what to do, contact us. Just reach out at cybercrimejunkies at gmail or info at cybercrimejunkies.com, and

we will point you in the right direction. The sophistication of these deepfakes is absolutely increasing, and it's not even on the radar of most organizations here in the US. We need to implement AI-driven detection systems that can analyze voice patterns and detect anomalies. It's a continuous battle between creating and detecting deepfakes. When we go back to that scenario,

that story, Mike's team began deploying advanced AI tools designed to identify deepfake signatures. One of the challenges, and we've talked about this in prior episodes, is that the technology to defend and detect is not as advanced as the technology to create the deepfakes, meaning you can make a live deepfake of any of us

and have it live, not a pre-recorded video. It can be live. It was just used in that Hong Kong incident that we talked about. So somebody can be looking at a deepfake, seeing somebody who, by every sense a human being has, looks and sounds exactly like the real person. And when asked a question, the fake answers it. If you change the question, it can still answer.
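The AI-driven detection idea mentioned a moment ago can be pictured with a deliberately simplified sketch: build a statistical profile of known-genuine clips, then flag clips whose features drift outside it. Everything below is a stand-in (random vectors instead of learned audio embeddings, an arbitrary threshold); real detectors are far more sophisticated:

```python
import numpy as np

# Toy anomaly-based detector: profile a speaker's known-genuine audio
# features, then score new clips by how far they sit from that profile.
# The 8-dim "features" are synthetic stand-ins, not real audio embeddings.

rng = np.random.default_rng(1)

genuine_clips = rng.normal(0.0, 1.0, size=(200, 8))  # enrolled genuine clips
mu = genuine_clips.mean(axis=0)
sigma = genuine_clips.std(axis=0)

def anomaly_score(clip):
    """Mean absolute z-score of a clip's features against the profile."""
    return float(np.mean(np.abs((clip - mu) / sigma)))

THRESHOLD = 2.0   # arbitrary cutoff chosen for this sketch

new_genuine = rng.normal(0.0, 1.0, size=8)   # consistent with the profile
suspect = rng.normal(3.5, 1.0, size=8)       # drifted, deepfake-like features

print(anomaly_score(new_genuine) < THRESHOLD)   # True: within profile
print(anomaly_score(suspect) > THRESHOLD)       # True: flagged as anomalous
```

The continuous battle the host describes is exactly this: as generators get better at matching the genuine profile, the "suspect" features drift less, and simple thresholds like this stop working.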

It's that scary. Everyone needs to be vigilant. Question unexpected requests, especially those involving financial transactions or confidential information. Know the truth about synthetic media. Be aware of this stuff. Most people don't even know it exists. It's not just Taylor Swift that's being deep faked. It can look and sound real, but there are always telltale signs.

Dino Mauro (24:11.853)
For example, if you're ever being asked to do anything that is compromising in any way, ask the person to hold up their hands, ask them to turn to the side, left or right. The deepfake technology is not very advanced in that sense. That's why, when you're generating these deepfake images and videos, they ask you not to talk with your hands, which is something

I could never do, because I always talk with my hands. But also, one of the other things: always verify. The best practices are still the same as they were even 10 years ago. Always verify through a legitimate channel, meaning don't rely on the phone number in the email or the phone number that this person is giving you, right? Call the number that you know to be the CEO's.
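That call-back rule is simple enough to write down as policy logic. Here is a hypothetical sketch (the names, numbers, and structure are all invented for illustration): funds are released only when the requester was confirmed on a number from your own directory, never on one supplied inside the request.

```python
from dataclasses import dataclass

# Hypothetical sketch of the out-of-band verification rule described above.
# A transfer is released only after the requester is confirmed on a number
# from our own directory -- never on a number supplied in the request itself.

@dataclass
class TransferRequest:
    requester: str        # who the request claims to be from, e.g. "CEO"
    amount: float
    callback_number: str  # number supplied IN the request -- untrusted

# Numbers we already know to be real, from the company directory.
DIRECTORY = {"CEO": "+1-555-0100", "CFO": "+1-555-0101"}

def may_release(req: TransferRequest, number_called: str, confirmed: bool) -> bool:
    """Release only if we dialed the directory number ourselves and the
    real person confirmed the request on that call."""
    known = DIRECTORY.get(req.requester)
    if known is None:
        return False      # unknown requester: always hold
    if number_called != known:
        return False      # we called a number from the email: never release
    return confirmed

req = TransferRequest("CEO", 250_000.0, callback_number="+1-555-9999")
print(may_release(req, req.callback_number, confirmed=True))  # False
print(may_release(req, "+1-555-0100", confirmed=True))        # True
```

The point of the sketch is that the number in the request never enters the decision except to be rejected; only the directory number you dial yourself counts.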

Call that number, and you don't issue that transaction unless and until you hear from that person. Getting back to Mike in our story, Mike said, we've identified several key areas for improvement. Implementing continuous AI monitoring, enhancing employee training, and fostering a culture of vigilance is crucial. We must stay ahead of this. I mean, the aftermath of the incident is a sobering reminder of how

vulnerabilities that are inherent today have drastically changed just in the last six months. Organizations may manage to recover some portion of the funds, and many do not. But the incident has farther-reaching implications. It's spurring changes across many industries and bringing deepfake awareness, deepfake detection, and

identity verification channels to the forefront of initiatives for security and technology. Leaders across the U.S. are saying that they are committed to safeguarding their clients' assets and restoring trust. But the real question is: have they learned the valuable lessons, and will they continue to invest in advanced security measures to protect their customers

Dino Mauro (26:39.849)
against these emerging threats. The stories of the deepfake fraud cases that we've talked about today serve as stark warnings. They highlight the need for constant vigilance and innovation in the face of evolving cyber threats. Healthcare, law, financial institutions, manufacturing: they all must recognize the frightening

truth of synthetic media and take proactive steps to combat it. The Deloitte Center for Financial Services predicts that generative AI could enable fraud losses to reach over $40 billion in the US in just a few years. That is a compound annual growth rate of 32%. Check out this image that we have. It's just unbelievable. And that's if you take the aggressive version. Even if you just take the conservative version, it's still growing exponentially. And as we've seen in prior estimates of cybercrime, usually the reality is way beyond the aggressive

numbers. So the point is this: the stuff is real. It needs to be on the agenda. It needs to be on the whiteboards. In a world where seeing is no longer believing and hearing is fraught with doubt, vigilance is the new currency. The fight against deepfake fraud is ongoing, and it will require a collective effort from all of us, from the actual customer organizations

to the tech companies and individuals. We need to consider coupling modern technology with human intuition to determine how technologies can be used to preempt attacks by these cybercriminals. There won't be a silver-bullet solution, and if a vendor claims to have one, we know it's bull crap.

Dino Mauro (29:03.181)
Anti-fraud teams need to continually accelerate their self-learning to keep pace with cybercriminals. Future-proofing your organization against fraud will also require each organization to redesign its strategies and governance and reallocate resources. You need to work with knowledgeable, trustworthy third-party technology providers, not somebody who's just trying to sell you something.

Gain insight through strategies. Establish areas of responsibility that address liability concerns for fraud among each party involved. Consider investing in new talent and in training current employees to spot, stop, and report AI-assisted frauds. Some of it can be expensive and difficult, and that's coming at a time when leaders are

prioritizing cost-cutting measures. But to stay ahead of fraud, extensive training, at a minimum, needs to be prioritized. In the end, we need to stay alert, stay informed, and remember: trust must be earned each new day by every vendor that is out there. This story, inspired by real events and emerging threats,

is a call to action for us all. Protect your world, for the shadows are always lurking. The terrifying truth about synthetic media is not just what deepfakes can do to us today, but what they can evolve into tomorrow. Thanks for listening or watching. Our next episode starts right... Well, that wraps this up. Thank you for joining us.

We hope you enjoyed our episode. The next one is coming right up. We appreciate you making this an award -winning podcast and downloading on Apple and Spotify and subscribing to our YouTube channel. This is Cybercrime Junkies and we thank you for watching.

