Bee Cyber Fit: Simplifying Cybersecurity for Everyone

Navigating the Minefield of Personalized Phishing Attacks: How to Outsmart Cybercriminals

April 30, 2024 | Wendy Battles/James Tucciarone | Season 3, Episode 6


Ever found yourself staring at an email, questioning its legitimacy? You're not alone. These days, it's hard to tell what's real and what's not.

On the new episode of the Bee Cyber Fit podcast, we explore the murky waters of cybersecurity in an AI-dominated era. We swap stories of deception and discuss how the cleverness of AI has turned simple email scrutiny into a game of digital cat and mouse.

And lest we forget, we tip our hats to our Yale colleagues at the Security Operations Center, whose relentless efforts are often out of the spotlight yet crucial to our online safety. You'll want to listen to our buzzword of the day.

A few highlights:

  • You'll pick up indispensable tips to bolster your cyber defenses, learn to trust your gut when an email seems suspicious, and verify before you click. 
  • We unpack the subtle red flags in communications that could save you from digital disaster. 
  • We invite you to join our May Cyber Challenge—an opportunity to put your newfound savvy to the test. 

It's an episode packed with insight, appreciation, and action steps, all aimed at keeping you one step ahead in the cybersecurity game. Our goal: partnering with you to keep Yale data and systems safe.

Let's become Yale cyber heroes together!


Mentioned in this episode:

  • Listen to Yale's Chief Information Security Officer, Jeremy Rosenberg, talk about the challenges of phishing and how any of us can be tricked, including him. 
  • Review the tactics cybercriminals use to trick us with our FUDGE model.
  • Sign up for our May Cyber Challenge starting on May 13. Join us to complete a brief daily task to build your cyber fitness.


Learn more about Yale Cybersecurity Awareness at cybersecurity.yale.edu/awareness

Never miss an episode! Sign up to receive Bee Cyber Fit podcast alerts.

Transcript

[music]

 Wendy Battles: Welcome to the Bee Cyber Fit podcast, where we're simplifying cybersecurity for everyone, where we cut through confusing cyber speak and make cybersecurity simple and easy to digest. I'm one of your hosts, Wendy Battles.

 James Tucciarone: I'm James Tucciarone, together we're part of Yale University's Information Security Policy and Awareness Team. Our department works behind the scenes to support Yale's mission of teaching, learning, and scholarly research.

 Wendy Battles: Ready to get cyber fit with us? Hey, everyone. Welcome to another episode of the Bee Cyber Fit podcast. We're excited you're here and hope you're ready to get cyber fit with us. If you're a new listener, welcome aboard. This is the place to come for information and some inspiration to stay safe online and outsmart cybercriminals. This podcast is one of the many tools in our toolkit that we use at Yale University to help our faculty, staff, and students build their cyber muscles. 

James Tucciarone: Well Wendy, spring is here, and I know we've been busy with all things cybersecurity awareness. I also know this is one of your favorite times of year. Are you looking forward to the sunny days and warmer weather? Do you have any exciting plans coming up? 

Wendy Battles: I'm so looking forward to the warmer weather, James. I love being outside. I can't wait to do that more. I'm a big walker and hiker, so I've got plans for that. I love connecting with friends and just going for a walk, which is what I did last night with my neighbor across the street. So those longer days, they just lift my spirits in so many ways. James, what about you? 

 James Tucciarone: Well, I love the flurry of activity that comes with spring. This time of year, always gets me motivated for a spring cleaning, whether it be around the house, around the office, or revisiting my personal habits and my habits around cyber safety. One thing that's particularly on my mind, and the topic of today's episode, is about the challenge of identifying malicious email messages, especially with the increasingly common use of artificial intelligence. It used to be a lot easier to identify fake emails, especially when they were riddled with spelling or grammatical errors and had red flags that were much more obvious to spot. But times are changing, and it's not always so easy anymore. 

 Wendy Battles: Oh, my gosh, you are so right. And I'm going to admit something, James. Sometimes I'm confused myself. We actually have to review emails with a lot more care than in the past to decipher if they're real or fake. And even then, all bets are off. I'm really glad we're talking about this and addressing this topic because there's a lot more to unpack about it. And we've got ideas and tips to point you in the right direction. And we're also going to talk about fake login screens. But first, let's hear a little preview of our buzzword of the day. 

 James Tucciarone: Have you ever wondered who keeps your work accounts, systems, and data safe? Or just who it is that monitors the constant stream of data flowing across your organization's network? The answer lies in an often unseen but crucial department, the Security Operations Center. Stay tuned to learn about some of the unsung heroes who tirelessly work to maintain our digital security. 

 Wendy Battles: What I know is that the best of us can get fooled by cybercriminals. It doesn't matter how smart we are or how many degrees we have, any of us are candidates to get fooled by them. 

James Tucciarone: I totally agree, Wendy. More sophisticated emails, plus busy schedules limiting our time, mean that detecting malicious emails can be tricky. We shared a story from our Chief Information Security Officer, Jeremy Rosenberg, back in Season 1, Episode 3, where Jeremy actually fell victim to a phishing message. And Jeremy knows better than most of us what to look for. But not all of us have the same level of knowledge. So today, we want to help our audience build their cyber muscles by talking about the problem and how AI relates to it, what Yale's Information Security Office is doing to proactively address it, and what we can all look out for. Wendy, what are some of the most concerning challenges related to AI that you've seen? 

Wendy Battles: There are a couple that come to mind, James, and the first one is persuasive content. You know, back in the day, it was easy to identify that an email looked fake. But with AI bots, cybercriminals can write content that bypasses spam filters. The spam filters might catch grammatical or spelling errors, but if the message looks error-free, it's really hard to detect. So, the sophistication of AI messages makes it very hard to pinpoint that this is a phishing message. So that's one, the persuasive content. And the thing I'll mention too is that that content could be coming from anywhere. It could be coming from within the US. It could be coming from outside the country. If someone isn't a native speaker, that tell all goes away. And they can make any email sound pretty convincing using these bots. 

The second thing that comes to mind is that AI has really ramped up this idea of personalized phishing attacks. So many of us, James, back in the day received those emails about the Nigerian Prince scams. Basically, everyone got the same email, and we'd all be like, “Really? Did you see that? We know that's fake.” However, the sophistication of AI means that these bots can create fake accounts. An example is fake social media accounts. Now, I know, James, that you are not into social media at all, pretty much. [chuckles] But then there's people like me that are really into social media. And one of the things that I've seen is that on Facebook, I'll be friends with somebody and then I'll get another request from that very same person. So, I know it's not real. Or you'll get a message from a friend saying, “If you got a friend request from me, please ignore it because it's fake.”

 So, what often happens is that these fake accounts are created, and then these bots are actually looking at our social media. They're looking at our online life. They're looking at information about us. They're trying to learn how we communicate, what our emotional triggers are, what the issues are that resonate with us. They're looking at the responses we have on other people's posts. That creates a profile, so they can kind of get to know us, and then they can create content that's based on those triggers, on those things that maybe are vulnerability points for us. So I know it's something we might not think about, this idea of a personalized phishing attack, but it does happen, and that is definitely another challenge of AI. James, what are your thoughts about that? 

James Tucciarone: Well, I think you've given us two really great examples, and I want to go a little deeper into those. So, you talked about AI learning our communication styles and our emotional triggers. Beyond that, AI can also learn to recreate our likeness, including audio that sounds just like us and video that looks just like us. Back in Season 1, we talked about deepfakes, which is the name for that kind of realistic audio and video. So, some of you may have heard about the example of a bank employee in Hong Kong and how that employee was asked to sign over $25 million in a Zoom call. The person on the other end looked so realistic that the employee was fooled into doing just that. 

Wendy Battles: That is just so insane that that could even happen. Yet it does. 

James Tucciarone: It's true. And that's a lot of money, probably more than most of us have access to. But another example that might hit a little closer to home: I recall when an employee was in a meeting with a member of their leadership team and discussed all sorts of sensitive information. When they saw that person later in the week and brought up the conversation, the leader had absolutely no idea what they were talking about. You also talked about how AI tools can make it harder for our security tools to detect malicious emails. On the flip side, AI is designed to learn and evolve, and that includes learning how to avoid the red flags that our security tools might normally look for. 

Wendy Battles: So, we just talked about the perils of AI and how it is exacerbating the problems with phishing. So, let's turn for a couple of moments and talk about what the Information Security Office at Yale is doing to address this. Because of the sophistication of these malicious emails, combined with the hard-to-detect items we just discussed, it begs the question of what can be done within our office to help thwart such attacks and bolster security. The truth of the matter is that Yale is continuously working to improve our cyber defenses. In fact, just recently, we deployed a next-generation email security tool to help address some of these emerging risks related to AI-powered cyberattacks. This tool minimizes phishing attacks, and it also separates the dangerous emails it can detect from legitimate ones. 

James Tucciarone: Wendy, our team in the Information Security Office is really great in that way. They recognize that cybercriminals keep evolving and that our tools need to evolve too. And we need to keep evolving as well, by staying aware of how threats are changing and keeping our cyber muscles in shape. 

Wendy Battles: So that begs the question, James: given that it almost feels depressing that these things are so hard to detect, and yes, we have this new advanced system to help us, what is it we can personally do? The good news is that we can benefit from what I like to think of as a two-pronged approach to keep our Yale data and systems safe. I already mentioned what the Information Security Office is doing with these tools to really help sniff out these fake accounts. So that's part one. But it's also about what we can do daily to help identify these malicious emails, even when they look legitimate. There are four tips we want to share with you about things we can all do to help with this issue.

James Tucciarone: And that's part two. And they're pretty simple actions. So, our first set of tips is to know what to look for and to build our skepticism muscles. We want to always be on the lookout for things that seem odd. Is there urgency or threatening language in the message? Is the message unexpected? Those are all clues that a message may be malicious, and it's always better to question things than to automatically take them at face value. This is especially true for our personal email accounts, where more suspicious emails are likely to get through and make their way into our inbox. We've talked about the FUDGE model of common social engineering tactics before, and it really is a great tool for helping us identify whether a message might be suspicious. We'll provide a link for some additional information about our FUDGE model in the show notes. 

Wendy Battles: Second, we all need to learn how to trust our gut. You are intuitive. You know that feeling you get when something seems off? Well, we want you to take that feeling and build on it. Would someone actually say you've won a million dollars without entering a contest? That's an example of that gut feeling telling you something does not seem right. Don't be afraid to hang up on someone if they call you and it doesn't feel right. Or to not respond to a message that feels threatening, unusual, or unexpected. When we tune in and trust our gut, it helps us stay steps ahead of cybercriminals. 

James Tucciarone: Our third action is to seek verification. Scammers are impersonators. They impersonate people we know and organizations or businesses we might be familiar with. And this is especially common with the help of AI. So, we want to go straight to the source and verify with the individual whether they sent the communication. And if it's not someone we can immediately reach out to, get a second opinion: ask a friend, a colleague, or your supervisor whether they think the message seems suspicious.

Wendy Battles: That's a great idea, James. And the fourth suggestion is to report suspicious messages as phishing. Now, we've talked about reporting before. We did a whole episode last year about reporting. It's really important to report even if you're not sure, and if you realize you've clicked on something you shouldn't have, we need you to report that too. The Information Security Office is not here to be sheriffs. We're here to be partners with you. And when you report something suspicious, it helps keep those messages out of other people's inboxes. It also helps us better recognize these messages in the future. So, you are doing a service to everybody when you report suspicious messages. 

James Tucciarone: And what is it we like to say, Wendy? It's better to be safe, not sorry. But it's not just email messages we need to watch out for. It's also web pages, and that includes fake login screens. We've seen this at Yale too. Cybercriminals create fake login screens, such as copies of our Central Authentication Service, or CAS. These login screens can look almost identical to our real pages. And this goes back to the idea of being skeptical and also the idea of trusting your gut. It's so important that we verify the URL of our login screens and proceed with caution anytime a message is trying to get us to enter our NetID credentials. 

Wendy Battles: Proactive, James. It's up to us to be proactive. And as you said at the beginning of this, these are simple things we can do. They aren't hard. They only take a couple of moments. But these proactive steps can make such a difference in keeping our Yale data and systems safe. And by extrapolation, the same thing is true at home. It's about being mindful both at work and at home. Oh, this was such a good conversation. And it reminds me that it's the little things, those small, simple actions, that can make a difference in staying safe online. Now let's hear about our buzzword of the day. 

 [music]

James Tucciarone: Here's the buzz on an organization's Security Operations Center. At its core, the Security Operations Center, often called the SOC, serves as the eyes and ears of an organization's cybersecurity infrastructure. Think of it as the nerve center within an organization, where highly skilled analysts and cutting-edge technology work together to safeguard our valuable digital assets by monitoring, detecting, and responding to cybersecurity incidents in real time. The members of the SOC team are our first responders when it comes to digital threats. Let's break down a few of the most common responsibilities of a SOC team. Monitoring for and detecting threats is a key part of the work done in the SOC. The SOC team continuously monitors network traffic, system logs, and security alerts for any signs of suspicious activity. 

They keep a watchful eye on our digital landscape and are ready to pounce on anomalies that may indicate a potential security breach. When threats are detected, the SOC springs into action and responds to incidents that may arise. Rapid response is crucial in the world of cybersecurity, and the SOC is at the forefront of the battle. Whether it's reviewing suspicious emails, containing a malware outbreak, or investigating a data breach, the SOC's incident response team is ready to tackle any challenge head-on. And the SOC team doesn't just react. They stay ahead of the curve. They research the latest cybercrime tactics so they can continue improving the organization's defenses. The SOC is also constantly evolving, refining its processes, and adopting new technologies to stay one step ahead of emerging threats. 

This ensures the SOC can proactively defend against potential attacks and shore up vulnerabilities before they're exploited. Ultimately, the SOC is our bulwark against digital threats. They minimize our risk by detecting and stopping threats quickly, react swiftly to prevent downtime and minimize losses, and help our organization constantly improve its overall security posture. But even with the skill, dedication, and hard work of the SOC team, we still have a responsibility to be vigilant for the threats that can't be caught. Here is a reminder of some common tips to help us continue to stay cyber safe. Be wary of unsolicited links and attachments in emails, especially those from an unknown sender. Use multi-factor authentication for accounts that contain sensitive information whenever possible. Trust your gut; if something doesn't seem right, it's always better to be safe, not sorry. And report anything suspicious to the Information Security Office. And don't forget to keep listening to the Bee Cyber Fit podcast, where we simplify cybersecurity and help you to be aware, to be prepared, and to be cyber fit. 

Wendy Battles: James, we covered a lot in our episode today. I want to say this was pretty action-packed. Let's review a few calls to action to help keep the Yale community at the height of their cyber fitness. First, we encourage you to review our FUDGE model for red flags. James referred to the FUDGE model earlier. It will help you identify tactics that cybercriminals use to try to trick us. The more familiar you are with it, the easier it is to outsmart them. Number two, we encourage you to sign up for our May phishing challenge. If you are part of our Yale community, we are doing a five-day challenge from May 13th through 17th. Each day, there is a simple task that you'll complete, roughly 10 minutes or less, to help you build your cyber fitness and your cyber know-how. It's a lot of fun, and at the end, we've got some prizes up for grabs too. It's a great way to keep focused on this idea of building your cyber muscles.

And finally, we encourage you to review our Spring Bee Cyber Fit newsletter. Among other really interesting and timely articles, it talks at length about how to report suspicious messages, just the thing we were discussing a little bit earlier. You'll find the article interesting and informative, with very specific things you can do with regard to reporting. 

James Tucciarone: Wendy, those are some great actions for people to take following this episode, to help them keep building their cyber muscles by being able to recognize phishing messages, find out how they can report suspicious messages, and test their skills with our May phishing challenge. But that's all we have for today. So until next time, I'm here with Wendy Battles, and I'm James Tucciarone. We'd like to thank everybody who helps make this podcast possible. And we'd also like to thank Yale University, where this podcast is produced and recorded.

Wendy Battles: And thanks to all of you for listening. We truly appreciate it. And remember, it only takes simple steps to be cyber fit.

 

[Transcript provided by SpeechDocs Podcast Transcription]
