Cybernomics Radio

#40 Securing the Machines That Keep Us Alive: A Conversation with Christian Espinosa

Bruyning Media Season 2 Episode 40

Christian Espinosa, founder of Blue Goat Cyber, reveals the critical vulnerabilities in medical devices and how his company is working to secure the technology that keeps patients alive. After surviving a near-fatal health crisis that was diagnosed using a medical device, Christian dedicated his career to ensuring these life-saving technologies remain secure from increasingly sophisticated cyber threats.

• Average hospital bed connected to 14 medical devices, creating numerous attack vectors
• Medical device hacking could lead to patient harm or death through manipulation of pacemakers, surgical robots, or diagnostic systems
• Christian personally experienced the importance of secure medical devices after six blood clots nearly took his life
• Hospitals represent "hostile environments" from a cybersecurity perspective with poorly segmented networks
• AI-enabled medical devices introduce new vulnerabilities through potential data poisoning attacks
• Securing medical devices from the ground up during development is 90% more effective than adding security later
• FDA and regulatory bodies are only now catching up to security standards Blue Goat has implemented for years
• Medical device manufacturers often delay security considerations until just before submission, causing costly delays

Listen to Christian's MedDevice Cyber Podcast and visit bluegoatcyber.com for more information on securing medical technology and protecting patient safety.


Josh's LinkedIn

Speaker 1:

Christian, thanks for joining. A lot of my listeners went back; I saw that after about six months or so they went back and watched and listened to the first episode, which just happens to be the first time that I ever talked to you. So thanks for coming back on what is now Cybernomics. Back then it was Security Market Watch, but you were the first guest on my podcast, so it's such an honor to talk to you again.

Speaker 2:

Yeah, I remember that. I was in Idaho when we recorded, so I appreciate you having me on as the first guest. I feel honored for that. Glad to be here today.

Speaker 1:

Good, good, good to see you. You know, before that, I'm thinking back, it was actually even before the Security Market Watch days. Back then it was Cyber Chomps, that's right. Now I'm remembering, that was such a crazy idea. I was like, I'm going to read a book a week in cybersecurity and then interview the author. So I read your book. Remind everybody that's listening, what is the name of that book? And yeah, let's go from there.

Speaker 2:

The name of the book is The Smartest Person in the Room, and it's really about developing emotional intelligence for highly rationally intelligent people. In cybersecurity, there are a lot of people with high IQs but almost no EQ skill or emotional intelligence. So it's about raising the EQ to complement the IQ.

Speaker 1:

Yeah, I read that whole book in one week, I remember. I felt so called out, because I feel like I'm one of those people, and my girlfriend always tells me, you know, you're a smart guy, but sometimes you don't have EQ, you can't read the room. And every time she says that I'm like, maybe I should go back and read Christian's book one more time, because it's really relevant today. So whenever you pitch this book to people or you offer it to them, do they typically feel pretty offended, or are they kind of receptive to it?

Speaker 2:

I haven't been pitching it per se. I mean, on social media I have a social media person that posts reviews and kind of soft pitches it. But I'm very careful about saying, you know, you don't have any IQ or, sorry, EQ skills, so you might need to read my book. I position it as, if you hit a glass ceiling in your career, and it's a tech career, then you might want to check out my book, because often that glass ceiling is because of a lack of awareness or people skills. And that's the way I typically position it.

Speaker 1:

Huh, that's a really EQ way of putting it. You know, somebody like me who may not have a ton of EQ, I might see that somebody needs that book and I'm going to be like, dude, you have no EQ, you can't read the room, you should read my book.

Speaker 2:

And they're probably not going to read it, if that's how you... yeah, exactly. They're like, dude, why would I read your book?

Speaker 1:

If this is the way that you approach me? You obviously have no idea what you're talking about... but you do, Christian. Yeah, I'll go back and read that book again. Actually, I enjoyed it very much. And you had this idea that stuck, the idea of Kaizen, which I found very cool. It was like the Japanese way of just kind of going with the flow, the idea of incremental progress, right? Not being too hard on yourself, not being too hard on your employees and the people around you, but having a lot of grace and forgiveness for yourself. So could you tell me a little bit about Kaizen? Only because I know somebody's listening to this right now, and I know that typically, if they're a security person, they're probably super hard on themselves, and I think that idea of Kaizen gives people a lot of grace and forgiveness and ultimately leads to achieving great success. Obviously, I'm not the authority on this. You are. So what is Kaizen?

Speaker 2:

Kaizen, like you alluded to, is a Japanese word that means constant improvement, or CANI, continuous and never-ending improvement. And there are a couple of facets of Kaizen that I think are important. One of them is that a lot of people are afraid to do something new because they think they have to master it. And from my experience in life, if you adopt the philosophy of Kaizen, it gives you the courage to take the first step and realize it's okay; if I have a misstep, I can correct along the way. But now I know which direction is illuminated, at least the next couple of steps, and then I'll take the next step, and it might be illuminated a little differently. So it gives you that courage to take the first step.

Speaker 2:

And also, as you alluded to, from a forgiveness and grace perspective, in life it's easy to beat yourself up. I know I used to beat myself up a lot. I probably still do. Sometimes I look in the mirror and I'm like, I should be doing this, this, this and this. And if I just step back and look at Kaizen and say, you know what, I'm making improvements, and if I look at my life five years from now, I'm sure I will have massively improved. We tend to overestimate what we can do in a year and underestimate what we can do in three to five.

Speaker 1:

Yeah, it's like boiling the frog, but in an optimistic way, right? You're not boiling the frog to the frog's detriment. You're doing little things, day after day, one step in front of the other, until one day you wake up and, hey, you've got a podcast, you've got a thousand more followers, you've got that position that you wanted.

Speaker 1:

And I've applied that concept of Kaizen since our conversation. And when I say I read your book, I really did, and I took it to heart. I applied that concept of Kaizen to the point that, if you go on my TikTok (actually I only have like four videos on my TikTok), one of them is me talking about Kaizen, and a lot of people reached out and commented and liked it. And when I say a lot of people, I mean, for me, a lot of people on TikTok is like 45 people, right? But it's more people than have ever reached out to me on TikTok. And I applied it over the years, and so thank you for that. That was a really good conversation, and I applied it to my life and I saw incremental improvement. And that was about, what, three years ago? Yeah, it was about three years ago. Wow. A lot has changed since then. Speaking of change.

Speaker 1:

You've got a brand new company, Blue Goat Cyber, and you are specializing in the medtech field, which is really interesting to me, because medical devices are kind of important, you know. They sort of keep people alive, and that, to me, is a very important thing.

Speaker 1:

And so, you know, with these cyborgs running around, people with electronic hearts and pacemakers and all these things, and my dad was recently diagnosed with a not-so-good disease, not terminal, I'm thinking about medical devices more than ever. And if one of these things were to get breached, a pacemaker were to get breached or hacked, because everything's connected to the internet these days, lots of things can go wrong. So why did you form Blue Goat Cyber? Why did you go the medtech and medical device security route, as opposed to pretty much anything else that you could be working on? I mean, you're a really smart person, you're one of the geniuses, I think, and I know a lot of people don't like to be called a genius, especially the geniuses, but I'm going to say it: you're a really smart guy and I think you're a genius. You could have applied your mind to so many things, but why medtech?

Speaker 2:

My wife used to be a cardiac nurse, so she works for my company now and she understands the importance of these devices. My head of sales used to be a cardiac surgeon, so he understands all the language from a medical perspective and the importance of these devices, and he's constantly saying, I wish I had this sort of device when I was doing heart surgery. So I haven't surrounded myself with just a bunch of cybersecurity geeks. I have people that know the industry, and it's a very challenging industry because you've got the FDA, the MDR, all these requirements. You've got these very complex products where you have to test every type of interface and wireless connection and entry point. You have threat modeling. You have to do static application security testing. So we have the ability to do all that, but we also have the perspective of these devices. And, from a value proposition perspective, we offer a fixed fee. We've been doing this since 2014. I did it in my first company, so I know how much effort it takes, so we do a guaranteed fixed fee. A lot of these innovators are looking for investment from investors, and if I can give them a fixed fee, they know what to ask for when they add the cybersecurity. And we've done over 150 submissions so far. We haven't had any rejected by the FDA or any of what are called deficiencies come back, and we guarantee our work. We partner with the device manufacturer all the way through their clearance, and sometimes it takes the FDA three or more months to clear the device, but we are there with them to make sure the device is secure and it gets on the market. And then afterwards we can help them monitor the device, what's called post-market management.

Speaker 2:

I sold my last company in December of 2020. We did medtech cybersecurity in that company, but it wasn't our sole focus. And while I was working for the parent company, in February of 2022, I almost died from six blood clots. So if it wasn't for a medical device, I wouldn't be here. A portable Doppler ultrasound was able to quickly diagnose me. And after getting through that depression, you know, at first I was grateful to be alive, but then I felt caged. I was taking these blood thinners, and I just decided to stop taking the blood thinners, reclaim my health, get all my blood work done, get another test on my leg. And that simple decision, it wasn't an easy decision, but it was a simple one, gave me the courage to start a new business. And I thought, well, maybe the universe is telling me this is what I should focus on, since I wouldn't be here without a medical device. If that device was recalled because somebody hacked into it, or it gave a misdiagnosis, then, yeah, I wouldn't be here.

Speaker 1:

Oh, okay, that's a good story. I think that's a really good reason. And that's not the universe just knocking on the door; that's the universe, I feel like, breaking the door down and saying, dude, you need to do this. You are specially and uniquely qualified to do this. So what would happen if a medical device were hacked? I mean, is that something that people really should be worried about, or is it kind of pie in the sky, you know, are we just going to freak people out? Or are the risks real?

Speaker 2:

The risks are super real. On average, there are 14 devices per hospital bed. A lot of those have vulnerabilities, and we've really just started, as of the end of 2023, raising the bar for cybersecurity. Before then, pretty much anything could get on the market. And we're migrating towards more and more surgical robots, as an example, and those are going towards autonomous surgical robots. So imagine if somebody hacks into the surgical robot that's performing surgery on your spine; they could paralyze you. Or, like you talked about, Dick Cheney had his defibrillator removed because there's a legitimate threat: someone could wirelessly connect to it and shock him to death.

Speaker 2:

So yeah, and it's not just those systems, it's also diagnostic systems. In the industry we call it IVD, in vitro diagnostics. It's a system that takes a sample of your tissue or blood and tells you what's wrong with you and recommends a course of treatment. So if your blood has something like sepsis, which means your blood is toxic, and that device malfunctions or there's a delay or misdiagnosis, then you could die within 24 hours as well. So I think the risk is super real. I know a lot of devices have been recalled, more than most people realize, and I think there have been some incidents that have been sort of covered up.

Speaker 1:

Why are people not freaking out about this? I mean, this sounds like a big problem.

Speaker 2:

Well, if you think about it, these devices are on a hospital or healthcare delivery organization's environment, which, from my perspective and my team's perspective, we consider a hostile environment, because just about every day you hear of another big hospital data breach. So these devices are on the hospital environment and they're typically not segmented. You can typically get to them from, like, the kiosk computer where a guest can check the internet, or you can go to the chapel and plug into the Ethernet port and get to these medical devices. So it's a big deal. They're constantly being bombarded and attacked.

Speaker 1:

Let's say one of these devices gets hacked at a hospital. Who's on the hook for that?

Speaker 2:

Two parties. One is the medical device manufacturer themselves, the company that created the device, and the second is the hospital as well. And there's a big push now: a medical device in the US has to be cleared or approved by the FDA in order for the device manufacturer to sell it. In addition to that, though, we've seen a shift where, because of all these data breaches at hospitals, hospitals are asking for more scrutiny of the device than the FDA requires, because they don't want to accept the risk of putting it on their environment. Because, imagine, you have a patient come to your hospital, one of those 14 devices connected to their bed is compromised, and something happens to that patient. Of course, it's not going to look good for the hospital or the device manufacturer.

Speaker 1:

When these devices are hacked or breached or whatever, what are some signs that the hospital can look for, or that the patients themselves can look for?

Speaker 2:

The signs are typically anomalous activity and inconsistent results. There are a lot of AI-enabled medical devices now, and if the model is skewed or there are model evasion attacks or data poisoning attacks, that's something else we have to look at.

Speaker 2:

So it's really about understanding the risk to the device and what some of those indicators are.

Speaker 2:

And one of the requirements for medical device manufacturers is that the device manufacturer has to provide labeling of all the risks.

Speaker 2:

So if I'm the hospital and I purchase a device, I know what the risks are, and I know, like, if there's a tamper-proof seal and it's broken, maybe I should consider the device compromised. I also get what's called the software bill of materials, so I know all the third-party components that make up the device. So if something like Shellshock happens again, I can look at my device's bill of materials and say, oh, this is vulnerable to that, and maybe I should contact the manufacturer, take it off my environment, or have a tech come out from the manufacturer to update it. And that's the other challenge: these devices are on a hospital network, but the hospital administrators don't update the devices. They typically have embedded operating systems. So now you have a device that you can't control, kind of like an IoT device to a degree, and you have to rely on the manufacturer to get a patch out to the device. And often the devices are air-gapped and there's a USB port, so they have to send somebody to the hospital.
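To make the software bill of materials idea concrete, here is a minimal sketch, assuming a CycloneDX-style JSON SBOM (a "components" list with "name" and "version" fields) and a small, made-up advisory list standing in for a real vulnerability feed. It is illustrative only, not Blue Goat Cyber's tooling.

```python
import json

# Hypothetical advisory list: package name -> versions known to be vulnerable.
# In practice this would come from a vulnerability feed such as NVD or OSV.
ADVISORIES = {
    "bash": {"4.3"},         # e.g. a Shellshock-era bash build
    "openssl": {"1.0.1f"},   # e.g. a Heartbleed-era OpenSSL build
}

def flag_vulnerable_components(sbom_path: str) -> list[str]:
    """Return 'name version' strings for SBOM components matching an advisory."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for component in sbom.get("components", []):  # CycloneDX-style component list
        name = component.get("name", "").lower()
        version = component.get("version", "")
        if version in ADVISORIES.get(name, set()):
            findings.append(f"{name} {version}")
    return findings

if __name__ == "__main__":
    # "device_sbom.json" is a placeholder path for a device's published SBOM.
    for finding in flag_vulnerable_components("device_sbom.json"):
        print("Potentially vulnerable component:", finding)
```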

Speaker 1:

Well, is there no way to do this over the internet?

Speaker 2:

Well, there are, but, as you know, anytime you connect something to the internet, even if it's supposedly for secure updates, that opens up another avenue for an attacker to get into your device.

Speaker 1:

So that's probably why they have to recall them: either they have to go to the patient or to the hospital, or you can send the device back. But I'm imagining, if you've got a pacemaker, when they recall it you can't just go, hey, all right, like Tony Stark, I'm going to take my heart out and just put it in a box and send it to you. That probably isn't going to work, huh?

Speaker 2:

Yeah, that's the dilemma, right? We've got pacemakers, neurostimulators, all kinds of implantables that often have wireless functionality so the doctor can get diagnostic data off of them, or they can be updated. And, yeah, you have to take them out of the patient if there's a problem with them, or the patient has to accept the risk. Like the scenario you gave: if I have a defibrillator, I know it's vulnerable to somebody wirelessly connecting to it and shocking me to death. What do I do? Do I get it taken out, or just accept the risk, knowing that it may not happen but it is possible, right?

Speaker 1:

How possible? If you were to put a number on it: 10% likely, 80% likely?

Speaker 2:

I would say 99% possible. I won't say likely, but most of the attacks on the internet are non-directed attacks; it's cyber criminals just trying to find something vulnerable to install ransomware on, for instance, to make money. A pacemaker would have to be a deliberate, directed attack, because I would have to be in proximity, relative proximity, to the person, I would have to have the right tool, I would have to follow them around. So it is totally feasible, but somebody would have to be extremely motivated. And this is why, like, with Dick Cheney, it's a nation state that would attack him.

Speaker 1:

How close to Dick Cheney would they have to be in order to pull off an attack like this? And here's the problem with cybersecurity: whenever you ask these questions, I think that there are hackers listening to the podcast, and I'm very careful not to give away any dangerous information, but I'm going to ask it anyway. How far away does somebody have to be from Dick Cheney in order to shock Dick Cheney to death with his internal shock machine?

Speaker 2:

Yeah, well, that's a good question. I guess it depends on your setup. A lot of these defibrillators and things use Bluetooth Low Energy, so you're supposed to be relatively close, because they don't have a massive battery life. However, I know people have connected to Bluetooth and sniffed Bluetooth signals from up to a mile away using what they call a BlueSniper rifle, basically a high-powered directional antenna. So I would think it's not a few feet; I would think you could pull it off from probably a couple hundred feet.

Speaker 1:

Wow, a BlueSniper rifle. I want one; that sounds cool. Okay, talking about AI, you mentioned AI a second ago, and you mentioned a term that I've heard a bunch of times in the past few weeks and, I'll be honest, I have no idea what it is: data poisoning. I know it has something to do with AI. What is data poisoning? That sounds bad.

Speaker 2:

Yeah, AI is based on data in and data out. So if I poison the data going into the AI, I feed it bogus data to make it produce bogus results coming out of the system. And one of the challenges with AI is training the model, and the less the model is trained, the more susceptible it is to data poisoning. If I'm training my AI model to detect cancer, and I feed it a million samples of tissue, let's say of breast tissue, that don't have cancer, and I feed it a couple hundred that do have cancer, that model is going to be skewed. So you have to be careful about how you train the model, and the poisoning is taking advantage of that model not being trained properly as well, because I need to train the model to detect bogus information, garbage, coming into it also.
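To make the data poisoning idea concrete, here is a minimal, self-contained sketch: a toy nearest-centroid classifier on made-up one-dimensional data, not any real medical model, showing how flipping a few "healthy" training labels to "cancer" drags the learned class centroids and changes a borderline prediction.

```python
# Toy illustration of label-flipping data poisoning (not a real diagnostic model).

def train_centroids(samples):
    """samples: list of (feature_value, label). Returns the mean feature per class."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    # Assign the class whose centroid is closest to the observed value.
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# Clean training data: low feature values are healthy, high values are cancerous.
clean = [(0.1 * i, "healthy") for i in range(10)] + \
        [(2.0 + 0.1 * i, "cancer") for i in range(10)]

# Poisoned copy: an attacker relabels the highest "healthy" samples as "cancer",
# dragging the "cancer" centroid down and shifting the decision boundary.
poisoned = [(v, "cancer") if label == "healthy" and v > 0.65 else (v, label)
            for v, label in clean]

for name, data in [("clean", clean), ("poisoned", poisoned)]:
    centroids = train_centroids(data)
    print(name, "-> borderline sample 1.4 classified as", predict(centroids, 1.4))
```

With the clean data the borderline sample is called healthy; with only three flipped labels it is called cancer, which is the kind of skew described above.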

Speaker 1:

So garbage in, garbage out. 100%. And who's doing this data poisoning? I've heard of state actors. Who else would be doing this, and why would they even want to do it?

Speaker 2:

I think the state actors would do it because then they have an advantage. Just like if I can take out industrial control systems in the United States or cause havoc, then if I'm an adversary of the United States, I have an advantage. So I think that is a driving factor. The possibility is great on that, but the likelihood is not too high; someone's got to have the intent. I think it also just arbitrarily happens, because a lot of people don't understand AI and they're feeding it the wrong information and it's making the wrong decisions. As an example, a lot of people use ChatGPT. One of the challenges with AI is it will make up the answer; it will not tell you it doesn't know. So imagine a medical device: you feed it some data, it makes up an answer. So you're sort of poisoning the model, because we haven't trained the model enough to say "I don't know", because humans have a hard time saying "I don't know".

Speaker 1:

Yeah, yeah, I have that problem. It's hard to say "I don't know." I noticed that with ChatGPT, and sometimes I have to tell it, do you really know? Are you pulling my leg? Be serious with me, be honest. And sometimes it'll be like, yeah, you know, I kind of made that up, and here's the truth. So yeah, if you're listening to this, you can go right now and check out ChatGPT and try it for yourself.

Speaker 1:

You'll be astonished that it will make some things up. But now I think it's becoming a little bit more honest. If I ask it something really straight up, like, what did my dad eat for breakfast, it'll say something like, I don't know your dad and what he ate for breakfast, but the typical person eats blah, blah, blah, and it'll give you some answer, which I guess is better than nothing. But with data poisoning, I feel like it's not a case of something being better than nothing; you really want clean data in there. And now, the intersection of AI and medical devices: how does that affect your business, how does that affect your clients, your customers, and what are some of the new vulnerabilities or new risks that AI would pose specifically to medical devices?

Speaker 2:

It's affecting quite a few devices. We see what's called AI-enabled software as a medical device, and typically we see a lot of these used for image enhancement. So you can take an MRI or an ultrasound and run it through the AI; the AI will highlight problems with your vascular system, for instance, and a doctor will make a diagnosis based on that. So the challenge is, if that model is compromised and a doctor is making a decision based on the results of the AI model, then he could miss something or just completely make the wrong diagnosis. So it's important these models are trained properly.

Speaker 2:

And the other thing to consider is that not every country shares their medical data. So if I'm trying to train an AI model to detect vascular disease, as an example, I'm kind of limited in the data I can feed the model, because I can maybe get it from the United States, but can I get it from Brazil? Can I get it from Mexico? Can I get it from Europe? Probably not. So now your model is already tainted to a degree; it has a bias, because it doesn't have an accurate sample of the population.

Speaker 1:

So let's say there's somebody, a medical device manufacturer in Brazil, and they call you up. They say, hey, Christian, we're about to roll out a bunch of medical devices, and we're concerned about AI-based attacks. What do you do then? Do you fly out to Brazil and take a look at these devices? Where in the supply chain do you fit in?

Speaker 2:

We fit in when the medical device manufacturer, like the one in Brazil, is ideally still developing the requirements for the product. Unfortunately, most device manufacturers kind of forget about cybersecurity. Somebody going through a checklist says, oh, there are some cybersecurity requirements here, and then they contact us like two months before they're trying to get it approved by the regulatory authority. We always find a lot of things wrong, and it delays the submission, frustrates investors, frustrates the innovator, and ends up delaying their time to market and costing them a lot of money. So we have to get involved, because I don't think anyone cares about cybersecurity unless there's a compliance driver, which is the regulatory authority, either the FDA in the United States or, for example, the Medical Device Regulation in Europe. So we have to get involved before that device is cleared, and the medical device manufacturer is responsible for choosing a vendor like my company to help with that cybersecurity.

Speaker 1:

So if Dick Cheney's medical device company had contacted you beforehand... I don't know why we're picking on Dick Cheney. Everybody picks on Dick Cheney, I guess because it's Dick Cheney. But would it be an accurate characterization to say that if a company like yours, people like you, got in at the ground level and built security into that device, he would be much safer, as opposed to, you know, these companies just sort of putting these things out to market?

Speaker 2:

I would say upwards of 90% safer. I mean, there's no such thing as perfect security. But, you know, if we were to evaluate it through a penetration test and static application security testing, and look at the third-party components that make up the device, then absolutely the device will be much, much more secure, especially if the manufacturer contacted us when they're still developing the requirements, because then we can actually include the cybersecurity requirements in the design, versus trying to bolt them on at the end, which is what most people do.

Speaker 1:

You also mentioned earlier that the hospitals might be a little bit more strict than the FDA. Are you more strict than the FDA, or are you depending on the FDA to pretty much give you the guidelines that are necessary to protect these devices?

Speaker 2:

Yeah, it's interesting. In my first company we did medtech cybersecurity, as I mentioned, and we did a lot of the things that the FDA is just now asking for. Because, from my perspective, if we sign off on one of these devices and somebody hacks into it, my paperwork is tied to that device. It does not look good for my company, especially if somebody gets killed. It won't look good for me either, and I certainly won't feel good about that. So I make sure we are very accurate, thorough, complete and diligent in all of our testing, because it's not just about compliance, it's about the real impact of these devices and the real risk, which is patient safety.

Speaker 1:

What do you want your legacy to be?

Speaker 2:

As an entrepreneur, I know how challenging it is to bring a product to market or build a company, and one of the challenges is that a lot of these innovators don't understand the road in front of them, especially from a cybersecurity perspective. So part of my legacy is to raise that awareness so they're not caught off guard, because we've had companies come to us two months before submission to the FDA. We found 4,000 vulnerabilities; they had to fix about 2,000 of them, and eight months later they still hadn't fixed them. So they're scrambling, and this is a product that took, on average, three to five years to build, and if it's delayed eight months, it's going to upset the investors, and the innovators that came up with the product are going to be upset. Everyone's going to be frustrated. So, from a legacy perspective, I want to try to solve that awareness problem so it costs less money to get the product on the market and they get to market much faster, because there are a lot of outstanding inventions that I think will really help elevate humanity if we can get them to the market faster and avoid those delays.

Speaker 1:

Christian Espinosa, Blue Goat Cyber. Reducing risk, and what else? You are increasing revenue for your clients. You are saving lives. You are raising awareness. And, as a byproduct, you'll probably save Dick Cheney from being assassinated with his internal shock machine, as I am now starting to call it. Is there anything else that you want our listeners to know? And if people want to find you, how can they find you?

Speaker 2:

I think one of the things that's important to understand is that we have a lot of what I call traditional cybersecurity people come to me looking for jobs, and to me, traditional cybersecurity is way less risky. Typically we're concerned about information disclosure or someone stealing your credit card information or your health records, which is very different from medtech cybersecurity, because we're talking about somebody potentially killing somebody or maiming them or injuring them. So that's one thing I like to leave people with: the industry is very different. And then people can find my company at bluegoatcyber.com, or on LinkedIn and YouTube. We have our own podcast, we have webinars monthly, and we're doing everything we can to become the market leader this year. That's the goal.

Speaker 1:

What's the name of your podcast?

Speaker 2:

The MedDevice Cyber Podcast.

Speaker 1:

Thanks for stopping by, Christian. Always good to see you, and I'll see you on LinkedIn. I see your stuff all the time. I read your posts, I comment and like whenever I can, and I'll check out your podcast. Thanks again for coming on Cybernomics.

Speaker 2:

Yeah, thank you for having me on, Josh. I appreciate it.

Speaker 1:

All right.