Conversations on Applied AI

Donna Herbel - Your Path to Becoming an AI Tech Hobbyist

Justin Grammens Season 4 Episode 8

The conversation this week is with Donna Herbel. Donna is the founder and chief igniter of Blue Phoenix Learning, where she provides speaking, training, coaching, and consulting services on the topics of leadership, using technology to streamline processes and improve communication, and talent development strategy. Additionally, she is the co-founder and COO of Savii, where she works to advance Savii's mission to provide employer-sponsored health insurance options to employees that are simple to understand and implement. Donna says that her purpose is to be a force for good, supporting individuals and organizations in achieving their most compelling goals.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to make future Emerging Technologies North non-profit events possible!

Resources and Topics Mentioned in this Episode

Blue Phoenix Learning
Savii
ChatGPT (OpenAI)
Claude (Anthropic)
Perplexity
Synthesia
Microsoft Copilot
Google NotebookLM
The Worlds I See by Fei-Fei Li
99% Invisible episode on the ELIZA effect
Applied AI Conference, Minneapolis-St. Paul

Enjoy!
Your host,
Justin Grammens

[00:00:00] Donna Herbel: I think, number one, if you're listening to this podcast because you're interested, if you're kind of a tech hobbyist like me, and that's a safe place to be, where you acknowledge you just don't know everything: number one, create the space to learn. And I always recommend, find 25 or 30 minutes in your calendar or in your week and just decide to use a tool that's available.

AI tools are showing up in any program I mentioned, you know: it's baked into Messenger, it's baked into your Google Docs, it's baked into your Microsoft, it's baked into your word search. Like, just try to use it.

[00:00:29] AI Speaker: Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning.

In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.mn. Enjoy.

[00:01:00] Justin Grammens: Welcome, everyone, to the Conversations on Applied AI Podcast.

Today, we're talking with Donna Herbel. Donna is the founder and chief igniter of Blue Phoenix Learning, where she provides speaking, training, coaching, and consulting services on the topics of leadership, using technology to streamline processes and improve communication, and talent development strategy.

Additionally, she is the co-founder and COO of Savii, where she works to advance Savii's mission to provide employer-sponsored health insurance options to employees that are simple to understand and implement. Her purpose is to be a force for good, supporting individuals and organizations in achieving their most compelling goals.

I love that purpose and mission, Donna. That's awesome. So thank you so much for being on the podcast today.

[00:01:39] Donna Herbel: Thrilled to be here, Justin. Thanks for the invitation.

[00:01:42] Justin Grammens: All right. Sounds like you got a couple different things going on here with your Blue Phoenix Learning company, and then you're also sort of in the depths with a company that sounds to be doing some really cool stuff around employer-sponsored health insurance options, kind of bringing AI into that industry. And I know we'll talk about both of these, but oftentimes what I like to do is ask people, okay, cool, we know where you are today. Maybe draw us sort of a through line, or just maybe how you got to where you are today.

[00:02:05] Donna Herbel: I love that. Thank you, Justin. So a through line might be optimistic; it's a really spiraled path. So I come to where I am today very honestly. I'll start with, you know, the long backstory: I grew up in the rural Midwest, where we are today. And in our teeny town, because of my mother's work in education for the state, we were one of the first families with a computer.

So I grew up on Zork, that'll resonate with a couple of people, uh, and a little game called Zaxxon. And throughout my time, my parents owned a small A&W franchise when I was growing up. So time passes. The restaurant industry was my part-time job that accidentally became my career. So I worked for a couple of decades specifically with Perkins restaurants, and overall throughout the hospitality industry with the Council of Hotel and Restaurant Trainers, narrowly focused in education, which was corporate training, the HR function, and then L&D. But my background in technology really was important through that time, because I've worked with a lot of systems development, systems implementation, and just, like, helping the workforce use tools well, so they could do, you know, a 36-hour life in a 24-hour day by letting technology help them perform better, communicate better, be more efficient.

So I call myself a tech hobbyist. I've been a tech hobbyist and I took a step from that corporate learning and kind of started this second stage with Blue Phoenix Learning, which was meeting the need for technology training, leadership education for what's evolved into the new normal where people are time poor, there's more information than you can possibly process.

And as a result, it's really hard to get actionable insights and grow a relevant knowledge base. So Blue Phoenix Learning took some work that I've done for a couple of decades and translated that into coaching and consulting. And through that work, because I am a tech hobbyist, I met Mary Margaret Irish with Savii.

She's got this big vision. We were actually having coffee, talking about how some of the healthcare selection systems were working, as well as having a conversation around how AI can really help companies and businesses solve complexity in a way that's never been available before. So that evolved into, hey, let's build something cool.

And that's what we've been doing with Savii. So like I said, it's a spiral, probably not a straight line, but hopefully that helps.

[00:04:26] Justin Grammens: Yeah. Yeah. Well, it sounds like, you know, I think it's interesting. You were talking about tools, and tools for employees or just general people to use. And I was thinking like, yeah, no one really showed us how to use the internet.

It was just like, okay, you know, you get this little disk from AOL, right? And you kind of dial up, and everyone's just sort of figuring this out along the way. And like, how does email work? And what is a domain name? And, you know, there's no user manual for this. And I feel like today is just another, you know, repeat.

And it's going to keep repeating with all sorts of new technology that comes out. Even the smartphone, for example. Like, no one knew what the App Store was to begin with. We were just kind of figuring it out. In fact, I even remember using the iPhone for the first time. And I was like, how do you actually turn this thing off?

You know, and it was like, Oh, you got to long press the button and then you get the swiper and all that sort of stuff. But it was like, there is no on and off button. So I just had to kind of figure out how to like learn that, you know, and of course we all kind of figured this stuff out. So I think there's something there around like what you've been giving back with regards to teaching people.

And I'm curious to know, like, maybe what were some of the first examples of what you've been doing at Blue Phoenix Learning, where you came into companies and, you know, they're kind of like deer in the headlights. They're like, I don't know what I should be doing or not. How have you been kind of walking them through this?

[00:05:34] Donna Herbel: Yeah. So great question. And what we frequently find is being able to have the safe conversation that says, where are we confident and where are we not confident? No one goes up to their boss and says, I don't really understand how we're using Microsoft Teams, or I don't know how we're using Google Workspace, or I just heard the company turned on Copilot.

I don't know what to do with that, right? So there's kind of this disconnect between technology solutions and platforms that have been implemented but not really baked into organizational processes. And so that's really where I work most. An easy example is the email inbox, which comes up a lot when I talk to senior leaders and they say, look, we're kind of like, what could a technology solution look like?

Well, when you look at how much of a work week is responding to calls for attention because we haven't stopped to say, okay, we have an email solution, but here's when this company or this organization, these work teams use email. We have a chat channel, we have a Slack channel, or, you know, there's a variety of different channels.

We have channels, but here's when you use channels; we have documents, and here's where the documents go so the team can use them. And there's this pause point of reflection I sometimes challenge people with: just go back through the last week. How much time did you lose? How many meetings did you have where you couldn't actually get to action because people weren't prepared, didn't have the same information, weren't level set?

How many times are you working late or taking emails and phone calls around dinnertime or at kids' hockey games, when that information could have been better constructed once instead of shared in six to ten emails and messages? So we start with not only how should work work, but how is work currently working?

And what we find with that is there just isn't a common language for when do we communicate, how do we communicate, how does data get stored. And then when we back into different platforms, and this happens pretty frequently, when we'll have like an executive group, I can think back to one session we did, you know, the training team says it would be really great to use some of these AI tools, because AI can do some really great stuff with, you know, cleaning up text, right, or image generation, or just making templates that are repeatable and really easy to use and easy to understand, which can save that workload.

And of course, the marketing department says, we've owned Synthesia for over a year, which is one of the solutions that would do that. Well, there's just a talking point of, can we get an organization's teams to have a consistent language on the tools they're using, and really back into what are the problems we're solving, how should work work, and then what technology is currently in place?

And sometimes we do a lot of training around that. People don't like to admit they don't know, or that they've actually had it. Yeah, that they've had it. They're like, well, I looked at it, but I wasn't sure, so I just never used it. And then you have somebody else who's like, they've never even been in the platform.

Why aren't they checking their notes? Well, we can fix that. So.

[00:08:31] Justin Grammens: Yeah, well, who is your typical audience when you go in and talk with people? I mean, is it somebody from marketing? Is it somebody from sales? Is it somebody from leadership, the C-suite, or all of the above?

[00:08:41] Donna Herbel: Yeah, it's usually the C-suite. So generally what I find, at least most recently, especially with the explosion of the AI tools, is, here's what's available.

How can it work? I've been doing a lot more work in lunch and learns or even half day sessions with executive teams or leadership teams representing multiple departments, just helping everybody understand in one conversation, what are we talking about and what's possible and how could it work and more specifically, how should it work?

A lot of those conversations have been really siloed and so there's enough curiosity where leaders want to be able to have more of a general dialogue. And really talk about it through the filter of the common business. So, and then I do some work with teams specifically. I do some training, hands on training for individuals who just say, look, I know this is coming, I know I need to use it, I don't know how to use it.

My organization doesn't have any training. Like I'm just here and I kind of go for it. But I don't even understand what it's supposed to do and I'm being asked to do stuff with it. And so we'll do some really block and tackle on the keyboard, you know, let's analyze a few reports. Let's apply it literally in our day to day work.

So those are the two buckets.

[00:09:51] Justin Grammens: Yeah, that's great. And I'm assuming, yeah, there isn't like a one size fits all, right? Probably each industry or each company kind of has more opportunities to use it.

[00:09:59] Donna Herbel: Yeah, I think different organizations have a different tech history, right? So when we talk about Applied AI, there's some organizations that are very advanced.

Because, Justin, as you know, and I always joke you're my favorite futurist, but it's true, you've known for a long time that AI didn't just arrive. AI didn't just happen with the launch of ChatGPT. So when you talk to the IT department, even as an executive or a professional, you're like, I've got questions about AI.

They're like, yeah, so do I. I've been doing this for a while. But that's not the same conversation. What they're really saying is, there's these new tools that are work tools, and I want to talk about those. And there's this big disconnect. So sometimes that's part of the conversation, too.

[00:10:38] Justin Grammens: Yeah. Yeah, I just started listening to this book on Audible called The Worlds I See, and it's by a woman named Fei-Fei Li.

She was at Stanford and then was at Google, like one of the pioneers of AI. And, you know, the book's pretty long. Most of these books are like, you know, 14 hours or so. I listen to them when I'm out running. But I listened to the first couple hours this morning, and it was about her history, about coming here as an immigrant, actually. So she came from China, and it's a really, really interesting story about how she grew up there and how she ended up moving here. But what made me think about this was, you know, she talked about the history of AI, right? And that, you know, there was actually a lot of work that was done in the fifties and in the sixties.

And then machine learning kind of got this bad rap as actually not the right way to do it. People were more around hand coding and expert systems, like they sort of thought that was the right way to go. So, you know, you're right when you're saying, like, people are like, well, we've been doing this for a while.

Yeah. They definitely have been doing it for a while, there definitely are certain industries, I think, that are far more advanced around computer vision, for example, right, you know, autonomous, you know, a lot of planes fly themselves, right, so there are a lot of things where we have a lot of these algorithms that have been sort of, you know, put into practice, but I think the unique thing that kind of happened was like, ChatGPT was like the killer app, it feels like.

It just democratized it: anybody could just sit down and start playing with it. I think that was kind of a game changer. At least that was my opinion. Now, real quick too, I will say, I actually had the guy who started a project called AI Dungeon, and he actually was using, like, GPT-2, and I interviewed him here on the podcast, boy, probably 2020, 2021, maybe, early on days.

So it was funny. I was watching this sort of get better and better over time. And I think, you know, GPT-2 isn't that good, actually. And it was just a matter of time, though, how it's going to get better and better. And now, you know, we're talking about four, and we'll be talking about five and six. Like, how do you see this?

There's a sort of exponential growth that's happening, you know, in time. And again, like I say, people have been trying this for 50 years, but I think we've made more advancements in the past 15 months than we have in the past 50 years, it feels like. How are we talking to organizations about that?

[00:12:41] Donna Herbel: Yeah, I think there's, well, there's two things. So number one is, you look to the future, the shift, the digital bridge, the technological bridge, is getting a lot. So you've got some early adopters who experiment, right? They're interested, they've been on ChatGPT. And then you've got others for whom that's wonderfully interesting, but they don't actually use it, or they're now dabbling with it and don't have as much confidence.

You know, I was talking with one gentleman who, you know, runs a really large team, and ChatGPT, he'd heard of it, but he was like, yeah, that's just a fad, until one of his employees started literally using ChatGPT to produce documents and reports and information in such a fast time. Projects he thought would take days, and all of a sudden he's got this employee who's coming up, like, 10 minutes later with really good stuff. That was our conversation, when he was like, I want to learn more. Can you help my team learn more? Because it's the famous, we have to do more with less. We've got a lot more work that's a lot more detailed, fewer hands.

And so we're trying to solve for capacity and this feels like this could do it. So we actually went down, we did a training for the team. And part of that was just, just starting with some ideation, right? Be confident about using it. Where does it go from here? I hesitate to use the crystal ball. It's fun for me.

I'm right as many times as I'm wrong, Justin. I think in the future, the big shift is going to be, one, organizations and companies have to decide what's their acceptable level of use. There was a lot of use with ChatGPT when it came on early. I talked with a lot of individual contributors early on; even in February 2023 I had started doing some training in the L&D space, learning and development.

And in short order, I started getting calls back from these trainers who said, hey, we know what we want to do. Like, we know the recipes, we know the standards of service. We feel like we can take this information and make it more easily accessible for end users faster than we ever have. Big upside. Except the company's concerned that our stuff isn't secure and so we've been told not to use it.

So I think, you know, as we look forward, for organizations that think they can dig their heels in and just not adopt or not put any kind of rules or processes in place, the reality is your people are probably using it and just not telling you. So I think that's one. And I think in the future it will be as assumed as being able to use email for work, right?

There were some industries that were slow adopters. I mean, there's still some who you literally have to send the fax. They won't take the email. Which blows my brain up. And in the future, I think you get to that same place with different AI tools. This assumption that the work is better because people have put their time and attention into the minutiae.

I think companies have to rethink that a little bit. I had a really interesting conversation with Damien Riehl, who's local to the Minneapolis market as well, from the Applied AI Conference as well.

[00:15:34] Justin Grammens: Yeah, he's been on the podcast, actually. Yeah.

[00:15:35] Donna Herbel: Damien, a big, big fan. Good guy. Phenomenal. Yeah. I've known him for a long time.

But one of the conversations we had early on was this idea that AI is really a 10x component. So people have to decide: do you want to 10x your impact in the same amount of time? Or do you want to put out the same amount of impact in a tenth of the time, right? So I think businesses and employers are going to start to rethink the impact, productivity, and physical time equation when they introduce AI.

Now there's the second part of concern, like, where does it go from here? There's a lot of concern about how it impacts employment and jobs. You know, there's a lot of citing of, I think it's the World Economic Forum 2020 report, which, by the way, was before ChatGPT, even, that pointed to a loss of, like, 81 million jobs with AI.

But what most people missed in that was that same article pointed out an estimated growth of over 90 million new jobs. So it's not that work stops, it's that it happens differently. So where do I think it goes from here? I think it's important to inspire curiosity and dabble and learn it as it evolves, because it's evolving, and there's some risks and there's some rewards, and we don't know what's going to happen next, but it'll be interesting.

[00:16:49] Justin Grammens: Yeah, I like how you mentioned people using it and not telling you about it. I mean, it is like this thing in my back pocket that I use, and I also feel kind of weird, like, should I disclose or not? So there was basically a statement of work for a client of ours, right? And I cranked it out in just a fraction of the time, right?

And I sent it over to the team, they took a look at it, and they were like, wow, you did this, like, that quickly? And I'm like, well, I used ChatGPT, right? I mean, I kind of said it under my breath. But the question is, why not actually just be open about it, right? Now, it didn't do all of it. This is the other caveat that I love to tell people.

You know, all these jobs are going to get lost or whatever. It gave me a good framework, right? And I was able to essentially, I guess, enhance what I had already created, and, you know, I already have processes and templates in place and all that type of stuff. So I wasn't working on it from scratch, but it definitely was able to help me frame it up and give a much more clear, concise, you know, body of text than I would have been able to do on my own. Or maybe not even better than I would have been able to do on my own, but, kind of like Damien was saying, just faster.

Right? I mean, I could have spent six hours on this. I ended up spending 16 minutes, right? It was just, I mean, an order of magnitude change. And that's what I think companies are going to need to sort of figure out. You're right. How much do they want to bring it in? Certainly it should be a tool that you try and use.

I kind of, kind of, I think that's what you're saying.

[00:18:02] Donna Herbel: Absolutely. Well, and I think the other part too is, what we've learned is these tools can really help you do great work fast, but they don't do all the work. I was having a different conversation with an instructional designer who said, you know, do I even have a future in my career?

I mean, I design learning events, and maybe AI can do it. And I think the answer is, AI can do it differently. But in my experience, looking at the outputs, they're not that great. Like, literally, if all you want is a point of work that says, here's how to use the hand towel dispenser or whatever, right, things like that, ChatGPT can do a reasonably good job at that.

If you're looking for, how do I meet a learner where they are, whether this is their first job or their tenth job, and help them understand how to build a personal connection with a family of four in the dining room of a restaurant, somebody has to help with the human side of that. So I think it does a great job for a baseline.

But I think for those that can use this information, like you said, to do a scope of work, to do work really well as a baseline, the biggest advantage that I've seen companies and even individuals benefit from is this: usually when we're doing project work, we have a window of time, right? If you're going to spend six hours just on a scope of work, and you spend five of those hours putting together the basic outline, the detail, the good-to-great, you've only got an hour left, because you're on deadline.

So if you put in that same six hours, but you use a tool to do what I call kind of the minutiae work, here's the outline, here's the format, here's the key things that have to be here, here's the stuff we all know that can be imported from a database, and you spend four or five hours. If you're going to spend that same amount of time, that same four or five hours, you're really spending it on doing great work that's client-specific, high-impact.

And it's that polishing of good to great that I think just the cadence of work and the pace of time misses. And I think AI can, can solve that. And that's back to that conversation of, do you do 10 times the work or 10 times better in the same amount of time? It's really, what do you do with that time today?

Okay. I don't know about the rest of our listeners, but I know personally, there are times when I've, you know, put work out into the world and I've known, man, if I could just have two more hours, I could format it so it looks better, easier to navigate, or there's a couple of final enhancements that would make it really great.

But I'm out of time and I've got to move on. And I think that trade-off looks different with some of these AI tools, whether it's ChatGPT, whether it's Perplexity, whether it's Claude; I've worked with a lot of those different backends for different reasons. They all have different strengths and weaknesses, but it's being able to use the right tool at the right time out of the AI tools. And I add new tools to the tool belt every day, and some of them I'm like, that's a terrible tool.

I take it back out. But having that tool belt just allows you to do your best work in the time allocated, which I think is probably the biggest benefit that I've seen organizations get initially.

[00:20:57] Justin Grammens: Yeah, yeah, yeah. And we all have limited time here, not only with projects and work, but limited time on this earth, in some ways, if you just think of it at a higher level. Like, how do we want to spend our time?

Do we want to spend our time creating yet another template for something, or, you know, spend our time on the higher-level tasks? And that's where I think it can be a positive, right? I believe we can use this to get rid of some of the mundane things that we do. Are you seeing anything around generative images at all?

Are companies talking to you about images as well?

[00:21:25] Donna Herbel: Yeah, I think, well, specifically in the L&D space, the ability to use image generation and inpainting has been really, really valuable. So both image and then video generation. The biggest benefits are when we get into training and even communication; a picture's worth a thousand words, right?

And sometimes you're like, this is a great picture, this is a great video, this is a great training, this is a great demo, except when we shot it, the tools were different. Except the colors were different. That would have been perfect, except Joe's in the background, and Joe shouldn't be in the background for where this is going.

So there's that real practical application of, how can I take what I have and use it well? So that's kind of more of the inpainting, image modification, which has been really helpful. It's also saved a lot of time when you have the video or even the stills that are good, but they're not quite right, and you're like, I do not want to haul everybody back on-site and do a reshoot.

Can we just move the eye contact? Which is helpful. I think what's been fun and unexpected for image generation is that creativity and ideation where people sit down and say, if I had to dream this thing, what would happen if I just threw all these words into an image generator and what would it come up with?

And sometimes you think, that's really fascinating. And sometimes, too, you get to see some of the way these processes put information together. I laugh a little bit; I use the example of the grilled cheese. I did some image generation, I do that as a demo when I'm training, like, how do you even generate images?

Right. And we talk about how you apply different types of prompts, right, to get different image outputs. And so I start with a simple one: let's do a grilled cheese on a plate in a park. Nine out of ten images will make a grilled cheese with lettuce on it, because somehow it recognizes that a grilled cheese is a sandwich and it should have lettuce.

Either I've never done grilled cheese sandwiches right and I've always skipped the lettuce or that's not how the world works, right? But that's, that's an example of where some of this image generation can put some ideas together. You can get something really creative and you can look at it and be like, I wouldn't have thought of it that way.

But what if I did? Right? Right. Yeah. So I see image generation mostly in creativity and some recreational stuff. You know, I like to, I like to send people in Facebook messenger, their custom birthday cake. I'll always do like a forward slash imagine if you're ever in like messenger where you can have it do some AI image generation, you know, send a birthday cake that features, you know, dolphins, seashells or whatever.

And it can take some of those ideas. I mean, that's just the fun stuff, but...
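(For readers who want to try the grilled-cheese demo Donna describes, the whole experiment really is just a prompt string handed to an image model. Below is a rough sketch using the OpenAI Python SDK; the model name, the size, and the idea of adding detail to steer away from the surprise lettuce are illustrative assumptions, not a recipe from the episode.)

```python
# Rough sketch of the prompt demo described above, using the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; model name and size are examples.
from openai import OpenAI

client = OpenAI()

# Start simple, the way the demo does...
simple_prompt = "a grilled cheese sandwich on a plate in a park"

# ...then add detail to steer the output away from surprises like lettuce.
detailed_prompt = (
    "a classic grilled cheese sandwich, just toasted bread and melted cheese, "
    "no lettuce or other toppings, on a white plate on a picnic table in a sunny park"
)

for prompt in (simple_prompt, detailed_prompt):
    result = client.images.generate(
        model="dall-e-3",  # example model; any text-to-image model illustrates the point
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(prompt, "->", result.data[0].url)
```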

[00:24:00] Justin Grammens: Yeah, there's a lot of fun stuff, you know, involved for sure. And I think it was interesting: I was listening to a podcast a couple of weeks ago, and they were talking about image generation, and it was around the idea of basically saying that it's been AI-generated, right?

So there's this really worrisome thing that's going on right now, right? Around deep fakes, right? There's a lot of stuff in the news, has been in the news, but this is going to be more and more, especially not only with the election year, but just the tools are just getting so much better. And it's really, really hard for people to discern what is a fake and what's not.

And so companies are starting to put in watermarks, right, to basically allow for that. But this person was talking about, I forget who it was, I'll have to go back and find it if I can, but they were talking about this idea of, like, sometimes you just want the original image, right? Sometimes you actually just want it all, you know, with all of its imperfections, right?

So imagine a digital SLR camera, you know, that basically took the picture and that's the way it is. And that, you know, you might want to add some AI to it. You know, you might want to change the light. You might want to improve the person's, you know, face, all this type of stuff all the way to this other side of it.

And so they made a pretty interesting point about, as long as you're open about it, like, hey, I just applied something to this and I made something different, right, then that's okay. Like, the whole point of this was, as long as you're open and transparent. And he believes that, you know, imagine it being like a slider, like a filter, and you're basically like, okay, you know, I want to add all this stuff in there.

It's great. It's all about transparency, I guess, is kind of what that thought was, which I thought was an interesting scenario.

[00:25:28] Donna Herbel: Yeah, I think there's two parts when it comes to this level of customization that's now available with AI. And I'll kind of give a shout-out; this is a super oldie but goodie.

It's on my list of favorites from the 99% Invisible podcast, but it's super old, so I'm not, you know, pitching another podcast. But if you have the chance to check it out, they did some great work on the ELIZA effect. And in that example, so the ELIZA effect, for those that aren't familiar, when I train on just AI and computer systems in general, I think we have to talk about the ELIZA effect. This was really when the gentleman Weizenbaum, I think it was, really solved one of the AI issues early on, which was that you had to have this ton of data in order for these systems to know what to do and work.

And so he kind of said, well, what if you use the Socratic method? What if in interactions with individuals, you just pulled out keywords and fed them back?
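(A tiny illustration of the keyword-and-reflection idea Donna is describing: pull a keyword out of what the person typed, flip the pronouns, and hand it back as a question. This is a minimal sketch in Python; the keyword rules and reflection table are invented for the example and are far simpler than Weizenbaum's actual ELIZA script.)

```python
# Minimal sketch of keyword-and-reflection chat, in the spirit of ELIZA.
# The rules below are invented for illustration, not Weizenbaum's original script.
import random

# Map a few keywords to response templates; {0} is filled with the reflected fragment.
RULES = {
    "i feel": ["Why do you feel {0}?", "How long have you felt {0}?"],
    "i want": ["What would it mean to you to get {0}?"],
    "my": ["Tell me more about your {0}."],
}

# Simple pronoun swaps so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}


def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!? ")
    for keyword, templates in RULES.items():
        if keyword in text:
            # Echo back whatever followed the keyword, with pronouns flipped.
            fragment = text.split(keyword, 1)[1].strip()
            return random.choice(templates).format(reflect(fragment))
    # No keyword matched: keep asking questions, Socratic-style.
    return "Please, go on. What else comes to mind?"


if __name__ == "__main__":
    print(respond("I feel overwhelmed by my inbox"))
    # Prints something like: "Why do you feel overwhelmed by your inbox?"
```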

[00:26:17] Justin Grammens: Yes. Just keep asking the questions.

[00:26:19] Donna Herbel: Which we know, and it actually evolved in the 70s into kind of some self-help therapy stuff. There were some fall-down parts with that.

Now we're kind of back into that cycle. But what became known as the ELIZA effect, because ELIZA was the chatbot, was this: what he noticed very quickly was that when people would engage with the chatbot, they found it trustworthy faster than they would have in engagements with a human being. And so they gave a lot more information and they trusted the feedback.

They started to, you know, kind of put human characteristics on the computer: the program likes me, it's interested in me, right? All of those things. And of course, it's ones and zeros in the background, don't kid yourself. And so, over time, he actually went from being the person who really kind of solved it to becoming very public about how humanity is not ready for this and we should not do it, because people don't know how to separate the human experience from the computer experience.

And we've seen that, right, in social media; that's the problem with deep fakes. That's the problem: you see a picture, you see an image, well, of course it should be true, except now, in the world of AI, it's not. And there are some very weighty ethical questions. What happens in a world where you can literally erase people from being on a public street?

Like, what happens in a world where you don't see the reality that is? And what happens where everything you see behind a computer screen isn't necessarily valid or trustworthy. What does that do to communication? What does that do to how we make decisions? There's a bunch of question marks. I'm not a fear monger, but I think we have to be smart as a people because it's here.

We also have to be mindful to challenge our own assumptions and not suspend critical thinking. And one of the examples I love. And one of the reasons it stuck with me, and this is probably seven, eight, nine years ago now, it had, it had taken a body of data and put it into a machine and just fed it some assumptions and said, Hey, you know, let's assume that, heard of unicorns has been discovered in a previously, you know, undiscovered valley by some team, right?

But it generated a very convincing, what seemed to be a news article, right? Like, it named the valley, it named the professor, it gave them the credentials. It talked about why these unicorns were never found before. And if you don't start with, there are no unicorns, you could have bought it, right?

And so that's, I think, one of the things, too, that we talk about: what are the risks and rewards, whether it's with image generation, image adjustments, or even just what we read daily, like the ability to take misinformation and publish it in 15 places and then redirect it to itself so it looks legit. That's a nefarious problem.

[00:28:56] Justin Grammens: Yeah, for sure. For sure. Yeah. I mean, even more than in the past, we're going to be looking for trusted sources for this stuff. And kind of what you're saying is, don't believe your eyes and your ears anymore, or even anything that's written.

[00:29:11] Donna Herbel: Meet the people for coffee.

I think that's the other thing when we talk about what's the future coming. I think the power of the in-person meeting, of, like, you are who you say you are, you look like what you look like, I think that becomes more valuable, because the online avatars have just gotten so incredibly good, you know. So, yeah, I kind of talk about letting technology do what tech does best so people can do what people do best.

And I think that separation comes back to critical thinking, building relationships, dreaming a little bit, solving real problems, and being present. And I think that's going to be where the future power of this will be for individuals. It's going to be their time, it's going to be the value of their attention. But there is kind of a retraining in that the computers are not the smartest; they might have the most information, they might be the most adept at dealing with the data, but don't confuse that with being the smartest person in the room. I maintain that's a human job. Yeah.

[00:30:06] Justin Grammens: Yeah. Very, very good. No, that's what humans are really good at: critical thinking and human interaction and emotions and all that type of stuff. I think you're right. That's still uniquely human and likely will be uniquely human, you know, forever. As part of this book that I was just listening to, what she said was interesting.

It was like, you know, so humans are carbon-based, but all these computers are silicon-based. And so why are we actually trying to essentially recreate ourselves in a silicon-based environment? She almost kind of questioned the reason as to, like, why are we even doing this, in some ways. And I was, like, running along thinking about that.

Yeah, we're basically trying to take all the things that we're doing that are human, and in some way we're really trying to push it to the edge, right? We're trying to make artificial intelligence, you know, that is general artificial intelligence, right? And we're not satisfied with just kind of stopping with where we're at, which I think is fascinating.

[00:30:56] Donna Herbel: Well, and I think the other piece is this blend between the carbon-based and the silicon-based, because there is some really interesting exploratory work in, what do you do when you can think a thing and it translates into energy and sends out communications? I mean, it seems, you know, for those of us who are kind of like old Trekkies or Jetsons folk, or really enjoyed sci-fi, it feels very much like the Borg is now, you know, the future is now kind of sensation.

And so I think that's an interesting question of, you know, okay, it's cool, but so what? Like, what does that look like at the end? And man, I've had a couple of times where I just had to stop, like, stop myself, because you start to wonder, like, so what is uniquely human, and what do work and economies look like in the future, and how should they look?

And do we know? Like, we might have the opportunity to really redefine our best future with a different set of rules. But we should be smart about it, be super thoughtful, and be well informed about the decisions we're making.

[00:32:04] Justin Grammens: For sure. For sure. Well, one of the things I do like to ask people that have been on the show or that are on the show is, like, you know, obviously you're just an experimenter.

It sounds like, you know, you just love to get in and start doing stuff. Tech hobbyist, I guess that's the word I was looking for. How would you recommend people, like, is there anything in particular that you recommend people start to do? Are there, I don't know, books, classes, conferences? How would you, yeah, frame that up?

[00:32:26] Donna Herbel: Tons. I think, number one, if you're listening to this podcast because you're interested, if you're kind of a tech hobbyist like me, and that's a safe place to be, where you acknowledge you just don't know everything: number one, create the space to learn. And I always recommend, find 25 or 30 minutes in your calendar or in your week and just decide to use a tool that's available.

AI tools are showing up in any program I mentioned, you know; it's baked into Messenger, it's baked into your Google Docs, it's baked into your Microsoft, it's baked into your word search. Like, just try to use it. I think number two is, periodically find yourself and surround yourself with people who know what you don't know, like the Applied AI Conference in Minneapolis-St. Paul. I love it, but there's, you know, different conferences or events where you can literally just walk into a room. Sometimes we think, well, I don't know if that's for me, I don't know if I belong, I don't know if I know enough to get anything out of it. Go hang out with people who are smarter than you, and go hang out with people who are doing stuff you're not doing.

And just be super curious. I think that's the second thing that's really valuable. And I think the third thing that's really important is, these tools are most powerful and most interesting when you have a problem to solve. So when you get into a day and you think, man, I wish... then finish that sentence and literally go into, like, ChatGPT or even a, you know, a search bar and just say, hey, how can AI help me solve this problem, and then find something to take action on?

Be curious.

[00:33:52] Justin Grammens: Yeah, yeah, totally. Yeah, I was thinking about, you know, there's this new search engine called Perplexity. Not sure if you've played around with that, but, you know, they raised a bunch of money and everything like that, and they're trying to redefine, you know, how we search for stuff. Which, I do think the old Google search way of getting a bunch of resources and links is definitely one track, you know; certain people are going to use that. But this Perplexity idea was like, hmm, why don't I start searching for stuff in Perplexity? And what I realized was just that we're creating more of a dialogue.

As I got done with stuff, it actually suggests, well, here are some other things you might want to ask about. And so I found myself on there for, like, 20 minutes. You said it: I love this idea of carving out time, like, I want to solve this thing, and then you just kind of experiment and play around with it. And I found myself, I think I was, I'm actually looking to train for my next marathon.

And so I need ideas around, like, that. And I can certainly search for that on Google and find some blog that somebody wrote, but this was a really nice, compact way. It kind of prompted me with other things that I might want to think about and ask. So, just carving out time to try a new tool. I mean, I'll just say Perplexity was interesting.

I don't know if, again, not sure if it's the best one I'm going to go to all the time for everything, but it was a fun experiment.

[00:34:56] Donna Herbel: And I think, to your point, Justin, you know, the assumption that what is today is as good as it gets is just not true. So Perplexity, and Perplexity has evolved over time.

I love Perplexity. Claude, you know, has had some great work and some great impacts as well. I mean, there become more and more ways to use it. What I love about Perplexity specifically is it has rich responses, but I also like that when I ask Perplexity, and this isn't related to the search bar, but in terms of information gathering, Perplexity was one of the first that actually showed me where it got the information.

True. Right. There's a lot who are, you know, there's a lot of chatbots where they're like, Oh, here you go. And then I say, well, cite the sources. And they're like, well, that is out of scope. Well, that's not helpful. Did you just dream this up or whose name is on the byline? Right. But I think as listeners kind of dabble with different solutions, you just become more aware of what's possible.

And you also become a little bit more savvy about what works and what doesn't work in different situations. But I would also mention, you know, when you talk about training for a marathon, I've been fascinated with the GPT Store that launched with OpenAI. And so, you know, you're also seeing some of those just, like, chat helpers of expertise.

Like I could see, Justin, your personal AI, uh, GPT running coach would be super interesting in that environment.

[00:36:14] Justin Grammens: Well, so this is something I've actually been thinking about for the past week or so, is actually just blowing that out even further. So I haven't done much research on this, but I know people have been working on this.

They've been working on it for years or decades, probably. But just this idea, so I write in a journal every morning, right? I wouldn't say I write every, every morning, but probably 95 plus percent of the time I'm actually writing in this thing. And a lot of it is just sort of random thoughts of things, things that I want to do, things that I have done.

It's just a nice place for me to sort of dump content. Yeah. But I was like, wow, what if, what if, you know, all of this stuff got put into a large language model and then, you know, boy, I'm just sort of thinking about legacy, right? A hundred years from now, great, great, great grandchildren or whatever are able to then ask me questions about how my marathon was, right?

And how did I train for it? And here's, you know, information. So, boy, I definitely think that there is something there around just being able to train the large language model around your entire life, really. And it's going to happen, it's definitely going to happen. But being able to train these things is becoming easier and easier, isn't it?

Like, you know, the thing called Workbook, boy, we just talked about it at our last meetup, that Google has. It's super easy for you to just point it at a Google Drive with a bunch of documents in it, and it will go ahead and pull it in. So, it's fascinating.
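(The "point a model at your own documents" idea Justin sketches here is usually implemented as retrieval: embed each journal entry, embed the question, and hand the closest entries to a chat model as context. Below is a hedged sketch of that pattern using the OpenAI Python SDK; the journal/ folder of text files and the model names are assumptions for illustration, not how NotebookLM or any specific product works internally.)

```python
# Illustrative retrieval sketch: answer questions from a folder of journal entries.
# Assumes OPENAI_API_KEY is set; the journal/ path and model names are examples only.
from pathlib import Path
from openai import OpenAI

client = OpenAI()


def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in resp.data]


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm


# Load journal entries, e.g. one plain-text file per day.
entries = [p.read_text() for p in sorted(Path("journal").glob("*.txt"))]
entry_vectors = embed(entries)

question = "How did I train for my last marathon?"
q_vector = embed([question])[0]

# Keep the three most similar entries as context for the chat model.
ranked = sorted(zip(entries, entry_vectors), key=lambda ev: cosine(q_vector, ev[1]), reverse=True)
context = "\n\n".join(text for text, _ in ranked[:3])

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": "Answer using only the journal excerpts provided."},
        {"role": "user", "content": f"Journal excerpts:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```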

[00:37:29] Donna Herbel: I love that. I'm going to now, well, now, of course, you know what I'm doing directly after our conversation is learning that one, because I'm not as familiar with Workbook, but I love it.

[00:37:37] Justin Grammens: Yeah, I was just going to try and look it up. Boy, I don't know why I'm spacing out. I will definitely include it in the links here, but it's just a tool that Google put out, and again, as long as you have a Google account, you can start playing around with that, start using it.

So the other thing I was going to say is, you know, how do people get ahold of you? That's probably a good next question.

[00:37:58] Donna Herbel: Okay, sure. I'm, you know, a pretty open connector, so you can find me on LinkedIn. That's probably easiest. You can also find me through bluephoenixlearning.com, or my speaker website is donaherbel.com. But, uh, reach out. I'm, you know, available. My contact's open. LinkedIn's probably best and easiest, or directly through the sites, or you'll see me around the horn at, uh, podcasts, conferences, and events, just encouraging people to get curious and imagine the future and dabble in some tools and processes.

[00:38:28] Justin Grammens: That's cool. Very, very good. Is there anything else that maybe you wanted to talk about that we didn't really discuss here today?

[00:38:33] Donna Herbel: I think, Justin, probably the biggest piece when it comes to applied AI, and you touched on it briefly, is where does this data go over time and how do we use it? I think there's something around organizational knowledge, right?

The coming silver wave that a lot of legacy organizations have: you've got people, you've got experts who are retiring and leaving your workforce, and there's a lot of concern around, man, how do we grow up this next set of legacy leaders? Well, you have a unique ability to pull that information, those avatars, those coaches, out of people's heads before they leave.

You know, is there a model where, if a person was like, give me all the Donna Herbel good stuff so we can just source it on demand, you just pay a subscription fee for that, and my little magic GPT and I can be your hospitality helper, because I've got that background? I don't know. Is there a, uh, Lab651, like, hey, this is our expertise and you can subscribe to this source of knowledge?

Like, I think that's an interesting question. I would like us to figure that out, because I think that part is fascinating.

[00:39:41] Justin Grammens: Yeah, for sure. And we haven't been able to really do it much, you know. And again, it's becoming easier and easier for us to, A, get this information, and then, B, be able to build these systems. And I was wrong, it wasn't Workbook.

It's Notebook. So, Google NotebookLM, yeah, Google NotebookLM, and you'll find it, and I'll be sure to put a link to it in our liner notes here. It makes it super easy for people. Again, it's this idea of kind of building GPTs based off of, you know, notes. But I think what's interesting is, you know, it's kind of integrated into the Google ecosystem, which is where all this is going, right?

So we're, we're a G Suite company. I've been using G Suite for forever since even before it even launched. And so, you know, Google can just kind of just turn this thing on, right? And so it'll allow you to be able to kind of crawl over all your documents. Now there's the privacy question, and am I okay with Google having all this stuff?

And believe me, they already do have it all anyways, but again, it's a fun sort of tool. So yeah. Take a look at NotebookLM. Well, that's great. We've talked around a lot of different things, and there's so many new things that come out. I mean, that's the beauty of this field, I think, is literally every day I learn about something new.

There's some sort of new tool. And sometimes I remember the name, and sometimes I get the name wrong, but there's always, there's always new stuff that people are doing. And most of these things too, I would say, they kind of have a freemium model, right? You can generate images, and they're watermarked or whatever.

And if you think it's worthwhile, then start paying the subscription for it. You know, you can use ChatGPT, and it might time out and yada, yada, yada, but you get a sense for what it is. If you think it's useful, then you can pay for the full version. There's all sorts of easy ways for people to sign up and start playing around with it.

And I mean, it's just, every day I'm like, I wonder if there's an AI tool for X. And I search for it and I find it, and there it is. I love it. Yeah, exactly. Well, great. Great, Donna, I appreciate the time today. It's been a lot of fun, and, you know, good luck with all the work that you're doing. You know, we actually didn't get a chance to talk too much about Savii, but I should have you back on to talk about health insurance specifically.

We could talk another hour on that, I'm sure.

[00:41:34] Donna Herbel: I would love to do it. Yeah. Let's talk about how AI helps us solve our most complex challenges and problems easily for the end user. That's a follow-up. Justin, thank you so much. This has been a gift.

[00:41:46] AI Speaker: You've listened to another episode of the Conversations on Applied AI podcast.

We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at AppliedAI.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at AppliedAI.mn if you are interested in participating in a future episode.

Thank you for listening.



