The Buzzworthy Marketing Show

Deciphering Reality: Navigating AI-Driven Content with Seekr's Ethical App

June 04, 2024 | Season 7, Episode 10 | Michael Buzinski

Can you trust what you see online? With deepfakes and AI-generated content on the rise, distinguishing reality from fiction has never been more crucial. Join us on this episode of the Buzzworthy Marketing Show as we chat with Pat from Seekr about a jaw-dropping encounter with a fake Brad Pitt ad and the broader implications for trust in AI. Pat drives home the point that AI should inform, not manipulate, advocating for content that prioritizes verified and credible information over mere popularity. Imagine a world where AI-generated content comes with a rating system, much like restaurants or products. How would that change the landscape of digital trust?

We also delve into the ethical integration of AI in regulated sectors like healthcare, where compliance is non-negotiable. Pat outlines how AI can be trained to adhere to stringent legal and ethical standards, making content creation both efficient and compliant. From reducing errors and biases to driving responsible innovation, the conversation zeroes in on the necessity of aligning AI outputs with set principles. Finally, we shift gears to focus on small service-based businesses, highlighting practical strategies to leverage AI for optimizing operations, from HR to legal reviews. By identifying and addressing bottlenecks, businesses can dramatically boost efficiency and ROI. Don't miss this enlightening discussion on the future of ethical AI and its transformative potential for businesses.

Follow Pat LaCroix and Seekr Technologies:
www.seekr.com
www.linkedin.com/in/pat-lacroix-1883506

Follow @urbuzzworthy on LinkedIn | Instagram | Facebook | Twitter. Get your copy of Buzz's best-selling book, The Rule of 26, at www.ruleof26.com.


Transcript


Speaker 1:

Is it just me, or is it getting really hard to decipher reality from deepfakes and AI-generated content? I suffer from tinnitus, and just the other day I saw this ad with Brad Pitt talking about some miracle nasal thing that killed the ringing in your ears. I'm like, holy cow, this is so awesome. And before I could push the buy button, I was like, wait a minute, and went and did some research. I dug into Reddit and all these other things, and I had to spend 15 minutes to find out it was all a hoax.

Speaker 1:

Is AI creating a space where we're not going to be able to trust our own eyes, or are there ways for us to leverage AI to keep us in the know of reality? That's what we're going to dive into today, and I've got Pat from Seekr.com to talk about this. Creating an ethical stream of information using AI is all they do. Let's join the conversation. Welcome to the Buzzworthy Marketing Show, Pat. Welcome to the show.

Speaker 2:

Great to be here. Thanks, Buzz.

Speaker 1:

So where are you calling in from?

Speaker 2:

I am in the Boston area, enjoying a nice, wonderful spring day.

Speaker 1:

Yes, there you go. We went to Boston last year, I think it was last summer, on an East Coast road trip. We flew in from Illinois, and it was nice because it was cooler there than where we came from for that week we were up on the East Coast.

Speaker 1:

But yeah, now I'm in Virginia. Anywho, today I wanted to bring you on the show to talk about AI and trustworthiness. This whole season of the Buzzworthy Marketing Show has been dedicated to AI, and we've talked about how to use it, how to leverage it, what to avoid, bad tools versus good tools, what's worth paying for and what's not, all those things. One of the things that really interested me in our conversation is your point of view on the validity of what we're seeing in AI. We've got deepfakes. We've got AI that can sell you things without you even knowing it. AI can now cold call people. So what is your take on our responsibility as marketers when it comes to using AI, and how can we use AI to be better, more ethical marketers?

Speaker 2:

Yeah, I think it's a great question. It really starts from the mission we have here at Seekr, which is, first and foremost, to inform and not influence. As consumers and as professionals, we are all overwhelmed with information, and AI is only proliferating that. I think we all see the benefits and the possibilities, the power and potential of AI, but it definitely needs to be used intelligently, responsibly and ethically. What a lot of offerings out there have right now is basically data that is sourced and trained on what is most popular or most available. How I think about AI is: let's focus more on what's most verified, most credible, most reliable. With a lot of the large AI offerings out there right now, they're sourcing the most popular data. You have no idea where they're getting it from, how they verified it, or what the models that are analyzing it and generating content are being trained and developed on.

Speaker 2:

So really it's important to have principles and standards to guide where you get your sources of information, how decisions are being made and, ultimately, what answers and insights are being generated from the technology.

Speaker 1:

Yeah, I think a lot of people are already skeptical, and I think it's just a matter of time before we have the blue check of AI output, a verified mark: hey, this has been verified, and here are all the sources it used. AI can be a huge leveraging tool in research; it can chew on data faster than we can even fathom. We can talk about the numbers, but we can't really comprehend how much data can be consumed, analyzed and spat back out as responses. It happens right in front of our eyes, and the power and the magnitude are just, you know, infinite.

Speaker 2:

That said, you mentioned that idea of a check mark to verify, and that's very much how I think about AI. A lot of what we're doing at Seekr is built on the idea that our largest sources of information and news should be rated. Rating systems are pervasive in all walks of life: how we decide where to eat, where to shop, where to travel, what to buy. But our largest sources of information tend not to have a rating system. And I believe, and we believe, that having ratings to guide and evaluate the content we're being delivered enables much greater critical thinking and much greater transparency, and with transparency comes much greater trust in the information we consume, the decisions we make, and so on.

Speaker 1:

So how can we then rely on AI to police AI, as far as authenticity goes? How does that work?

Speaker 2:

Yeah, I think you know AI can be trained and deployed in a number of ways.

Speaker 2:

So I think, if the purpose and the intention is to deliver trustworthy outcomes, reliable, credible information and insights, then tuning and developing AI to that end is not that difficult.

Speaker 2:

What we do at Seekr is develop AI that we say is principally aligned. What that means is understanding specific standards and guidelines for different use cases, industries and verticals, and then training the AI to evaluate, score and rate content, and then generate content, against those principles and standards. I know we'll probably talk a little more about this, but we've already developed platforms for how we rate and evaluate the news, again trying to provide customers and partners the most reliable, credible insights on what is trustworthy news. In that case, we trained the artificial intelligence to evaluate against journalistic standards and principles. We believe that, when it comes to news, using trusted, universal, timeless journalistic standards is the best way to identify what is objective, credible and reliable. Does the headline match the body? Is there clickbait? Is there a personal attack? Is there political lean in the content? Scoring and rating to that end ultimately provides a much more trustworthy experience and a much more reliable, credible feed of content.
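To make that concrete, here is a minimal sketch of what scoring an article against journalistic signals like these could look like. The signal names, weights and 0-100 scale are invented for illustration; Seekr's actual models and criteria are not public.

```python
# Hypothetical rating of an article against journalistic signals
# (headline/body match, clickbait, personal attacks, political lean).
# Weights and scale are invented, not Seekr's real methodology.

from dataclasses import dataclass

@dataclass
class SignalScore:
    name: str
    score: float       # 0.0 (worst) to 1.0 (best)
    explanation: str   # why the score was assigned

# Hypothetical weight per signal; a real system would learn these.
WEIGHTS = {
    "headline_matches_body": 0.30,
    "no_clickbait": 0.25,
    "no_personal_attacks": 0.25,
    "low_political_lean": 0.20,
}

def rate_article(signals: list[SignalScore]) -> dict:
    """Combine per-signal scores into one explainable 0-100 rating."""
    total = sum(WEIGHTS[s.name] * s.score for s in signals)
    return {
        "rating": round(total * 100),
        "explanations": {s.name: s.explanation for s in signals},
    }

# Example input: scores an upstream model might have produced.
print(rate_article([
    SignalScore("headline_matches_body", 0.9, "Headline reflects the reporting."),
    SignalScore("no_clickbait", 0.7, "Mild curiosity-gap phrasing."),
    SignalScore("no_personal_attacks", 1.0, "No ad hominem language found."),
    SignalScore("low_political_lean", 0.6, "Sourcing skews one-sided."),
]))
```

The point of returning explanations alongside the number is the transparency Pat keeps coming back to: a rating you can interrogate rather than a verdict you have to take on faith.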

Speaker 1:

So we're talking, was that 2009, 2010, when they released the propaganda mandate? Or, I don't even know how you say it, they basically legalized propaganda, right? Where news sources could literally run opinion, and it's being held up as opinion, right?

Speaker 2:

So have a rating system to say: this is opinion, this is breaking news, this is biased, this is clickbait. I think it's not about censorship or eliminating that content. It's about providing a labeling system, in a transparent and explainable way, so that people understand where there is bias, or where there could be lean, in what they're consuming and reading.
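In code, that label-don't-censor idea might look like a small taxonomy attached to each piece rather than a filter that removes it. This sketch is hypothetical; the label names are invented for illustration.

```python
# Hypothetical content labels in the spirit of "label, don't censor".

from enum import Enum

class ContentLabel(Enum):
    STRAIGHT_NEWS = "straight news"
    BREAKING_NEWS = "breaking news"    # fast-moving, may be unverified
    OPINION = "opinion"                # argument, not reporting
    CLICKBAIT = "clickbait"            # curiosity-gap or exaggerated framing
    POLITICAL_LEAN = "political lean"  # one-sided framing detected

def badge(headline: str, labels: list[ContentLabel]) -> str:
    """Render labels next to a headline instead of hiding the piece."""
    tags = ", ".join(label.value for label in labels)
    return f"{headline}  [{tags}]"

print(badge("Senator's plan will ruin everything, critics say",
            [ContentLabel.OPINION, ContentLabel.CLICKBAIT]))
```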

Speaker 1:

Yeah, my wife uses something that's been around for a while, and now I'm trying to remember what it's called, but it rates news specifically: whether it's left-leaning or right-leaning, whether they have bias, and stuff like that. They've been doing that manually for a while, so I'm sure this kind of AI is helping them get better.

Speaker 2:

To do it at scale, you do need artificial intelligence. With the proliferation and sheer volume of content that comes out every single second of every single day, you need the AI to read and rate everything at scale. I think having strong guidelines and human intelligence to guide the artificial intelligence, human intelligence that is trained and principled against set standards, is key. But to ultimately deliver it at scale, you have to have the artificial intelligence.

Speaker 1:

Yeah, and I totally agree with you on scalability, because this one, I can't remember what it's called, but it basically only rated the networks and sources, the newspapers, the networks, the radio stations. So it's like: overall, their views lean this way, their opinion pieces are this way, their editorials are this way, dot dot dot. And you're saying it can be article by article, news piece by news piece, advertisement by advertisement?

Speaker 2:

Absolutely, all of it. I think everything needs to be evaluated on its own merit. Not everything from one publisher is good or bad, or left or right. You have to evaluate every piece on its own merit, and that's how you ultimately provide a much more trustworthy ecosystem. You also provide a much more informed base of points of view. That way, people are applying critical thinking skills; they're more open to consuming and understanding different perspectives, different points of view. A lot of how we've gotten to a very polarized, fragmented, misinformed echo chamber is because we've decided that anything from this source is always good, anything from that source is always bad, and I will only consume something from this, not that.

Speaker 1:

Right, yeah, because they're basically branding their ideologies on the front end, and therefore, whatever they say follows from that. I have in-laws that only listen to or watch one type of news channel, and every time they post something on social media, I'm like, yeah, they've been watching that news too long.

Speaker 1:

So, when it comes to advertising, do you think we're going to get into a space where, I mean, how is it? Because advertising is sensationalized, I can't even say that word, sensationalized, there you go. It's opinions, right? We're the best, we're number one. And really, that's the thing about stats, right? There are lies, damned lies, and statistics. You can make statistics say whatever you want, and marketers have been using that for centuries to put forward the message that the brand they're advertising is the best. So how can AI be utilized here? I mean, you can sit there and say, yeah, there's a lot of fluff there, but it's all fluff, so how do you read that?

Speaker 2:

I know we have a time limit on this podcast, so I guess we'll have to save some of that for a follow-up. But just by way of background, the genesis of my career was in the advertising world. I spent 12 years at large advertising agencies and then the last 10 or so years leading a host of marketing functions at notable major brands like Bose and CVS Health. So, to answer the question you raised, I would say a few things. One, there is the question of where we run our ads, where we invest our media buys, and that's very much where what we're developing with artificial intelligence, really understanding credibility and reliability against set standards at scale, can come in.

Speaker 2:

So, working with our advertising partners, be it agencies, brands or ad tech companies, we hone in on precise dials and signals so that, measured against set advertiser standards, you know this is the most suitable, reliable content you can grow your reach against, responsibly and confidently, in a way that aligns with your values and your objectives. So there's the where to invest and where to run the ads. And then there's the messaging of the ads itself. A lot of what we can do, and something we do at Seekr that I'm happy to talk more about, is for anybody who's training and deploying AI to identify the best messaging and the best copy: applying it against principles and standards so that it is compliant and adherent. If you are a healthcare company looking to advertise, you need to be very compliant with healthcare standards and principles.

Speaker 2:

Certainly for me, having worked at a healthcare company in CVS Health, and at big agencies and big brands, nothing goes out the door without legal sign-off. So even though, yes, there is a lot of creativity, and you might call it fluff, those messages still have to cross the legal review board, so to speak. You can work with AI to analyze and save a lot of time, to make sure that these messages are compliant and adhere to the following legal standards. And that gives you a tighter box within which to be as creative and persuasive as you can with your story and your messages.
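As a rough picture of that pre-screening step, here is a minimal sketch of a rule-based filter that narrows candidate copy to the "tighter box" before human legal review. The forbidden phrases and disclaimer below are invented examples, not real healthcare rules, and a production system would layer trained models on top of checks like these.

```python
# Hypothetical compliance pre-screen for ad copy. Rules are invented
# examples; real legal/healthcare standards would come from counsel.

import re

FORBIDDEN_CLAIMS = [       # patterns legal has ruled out (examples)
    r"\bcure[sd]?\b",
    r"\bguaranteed?\b",
    r"\bno side effects\b",
]
REQUIRED_DISCLAIMER = "Individual results may vary."

def pre_screen(message: str) -> tuple[bool, list[str]]:
    """Return (passes, reasons). A pass still goes to legal review."""
    reasons = []
    for pattern in FORBIDDEN_CLAIMS:
        if re.search(pattern, message, flags=re.IGNORECASE):
            reasons.append(f"Contains forbidden claim: {pattern}")
    if REQUIRED_DISCLAIMER not in message:
        reasons.append("Missing required disclaimer.")
    return (not reasons, reasons)

candidates = [
    "Our new therapy is guaranteed to cure ringing ears!",
    "Many patients report quieter days. Individual results may vary.",
]
for copy in candidates:
    ok, why = pre_screen(copy)
    print("PASS" if ok else "FLAG", "|", copy, why)
```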

Speaker 1:

So I'm hearing something really interesting, and I think it's important to point out: we can train even something like ChatGPT or Claude on our ethics and our guidelines for what it can and can't say.

Speaker 1:

And now, with the advent of being able to create your own GPTs for free, you can basically customize AI.

Speaker 1:

I mean, I just made a GPT the other day, and I think it took us maybe 20 minutes to basically train it on what we wanted it to do and make sure that it did it every time, and that was with testing. It used to be that programming that kind of thing would take hundreds of hours. Now we have these tools where we can literally do what Pat just said: put your code of ethics in there, put in all of your guidelines, your legal guidelines. You can actually tell it what it can't say, what words are forbidden from appearing in any copy, which matters especially if you're doing long-form copy like email marketing, a magazine advertorial, or anything like that. So I think it's really important to point out that you can do this type of thing on your own. You just need to preempt your bot to act the way you want it to.
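For anyone wiring this up through an API rather than the GPT builder, here is a minimal sketch of what "preempting the bot" can look like: the ethics and forbidden words live in a system prompt sent with every request. This assumes the OpenAI Python SDK (pip install openai, with an API key in the OPENAI_API_KEY environment variable); the guideline text itself is an invented example.

```python
# Minimal sketch: encode brand ethics/guidelines as a system prompt.
# Assumes the OpenAI Python SDK; guideline text is an invented example.

from openai import OpenAI

BRAND_GUIDELINES = """
You write marketing copy for a consulting firm. Follow these rules:
1. Never promise specific results or ROI figures.
2. Never use the words: "guaranteed", "risk-free", "cure".
3. Cite a source for any statistic you mention.
4. If a request conflicts with these rules, refuse and explain why.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": BRAND_GUIDELINES},
        {"role": "user", "content": "Draft a short email promoting our audit service."},
    ],
)
print(response.choices[0].message.content)
```

The same instructions pasted into a custom GPT's configuration behave similarly; as Pat notes next, this steers a pre-trained model rather than retraining it.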

Speaker 1:

And I think you alluded to that earlier: there's human intelligence and there's artificial intelligence, and artificial intelligence only knows what the humans tell it.

Speaker 2:

Right, and that's where principally aligned models and technology really come in. I won't speak for some of the other firms you mentioned; I will just say that the PT stands for pre-trained, so it's a little difficult to say you can do a lot of training, because a lot of it is already pre-trained on your behalf.

Speaker 2:

But yes, it's about informing the technology so it is applied against principles and standards, and then training it so that everything it is finding, scoring, rating and generating is compliant and adherent with those principles and standards. Whether it's legal review against certain laws and requirements, healthcare standards, financial regulation, insurance standards, the list goes on and on.

Speaker 1:

Oh yeah, consultants have to worry about that, because you can over-promise, or promise the wrong types of things, if you let AI run and don't check it. I mean, you've got to check it. It even says right in there: this is AI-generated information, you should check it. Because there is a thing called, oh, I just lost the word, hallucination. Right, it just kind of makes stuff up.

Speaker 2:

And we like to talk about how it can't be made-up stuff. We like to talk about how we separate the reality from the hallucination, and solve for error and bias, by doing all that work on the front end to make sure everything is aligned against set principles and standards, and doing so in a very transparent and explainable way. So the answers you're getting, you can trust they're verified, there's integrity to them, and they're explainable: here are the reasons, here are the standards they apply to.

Speaker 1:

Nice. And so where do you see this type of technology going, say, in the next couple of years? What's the vision for companies like Seekr, who are really focused on the ethical application of what AI can do for humans?

Speaker 2:

Yeah, I think it's really about how we fuel innovation and unlock productivity, but do it in a way where being responsible and ethical is table stakes.

Speaker 2:

When applied the right way, AI saves so much time and resource on a lot of the tasks and activities that we all do individually in our day-to-day, and certainly organizationally across the enterprise. So how do we automate decision-making, insight and information gathering, and knowledge bases, but do so in a very principally aligned way? Yes, you're saving a lot of time, but you're not taking shortcuts to get the base of knowledge you need to make smart, intelligent decisions. And then what do you do with that time? Ideally, you free yourself up to be more invested and engaged in developing strategies, thoughts, perspectives and recommendations. That's where I see AI done well and applied in the best way: saving and optimizing time and resources so they can be focused on much higher-value strategic output and return on investment.

Speaker 1:

So for a small company, maybe a service-based business, maybe they're in law or health, or maybe they just carry liability as a consulting or management firm, something like that, where do you suggest they start with this type of approach, this ethical approach to their AI?

Speaker 2:

Well, they should certainly give us a call at Seekr. That's S-E-E-K-R.

Speaker 1:

No.

Speaker 2:

I mean, I think there's a whole host of things you can do. You can use AI across any number of needs and opportunities, whether it's HR operations, invoice processing, claims processing, legal review, paralegal-type work; the list is endless. Really understanding where your bottlenecks are, where you need to significantly accelerate time to market, where you spend your time, and where your most critical decision points are, that's where you'd start. Be really clear on where in your process, where in your flow, you need to speed up and get to market quicker, and then on how you use technology like artificial intelligence to fuel that. Being really specific on the need and the objective is where you need to start.

Chapter Markers

AI and Trustworthiness in Marketing
Ethical AI Principles and Compliance
Ethical AI Implementation Strategies for Businesses