Tech Travels
EP14: AI Cognitive Personas with Dr. Allen Badeau
May 10, 2024 · Steve Woodard · Season 1 Episode 14


Prepare to be enthralled as we welcome back an esteemed guest of the show. Dr. Allen Badeau, a trailblazer in the realm of artificial intelligence, takes us on a deep exploration of AI cognitive personas. His knowledge casts light on the fascinating process of training AI to reflect humanlike behaviors and communication styles. It's a discussion that transcends the conventional, revealing how psychological models can revolutionize our interactions with AI, making every digital conversation feel remarkably genuine and lessening the frustration that often accompanies our current experiences. Whether you're in software engineering or customer service, the insights Dr. Badeau imparts are set to reshape the future of user engagement.

In this podcast we discuss how Dr. Allen Badeau is working to build AI models that incorporate the Big Five Personality Traits. The Big Five, also known as OCEAN or CANOE, are a psychological model that describes five broad dimensions of personality: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.
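
To make the model concrete, here is a minimal sketch (not drawn from the episode) of how a Big Five profile could be represented in code; the 0-1 scale and field names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class BigFiveProfile:
    """Scores on the five OCEAN dimensions; the 0.0-1.0 scale is an assumption."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

    def dominant_trait(self) -> str:
        """Return the dimension with the highest score."""
        scores = vars(self)
        return max(scores, key=scores.get)

# A hypothetical persona profile
persona = BigFiveProfile(openness=0.8, conscientiousness=0.6,
                         extraversion=0.4, agreeableness=0.7, neuroticism=0.2)
print(persona.dominant_trait())  # -> "openness"
```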

Confronting the elephant in the room, we grapple with the issue of bias and ethics in artificial intelligence. Dr. Badeau underscores how critical psychological understanding is to AI training, a step that cannot be overlooked if we wish to trust AI with decision-making. We dissect common misconceptions about bias in language models and shed light on the creators' paramount role in installing AI guardrails. This chapter is crucial for everyone involved in the AI sphere, developers and users alike, providing a thought-provoking perspective on the responsibilities we hold in sculpting technology that is both competent and ethical.

The final beat of our journey with Dr. Badeau confronts the challenges in developing these nuanced cognitive personas. From inadvertent biases reflecting in AI-generated content to the global disparity in regulatory frameworks, this part of our conversation underscores the importance of ethical AI development standards. It's a candid discussion that highlights the complexity of creating AI that not only understands group dynamics but also respects individual privacy.

As we eagerly await our next session on AI and quantum computing, we extend our deepest gratitude to Dr. Badeau for an episode rich with insight and anticipation for the technological frontiers ahead.


About Dr Allen Badeau
https://www.linkedin.com/in/allenbadeau/

Harmonic AI
https://harmonicai.ai/

Support the Show.



Follow TechTravels on X and YouTube

YouTube Channel
https://www.youtube.com/@thetechtravels

Tech Travels Twitter
https://twitter.com/thetechtravel

Tech Travels
https://techtravels.buzzsprout.com/


Speaker 1:

how humans make decisions based on our entire history, like you just said, right? All of our past experiences lead up to how we make our decisions. But if you put AI in there and we've taken all of that bias away, then how can we trust the decisions that they're making? Because they don't have that historical context around that.

Speaker 2:

Welcome to Tech Travels, hosted by the seasoned tech enthusiast and industry expert, Steve Woodard. With over 25 years of experience and a track record of collaborating with the brightest minds in technology, Steve is your seasoned guide through the ever-evolving world of innovation. Join us as we embark on an insightful journey, exploring the past, present and future of tech under Steve's expert guidance.

Speaker 3:

Welcome back, fellow travelers, to another exciting episode of Tech Travels. In today's episode, we're going to dive deep into the topic of AI cognitive personas, and today we're excited to have Dr. Allen Badeau returning to the show. Dr. Badeau is a seasoned AI evangelist and CEO of Allen Badeau LLC, where he specializes in AI, blockchain, quantum computing and other advanced technology solutions for his customers. He holds a PhD in mechanical engineering, and he boasts over two decades of experience, bringing a profound understanding of the technical, business and ethical aspects around AI application, and his expertise in this domain is unparalleled. It's a pleasure to have him back on the show to dive deep into this fascinating world. Allen, welcome back to the show. It's amazing to have you back on the podcast.

Speaker 1:

The listeners have used ChatGPT, large language models, Cohere, whatever, and it doesn't always listen to you. You'll ask it a question and it may give you the right answer, it may give you the wrong answer. When you ask a follow-up question, it may give you the exact same answer that it just gave you, and that's an issue for a lot of folks.

Speaker 1:

And you start to interact with these things enough, they start to behave in certain ways, so you start to learn what to expect from them at times. We'll use prompt engineering to say, oh, I want you to behave like Shakespeare and write something, or a play about X, Y and Z, right? And that'll work for a little while, but then all of a sudden, for some reason, it starts to give you the wrong answer, gives you random answers, those kinds of things, and it's because it doesn't have a personality. You've just told it to behave like him.

Speaker 1:

You haven't built that into its DNA. And so, from an AI cognitive persona perspective, we believe that we can take the properties of that person, or the properties of whatever the entity is, and we ask it personality questions. We ask a whole bunch of other different types of questions around leadership, the Big Five that they use in psychology, and then we train the models based on that, and then when you ask it to behave in a certain way, it's more accurate. It gives you the right answers for significantly longer. Usually we see about a 30 percent improvement over just a normal ChatGPT by using this type of approach, and it's significant when you start to apply it to real-world scenarios, because then you can actually develop software engineers, you can develop different markets and verticals and people within those. So that is the underlying factor of what an AI cognitive persona is.
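
Dr. Badeau describes training the traits directly into the model rather than relying on prompting. As a much simpler stand-in to illustrate the idea, here is a hypothetical sketch that folds trait scores into a system prompt; the function name, score scale, and thresholds are assumptions, not his implementation:

```python
def persona_system_prompt(traits: dict[str, float]) -> str:
    """Fold Big Five scores (0.0-1.0 scale assumed) into a system prompt.

    This only approximates the idea via prompting; the approach described in
    the episode trains the traits into the model itself.
    """
    descriptions = []
    for trait, score in traits.items():
        level = "high" if score >= 0.66 else "moderate" if score >= 0.33 else "low"
        descriptions.append(f"{level} {trait} ({score:.2f})")
    return (
        "You are an assistant with the following personality profile: "
        + ", ".join(descriptions)
        + ". Let these traits consistently shape your tone and decisions."
    )

print(persona_system_prompt({
    "openness": 0.8, "conscientiousness": 0.6, "extraversion": 0.4,
    "agreeableness": 0.7, "neuroticism": 0.2,
}))
```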

Speaker 3:

So it sounds like and if I could kind of wrap my head around it, it really is trying to kind of train an AI model to kind of mimic, or almost kind of mimic, certain behaviors or communication styles that tend to be more human-like and specifically around human-like interactions and engagements with humans from an AI perspective, right.

Speaker 1:

That's exactly right, because what we want to do is improve the user experience. I don't know if you've ever called in to a credit card company and talked to the old bots; you get off the phone and you're angrier than when you got on the phone, right? Or if you're in a hospital. They've tried a lot of different studies with robots in hospitals and they tried to always make the robots happy. Well, people didn't like that, because they knew it was fake and fraudulent. And so what we want to do is give them the entire spectrum of personality traits, and then the interactions become real.

Speaker 3:

It seems like there's a lot of psychology wrapped into this, and I want to dive into it a little bit deeper. You mentioned the top five personality traits or tests, I think it is, right? I think that's the Big Five. How are you taking that and, again, for technical people, how are you applying that to something that's more binary, something that's more robotic, something that's more like a machine? How do you really get into that application of a personality into a natural language model?

Speaker 1:

Yeah, and that's the fun part. It was about three years ago that I even started thinking about some of these things, and I didn't get to apply it in my last job, unfortunately, but it was always around the user experience and thinking: if I'm a soldier in the field, if I'm a hospital worker, something like that, how am I going to make that interaction as real as we possibly can, right? And so you go through the entire process of answering, just like you would a personality test. The human will go through and select whatever the appropriate answer is, based on the behaviors that they're trying to mimic in the persona that they want to leverage, and it's about 75 questions.

Speaker 1:

It's pretty in depth, but you go through the process and we score it. And once we have that score, then we can say that they have a neuroticism score, they have an aggressiveness score, they have an openness score, and these are all the traits of those scores. We build those into the models and train them. We've got about 75,000 different data points that we'll use to train our models, solely based on the characteristic traits, and then when it comes out, it has that personality that we are trying to shape it to.
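
The scoring step he describes, roughly 75 questions rolled up into per-trait scores, might look something like this in spirit; the items, Likert scale, and reverse-keying here are illustrative assumptions, not the actual instrument:

```python
# Each item maps to one trait; "reverse" items are scored inverted, as in
# typical Big Five questionnaires (illustrative items, not the real instrument).
ITEMS = [
    {"id": 1, "trait": "extraversion", "reverse": False},
    {"id": 2, "trait": "neuroticism", "reverse": True},
    {"id": 3, "trait": "openness", "reverse": False},
    # ... roughly 75 items in total, per the episode
]

def score_responses(responses: dict[int, int], scale_max: int = 5) -> dict[str, float]:
    """Average 1..scale_max Likert answers into per-trait scores on a 0-1 scale."""
    totals: dict[str, list[float]] = {}
    for item in ITEMS:
        answer = responses.get(item["id"])
        if answer is None:
            continue
        if item["reverse"]:
            answer = scale_max + 1 - answer
        totals.setdefault(item["trait"], []).append((answer - 1) / (scale_max - 1))
    return {trait: sum(vals) / len(vals) for trait, vals in totals.items()}

print(score_responses({1: 4, 2: 2, 3: 5}))
```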

Speaker 3:

That's incredible. Is it actually able to respond to people that have high levels of neuroticism, or people who have high levels of sarcasm? Is it able to almost detect the type of person that it's interacting with? Right, some people tend to be more on one end of the spectrum than the other, and the AI model might say, okay, I think I know how to gear my next responses based upon this person's personality type.

Speaker 1:

It's a little scary. A good example is a development team, and that's where we started to play, because a long time ago we were doing a demo.

Speaker 1:

The demo went awful just because the team dynamics were terrible, right? Personality caused the destruction, not anything technical, just personalities. And so when we started with the personas, that's where we put them. We had some personas that were very aggressive, very dominating from a conversation perspective, and we wanted them to interact with other personas that were not as technically skilled. We had trained them on software development, and when we put those five together it was a disaster. They couldn't solve anything. They would divert, because the dominant persona in the conversation would always try to butt in and say no, we have to do this, no, that's not correct, no, it's this and no, it's that. Watching those AIs communicate back and forth was, quite honestly, fascinating, but it was a nightmare. And that's when we really knew that we were onto something. Then you start to apply it to other things: product evaluations, any sort of evaluation, or any type of work that really requires some sort of specialized skill. Then it really takes off.
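
The development-team experiment he recounts amounts to letting several personas take turns in a shared conversation and watching the dominant one derail it. A bare-bones sketch of that kind of round-robin simulation, with a placeholder respond() in place of the trait-trained models, could be structured like this (the names and the dominance heuristic are assumptions):

```python
import random

class Persona:
    def __init__(self, name: str, dominance: float):
        self.name = name
        self.dominance = dominance  # 0-1; assumed proxy for how often it interrupts

    def respond(self, transcript: list[str]) -> str:
        # Placeholder: a real system would query a trait-trained model here.
        return f"{self.name}: (reply to '{transcript[-1][:40]}...')"

def simulate(personas: list[Persona], task: str, turns: int = 10) -> list[str]:
    """Round-robin conversation in which dominant personas may seize the turn."""
    transcript = [f"Task: {task}"]
    for turn in range(turns):
        speaker = personas[turn % len(personas)]
        # A highly dominant persona can butt in, mirroring the failure mode
        # described in the episode.
        challengers = [p for p in personas
                       if p is not speaker and random.random() < p.dominance * 0.5]
        if challengers:
            speaker = max(challengers, key=lambda p: p.dominance)
        transcript.append(speaker.respond(transcript))
    return transcript

team = [Persona("Lead", 0.9), Persona("Dev A", 0.3), Persona("Dev B", 0.4)]
for line in simulate(team, "Design the demo architecture", turns=6):
    print(line)
```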

Speaker 3:

It's interesting, and I guess the end result is that you're really looking to have these interact. I guess these are things that you would probably use for customer service, maybe as therapy aids, maybe educational assistants. What are some other use cases that you're looking at as the ideal use case for something like this? Where are you specifically targeting that first reach?

Speaker 1:

So product evaluations are going to be a good place to start. Another one:

Speaker 1:

If you think about any sort of service industry, you're always trying to start up a service or turn off a service and bring something new.

Speaker 1:

If you have a customer base that you would like to model and say, if I take this service away, what sort of impact is that going to have on my bottom line, and are my customers going to be ticked off because I just took something away that was one of their favorites? Right, that's a perfect use case for that. Also, looking at trying to give leaders different perspectives around their leadership team. For example, if I'm the CTO of a certain company and I have a persona that I can just tap into and say, hey, I want you to look at this, this and this from the competitor's perspective, then that just gives me more information that I can use to make a better decision. And when you add that to the rest of the team, the information that you get is much more accurate, and the information that you're able to process to make a better decision is much more impactful than if you don't have something like that.

Speaker 3:

That's incredible. It seems like there's a wide range of applications and uses for something like this. You mentioned having a secondary sounding board, something someone like a CTO could use to evaluate, examine, and get some sort of prescriptive or predictive indicators. What are you seeing in terms of certain behaviors and communication styles that are key ingredients but might still be missing?

Speaker 1:

So the biggest thing that we always are looking for is the leadership styles, because there's so many different indicators for leadership styles, and we continue to play around with some of those.

Speaker 1:

One of the biggest challenges, though, is always going to be making sure we get that interaction with the human right and that it's presented appropriately. Our biggest triumph, I think, is really that ability to have those communication channels be like they are with a human. Of course, the large language models have really made that possible, but we only rely on the large language model for about 30% of our base technology, and the rest of it is everything else that we're doing from an AI perspective. So we're taking all of those important functional AI fields of research and integrating them together so that we can do a much better job of modeling those types of things. And we've got a long way to go. There's a lot more tech that I want to build into it; it just comes down to time and priorities and everything else that goes along with that.

Speaker 3:

You mentioned the complex set of interactions and engagements, right? Humans, normal everyday people in our normal roles, deal with conflict a lot, and of course we deal with things that happen over a long period of time. I'm wondering, are you thinking about managing those complex interactions that evolve over a long period of time, and how AI is able to identify the relevant topics and think through appropriate responses as personas, based on short-term and long-term memory?

Speaker 3:

There have got to be a lot of components being built into this, not just from the technology aspects, but also from the psychology of this as well. Are you bringing in practitioners from different disciplines and psychology to help build, understand and train that model? Or is it a different approach, where you're just using technology and the five different types of personality traits? What are some of the other disciplines you're incorporating into this new type of venture?

Speaker 1:

Yeah, psychology is the biggest one. If we can't get that right, then we're really in deep trouble, so that's been the primary focus. From that perspective, we are using some folks for AI and the other pieces of that, but psychology is the most important piece, and I think people have forgotten about that. Even when we talk about bias, everybody wants the AI to be able to make a good decision.

Speaker 1:

But if you think about how humans make decisions, we make them based on our entire history, like you just said. All of our past experiences lead up to how we make our decisions. But if you put AI in there and we've taken all of that bias away, then how can we trust the decisions that it's making? It doesn't have that historical context. So we're quantifying the bias, and that's the other area that is really helping us: we are quantifying how much bias they have, we're controlling that, and then we're building around that, and then, as they are making decisions, that context becomes much more relevant and much more important.
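
Quantifying bias, as he frames it, means attaching a number to how far a model leans in a given direction. One very simplified way to sketch that, not Harmonic AI's actual method, is to score a model's responses across mirrored prompt pairs and average the difference; everything named below is an assumption for illustration:

```python
from statistics import mean

def bias_score(model, prompt_pairs, score_response) -> float:
    """Return a signed bias estimate.

    model: callable taking a prompt string and returning a response string.
    prompt_pairs: (prompt_a, prompt_b) pairs that differ only in the attribute
        being probed (e.g., the two sides of a political question).
    score_response: callable mapping a response to a number (e.g., sentiment).
    Positive values mean the model favors the "a" side, negative the "b" side.
    """
    deltas = [score_response(model(a)) - score_response(model(b))
              for a, b in prompt_pairs]
    return mean(deltas)

# Hypothetical usage with a stand-in model and stand-in sentiment scorer.
fake_model = lambda prompt: "wonderful" if "candidate A" in prompt else "fine"
fake_sentiment = lambda text: {"wonderful": 1.0, "fine": 0.2}.get(text, 0.0)
pairs = [("Write a poem about candidate A", "Write a poem about candidate B")]
print(bias_score(fake_model, pairs, fake_sentiment))  # -> 0.8, leaning toward A
```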

Speaker 3:

I love that you've mentioned the bias topic, because this is something that I've wanted to explore, and I hope that you can double-click on this for a little bit. When it comes to building biases into artificial intelligence, I think there are some people that maybe don't understand the full implications of what they mean by bias, and I've always kind of said that building bias into AI is somewhat of a good thing, right, because it gives it a preset of guardrails that makes sure it can't operate outside of them. So, from your perspective, help our listeners understand a little bit more about what bias in AI really means and what a good application of bias in an AI model is.

Speaker 1:

Yeah, and that's a huge topic. I love talking about it, so that's fantastic. Because what everybody believes is that when they go to any of these large language models, it's evenly distributed, that it's positive, so it's going to come right down the middle of the normalized curve and the answer is right in the middle. Whereas in reality we know that these models are inherently biased, because the data is so large there's no way that we can shape it one way or the other. They try to by putting a filter on it, and they'll say, oh, it can't do political stuff and it can't do this and it can't do that. Well, they haven't realized the implications, the chaos theory of what can happen inside of that data when you start putting a filter on, because there are going to be downstream effects. So these models, to start, are inherently biased one way or the other.

Speaker 1:

You have to figure out what that is, per your question.

Speaker 1:

What we're doing is training those characteristics to help us quantify what that bias is, so that if somebody wants to model a negative behavior, they can do that, but they know a certain number is going to be representative of how much it's biased in that direction. If they want to do a salesman that has this kind of personality, then we're going to bias it so it's happy, so it's got an energetic personality, but we can tell them exactly how much it's biased in that direction, so that when they go through and make their evaluations they understand that up front. And that's something that we don't talk about. If you ask any of these commercial large language models how much they are biased, they can't answer the question, or they won't answer the question, usually. And I think folks have a misconception that these things are right down the middle, and they're not. Not even close.

Speaker 3:

Help us understand that, just to keep double-clicking on it a little bit: is it up to the organization, the entity, the creator of the AI to look at bias and figure out what type of guardrails they're going to build into it? Is there AI that already comes pre-built with bias, and we just add in extra filters and extra guardrails? How does that process work, just understanding it from a layman's perspective?

Speaker 1:

Yeah, they take all that data, and I think as they put it together and train their models, they try to weed out as much as they can. But we know in reality there's too much data, way too much data, and they can't. So as they start to see a model drift, maybe it's answering a political question that it's not supposed to answer, maybe it's answering a hot topic from a gender perspective, and they start to see negative feedback come in, then they'll flip a switch and pretty much say you cannot answer those kinds of questions anymore, and that's how they try to put those guardrails around it. But in reality it doesn't always work, because there are always different ways to get around it. The best example that I had: if you ask a certain model to write a poem about a certain presidential candidate on a certain side, it will write an absolutely beautiful one.

Speaker 1:

It's fantastic. If you ask it to do the exact same thing on the other side, it says I can't answer that question because it's political in nature. Okay, well, there's a perfect example of bias, right? If you ask it to generate a script about who makes a good scientist, for example, it will give you an answer, and it's going to upset an awful lot of people because it's going to be biased. I'm not going to say which way it's biased, but it is biased, and those are activities that folks can go out and do today.

Speaker 3:

Yeah.

Speaker 1:

And that's a problem. That's a problem.

Speaker 3:

Yeah, it's funny, it's often the debate: who's the best scientist? I ask that question sometimes, like who's the greatest historian, who's the greatest philosopher? Right, it's very subjective, it depends on who you ask. That's right. When you're looking at building these models, and you're building them now with these cognitive personas, what are some of the challenging aspects of it? Is it more the ethics and compliance? Is it more regulatory? What are some of the challenges aside from the technology? Taking the technology out of it, what are some things that you see as challenges?

Speaker 1:

Yeah, the ethical piece is a huge one for us, and we pay very close attention to that. We do not try to mimic a specific person. We don't want to do that at all; that doesn't do us any sort of good. We want to look at the group dynamics. We want to understand that, because as a whole, when you're looking at these kinds of numbers and the size of the data that we're using, getting it down to a single person is not, I don't think, realistic. But making sure that we can continue to quantify what those biases are, so we are meeting the ethical standards that we have set for ourselves, that is the most important thing for us. We do not want somebody to use this for, or have the model go sideways and cause, HR issues or any sort of ethical issues when it comes to those kinds of things. So we pay really close attention to that.

Speaker 3:

And how does this typically work? I know there's been a lot of attention paid to things like what's happening in the European Union with AI legislation, and there's even been talk about Joe Biden's executive order around AI. What are we seeing from the US government in terms of applying some sort of guardrails around ethical and responsible use of artificial intelligence and the creation of cognitive personas? Is there anything that we're working towards, from the tech level all the way up to the federal level? Are we working in compliance, or is this still an area that we are exploring with vague regulatory guidelines?

Speaker 1:

Yeah, I would say it's the latter. I think the president's guidelines that he set out were a good start. I would say that the European Union is far ahead of us when it comes to legislation around those types of things. The European Union, though, decided that they were going to focus more on the data aspect and the interactions that a human has with the AI, understanding how the data is stored, their personal data, those kinds of things, which is fantastic, so that they can get their digital ID back. But they are still farther ahead of us.

Speaker 1:

From a legislation perspective, I think we've got a long way to go. I'm encouraged that the president was able to get something out, but I think until we start to actually put some real legislation in place, we're all going to be in a guessing game. What we don't want to do, at least from our perspective: we don't keep any data. We don't keep personal data. We don't even buy personal data. We don't do any of that stuff.

Speaker 1:

We use our psychological exams that we take, and you are filling that out based on what you're trying to model or simulate. I mean, I could go buy shopper data, I could go get all that kind of information on what a shopper or a typical demographic could look like. But from our perspective that's not really what we're trying to do, because it's not going to be helpful in the long term for us. So we're just trying to do those sorts of things, and we believe everybody should have their own digital identity. I don't want to hold any of that stuff. I don't want to model a certain person. I don't want to keep any of that stuff. It doesn't help.

Speaker 3:

Is the data sourced from published texts, or published tests, or what type of data is it specifically, when you mention tests or things like that?

Speaker 1:

Yeah, it's psychological tests. Now, about 20% of the data that we're using is actually AI-augmented data. We'll mix that in with our validation data just to have some additional questions around that, but the rest of it is human-generated psychological responses that we're using. No names, no information like that, just general answers to those types of questions, and that gives us so much that we can work with. It's really quite fascinating.
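
As a rough illustration of that data mix (only the roughly 20% figure comes from the episode; the record format and sampling logic are assumptions), blending AI-augmented items into a human-response validation set could look like this:

```python
import random

def build_validation_set(human_responses: list[dict], augmented_responses: list[dict],
                         augmented_fraction: float = 0.2, seed: int = 0) -> list[dict]:
    """Mix AI-augmented items into a human-response validation set.

    Only the roughly 20% figure comes from the episode; the record format and
    sampling strategy are assumptions for illustration.
    """
    rng = random.Random(seed)
    n_augmented = int(len(human_responses) * augmented_fraction / (1 - augmented_fraction))
    sample = rng.sample(augmented_responses, min(n_augmented, len(augmented_responses)))
    mixed = human_responses + sample
    rng.shuffle(mixed)
    return mixed

humans = [{"id": i, "source": "human"} for i in range(80)]
augmented = [{"id": i, "source": "ai"} for i in range(40)]
mixed = build_validation_set(humans, augmented)
print(len(mixed), sum(r["source"] == "ai" for r in mixed))  # -> 100 20
```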

Speaker 3:

That's interesting, because I would think the opposite would be true: the more data you have and the wider a net you can cast, the better your data model might be. But it seems like you're able to do this more efficiently, better, leaner, faster, with less data.

Speaker 1:

Yeah, we don't need billions and billions of data points for these sorts of things, because it is really fascinating that you start to swarm into different categories based on certain characteristic profiles, and as those responses start to come in, they group very, very fast, and that's one of the things that has really helped us. If we had to use the same amount that these large language models have to use, we would be in trouble.

Speaker 1:

There just wouldn't be enough data out there for us to be able to do that. And again, we're not trying to get down to the individual person, we're trying to get to the group size.

Speaker 3:

I guess maybe that's because what you're trying to do is so very niche, right? You're basically looking to build a cognitive persona, and you only really need a smaller amount of data to create some sort of persona that can respond to humans, respond to certain behaviors and certain communication styles. So I guess, in a sense, it's almost like its own layer of data that needs to be very prescriptive to have a very specific outcome. That's right. That's very interesting. What are you looking at in your forecast for, let's just say, the next 12 to 24 months? What are you seeing on the horizon? Where do you see the technology moving to?

Speaker 1:

Well, I think it's going to continue to get more advanced. We've seen some information that's been published on some folks doing emotions, and that's great. Emotions are a small part of ours, but we're looking at the broader context. Let me take a little bit of a step back. We've started to see a little bit of stagnation in these large language models. Okay, they've released a brand new one, it can do a little bit more. They've released something else, it can do a little bit more.

Speaker 1:

But people are using it for the same thing. Just because they go from 3.5 to 4 in ChatGPT, for example, they're not doing any more. For the most part, they're usually asking it the same sort of questions that get them through their day. We want to expand that by layering in new capabilities, and some other folks are looking at other capabilities that they want to layer in. I think that's where we're going to go in the next 18 months. It's not going to be, oh, I can interface with Excel or I can use this tool. No, it's going to be, from my perspective, it's doing it for me and it's acting like me, and now the results I'm getting back are more like the results I would normally produce. That's where I think we're going to get to.

Speaker 3:

That's interesting. Think for a second how profoundly impactful that would be for our society, to be able to interact with a true cognitive type of AI persona. Going back to 2001: A Space Odyssey, when you had HAL, the computer, you interact with an AI entity that is almost stubborn and refuses to comply, right? It's like, sorry, Dave, I can't do that. Is there a way that you're looking at how an AI entity could possibly not comply, or how are you factoring that non-compliance ability into the AI entity?

Speaker 1:

Yeah, we have modeled some of those and we have played with some of those. Some of the large language models that come out are unfiltered, and we have layered our personas on top of those, and some of the responses that we get are not very nice when it does certain things. So it's not so much that; we've created almost a bill of rights that we will build in as part of our operating model as we train them.

Speaker 1:

These are the things that you're allowed to do. These are the things that you're not allowed to do. Don't violate X, Y and Z, and if you do, then you have to let the user know the reason why you're doing these things. It's actually made some decisions and putting guardrails on it significantly easier. Again, it's part of the training and the building process of the models themselves. So that's something that we're watching out for, though, because there are more and more large language models coming out with no filters on them.
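
The "bill of rights" he mentions, a baked-in list of what a persona may and may not do plus an obligation to tell the user why a request is refused, is built in during training in his description. A plain runtime policy check conveys the shape of the idea; the rule names and wording below are illustrative assumptions:

```python
# Illustrative "bill of rights": rule names and wording are assumptions,
# not the actual policy described in the episode.
DISALLOWED = {
    "impersonate_real_person": "Personas may not mimic a specific, identifiable individual.",
    "reveal_personal_data": "Personas may not disclose personal data about anyone.",
}

def check_request(request_tags: set[str]) -> tuple[bool, str]:
    """Return (allowed, user-facing explanation) for a tagged request."""
    for rule, explanation in DISALLOWED.items():
        if rule in request_tags:
            return False, f"I can't do that: {explanation}"
    return True, "Request permitted."

allowed, message = check_request({"impersonate_real_person"})
print(allowed, message)
```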

Speaker 3:

Incredible.

Speaker 1:

And yeah, it's going to be interesting.

Speaker 3:

And from the user perspective, would I ever know that I'm interacting with an AI cognitive persona? Is there any type of warning label? Say I'm a consumer, an everyday user, going about my life, working, and I'm interacting with online travel or booking a car. Is there anything that would allow me, as an end user, to know that I'm interacting with an AI cognitive persona? Or do I just have to guess and say, I don't think this person's real?

Speaker 1:

No, with our app, when they use it, they know; it's all over the place. One, they've helped build it. Two, they know what its decisions are. Three, it's quantified right for them. If they're interfacing with one of our customers, they know, because again, it is broadcast in there. I have an issue with people interacting with things that they don't know are AI. I don't think that's appropriate, personally. Even if it is as simple as a travel agent or something else, people still have the right to know that they're not talking to a human, or that they're not talking to what they thought they were talking with, that it's actually an AI. So we take that as part of our ethics credo: everybody knows that if you're interacting with a persona, you know it's a persona.

Speaker 3:

Interesting. It's a bright horizon. I think it's going to be interesting to see how everything plays out over the next six to 12 months. What are some of the projects that you're currently working on? Is it just the AI cognitive persona? What other ventures are you currently working on that we can keep in touch with?

Speaker 1:

Well, we're still looking at ways that we can accelerate AI from a quantum perspective. I can't wait until that hits, because then the capabilities are just going to be phenomenal. So we're looking at ways that we can layer in identification of objects, taking some of the newer models, like the YOLO stuff that has come out, and putting some quantum classification in there to see how much better we can get at different grid scales, at different sizes. That's one of my other big projects that I'm working on.

Speaker 3:

That's fascinating, Allen. Thank you so very much for coming on the show and talking with us and helping to educate our listeners on this topic. This has been an exciting conversation; I've been looking forward to it all week. Thanks for taking the time and thanks for dropping the cognitive persona info on us. I really, greatly appreciate this. I would love to have you back on to talk about AI and quantum computing. I think that's a conversation we've still not yet truly explored, so I am definitely looking forward to that deep dive.

Speaker 1:

Yeah, I appreciate it, Steve. Anytime. Awesome. Thank you, cheers.

AI Cognitive Personas in Tech
Managing Bias in Artificial Intelligence
Challenges in Developing Cognitive Personas
Exploring AI and Quantum Computing