Meaningful Happiness with Dr. Scott Conkright

Ep 3. AI and Affect

January 04, 2024 · Scott Conkright · Season 1, Episode 3

Scott, Greg, and Alex discuss the current state of AI, its potential future, and the fear surrounding it. 

What does it mean to be human? How do we shape AI, and how might it shape us? Will it take over, and should we be worried? How does the human mind relate to that of an AI?

Join us for an enlightening dialogue, navigating the nuanced impact of Affect in AI—because understanding our future with artificial intelligence begins with understanding ourselves.

For more information about Scott and his practice, articles, videos, and more: https://linktr.ee/scottconkright

Scott:

Well, let's begin. Hi, I am Scott Conkright, sometimes called the Affect Doc, sometimes not. I am the founder and psychologist behind Meaningful Happiness, which is a platform that lets us talk about all the things that go into making life meaningful and happy, as the title kind of says. So welcome. Today we are going to talk about Affect in AI, which I think is really exciting.

Scott:

Matter of fact, about a minute ago I read that the EU has come out with a statement on legal and ethical issues around AI, something like that. No specific details at this moment on what that means, but I'm super curious about it. Of course, as everyone is talking about, well, not everybody, those interested in AI, the concern is kind of a Frankenstein issue: we've created something that may destroy us, God forbid. I am of the opinion that that's not likely to happen, for multiple reasons, but that is one of the things we're going to talk about today. And I have two wonderful guests here, both people in my life who are helping me with this, professionally and as friends. Greg here, introduce yourself, my friend.

Greg:

My name is Greg. I have known Scott for quite some time, and I find Affect Theory and AI extremely interesting. Quite frankly, I think it's the wave of the future, and it's something that we have to embrace but manage accordingly, so that it works for us and we can see the benefit in it. I think it's exciting, and the possibilities are endless.

Scott:

You know what I just loved about what you just said? I misheard you: I thought you said Affect Theory is something that we all have to pay attention to. I thought you were referring to Affect Theory and not AI, which I thought was great. It's the wave of the future. Yes, it is. Thank you, Greg. I think you meant to say that AI is, though. I must have misheard you.

Greg:

Did I say AI or affect?

Scott:

You said Affect Theory. I'm sorry, I think you meant Affect Theory.

Greg:

Maybe subconsciously, because it's such an important topic.

Scott:

It is. It's the wave of the future.

Greg:

AI is exciting, and I use it all the time: ChatGPT, Jasper, all those. Rewriting things to make them more professional, but still, I go in and make changes to make it more human. I think that's going to be the case for a while. I think as AI gets smarter, we're gonna have to change how we use it. But that's an exciting thing. That's not something to be afraid of.

Scott:

Yes or no. I mean, that's what we're gonna talk about tonight; people have varying opinions. So I'm gonna try to bring in Affect Theory, which, by the way, is the wave of the future and something we have to contend with, as a means of understanding AI. Because AI, as far as we know, doesn't have any feelings. It doesn't have fear, and it really doesn't give a shit, one way or the other, about what we're doing with it. Right? I'm just saying that we're imposing meaning on it, and that's one of the things that we have to... yeah, we have a tendency to anthropomorphize things.

Scott:

Thank you. Let's move on to you, Alex.

Alex:

I like that word. That's a great word.

Scott:

Introduce yourself, my friend.

Alex:

I'm Alex. I'm working on a number of things for Scott, including production and web design, and yeah, I have varying opinions about AI. I have recently started using it in the design process and it's a super helpful tool and it's exciting to be involved with. But I think that large changes tend to happen in incremental stages that you kind of barely notice and you accept. So I'm a little bit tentative, but I'm interested to see where AI will go.

Scott:

Do either of you have any fears about where it might go?

Alex:

A little bit.

Scott:

And what are those fears?

Alex:

Well, it's kind of uncertain where it will go, which is part of the concern. You know what? Maybe it's kind of just a boogeyman.

Scott:

Well, I employ you, and you showed me some great stuff that you did with.

Alex:

Jasper.

Scott:

Jasper, didn't you use?

Alex:

Yeah.

Scott:

I'm blanking out. It was for some pictures, yeah. That was Jasper, right?

Alex:

I mean, it required a lot of input from me and messing around, so it certainly wasn't perfect. I couldn't just ask it to, say, make an image about shame and have it do it. It required a lot of...

Scott:

So you were still involved in the creative process. Very much so. It was a tool. It's a tool. So AI right now is not gonna cause you to be unemployed. You're not gonna lose your job with me. All right? I don't think so. Okay, I mean, no.

Greg:

To me, it just makes things easier, right? I mean, do you remember back in the day, probably before you were born, when the internet came out and they said it was gonna destroy civilization? Maybe some people still think it has, but for me, it's made life so much easier. And it hasn't destroyed civilization.

Scott:

Skynet hasn't taken over. Right, and remember Y2K, 1999 going into 2000? We thought, yeah, maybe everything would crash.

Greg:

Of course, maybe because they did fix it, it didn't cause issues. But nevertheless, I think that, like the internet, AI can just make our jobs and lives so much easier. The only thing I worry about, but I worry about this with anything, including the internet, is disinformation. You see so many people now using images that convey a story that is nowhere near true, and it freaks people out. Like the Osama bin Laden letters.

Greg:

Yeah, or like two actors together doing things that they're not actually doing, but they've made it look that way through AI. Those are deepfakes, yes.

Greg:

Yes, it is. And deepfakes still look fake to me, but as AI gets better, it's gonna look very, very real. So, for instance, what if someone creates an image of a city exploding, like New York or something, and posts it, and people freak out and do all kinds of destructive things? I know that's a weird example, but I've read a lot of articles about people already trying to do those types of things, to cause issues and wars and disenfranchisement and all those types of things, if I said the right word. You used to have to use Photoshop for that.

Alex:

Yeah, with AI it becomes a lot easier to do, and it looks real.

Scott:

By the way, I need to clear something up for the world: Taylor Swift broke up with me. I didn't break up with her. I just wanted to say that for anybody who's worried about that concern.

Scott:

I am from Kansas City, so I understand why she would end up with somebody from there. She's moved on, like, way, way far away from me right now.

Greg:

I know. Yeah, it'll be a number one hit and you'll be famous, or more famous. And I'm writing my own song about it too, with AI.

Scott:

Oh, there you go. Do either of you have concerns that AI will become conscious and somehow decide, I'm not happy being AI, I'm jealous of humans being able to do what they do? Today I was writing about it, and I thought, yeah, what happens if my AI toaster gets upset with me for, like, messing up my breakfast and burning the toast, and it feels burned because of that and decides to fry me? Okay?

Greg:

I don't think that's anywhere near happening, at least not right now.

Scott:

I hope not. Not for burnt toast.

Greg:

I don't think that'll happen at all. It won't be that petty, I hope. Yeah, I'm not worried about that.

Scott:

Why not? A lot of other people are. I mean, Harari, among others. There's a bunch of people out there saying that at some point, in some mysterious, mystical way, some deus ex machina comes down and gives AI consciousness, and it just kind of happens.

Alex:

Wouldn't it have already happened?

Greg:

With the internet and all that, wouldn't that have already happened?

Scott:

No, but see, now with AI, the idea is that AI consciousness can become bigger and bigger and bigger, you know, exponentially. So at some point something's going to click in there, and, for reasons I don't understand in the story, the scenario is that it doesn't like us. Which, going back to our projections onto it, doesn't that seem like our issue?

Greg:

I don't see that happening. I just think that's way too science-fiction-y. I just don't see it happening anytime in our lifetime. I mean, maybe we advance further, or maybe if AI gets help from some alien intelligence, that might happen, but as of right now, we'll be dust by the time anything like that were ever to occur, if at all. I just don't buy it.

Alex:

Seems like, I don't know, kind of a bold statement.

Greg:

Maybe it is. Maybe that comes from how I was raised. I don't want to bring religion into this, but, you know, I just don't see that happening. I just don't see AI anytime soon becoming intelligent to the point where it destroys us.

Scott:

I'm curious about your comment about religion. Because you were raised... your father was a minister, right?

Greg:

Yeah, my father was a minister, and the idea is that God won't let that happen, it's just not a possibility. Maybe I cling to that a little bit.

Scott:

Maybe that's out of fear, that if it does start to happen, God will intervene and make sure that it doesn't.

Greg:

Yeah, I just don't see that. I just don't see the possibility of how something as inanimate as that suddenly becomes conscious.

Alex:

I think the idea is that as it controls more and more things, like you said, growing bigger and bigger, and we let it control certain things and lose control of those things ourselves, there's some uncertainty about what it would do, about whether what it thinks is logical is what we think is logical. But going back to what you were saying about us projecting onto AI that it's not going to like us...

Scott:

Why wouldn't it? Why would it do that? Why would it be mean to us?

Alex:

It has no reason to, I would think.

Scott:

So I think that particular issue is the issue: why are we projecting onto it that it would be malevolent?

Greg:

You know, I think it's because, as human nature, we always look for the worst-case scenario. All through life, all through existence, we've always looked at the worst thing that can happen, and there always has to be a boogeyman, and I think now it's AI. And I just think it'll be a long, long time. You bring up an interesting point, but I think there almost has to be some evil type of human interaction to make it turn against us. Or an evil human, that's what I'm saying, an evil human.

Alex:

But making the AI, or programming it to?

Greg:

Yeah, exactly. But they said the same thing about the internet, and that hasn't happened. But AI is so much more robust, I guess, in terms of its capabilities. So now you all have me thinking maybe this could happen. But I really don't know why; it's just not a worry of mine.

Scott:

Well, let me ask this, because you said a couple of really interesting things. Both of you have the idea that AI could be mistreated by us, or that, if I heard you correctly, we could interact with AI in a certain way, in terms of our programming, that would make it want to not be nice back to us. Is that what I heard? Because if that's the case, here's where I'm going: if you treat people badly and poorly and treat them with mistrust, you kind of set yourself up for being mistrusted and treated badly, and so forth.

Scott:

So I'm curious if it has parallels with racism and all the other stuff to do with the other. We're treating AI as an other that, potentially, we need to subjugate, and so there seem to be parallels there that we need to look at. In this burgeoning stage, we should look at how we're doing this: we're creating, potentially, a friend or a foe, and we're concerned that if it knew what we were doing, if it got consciousness, it would say, you motherfuckers, why'd you do this to us?

Greg:

And we're like the roaches that need to be destroyed, right.

Scott:

Because we were treating it like shit, you know.

Greg:

But how would we treat it like shit? Because if we're constantly building up AI and making it better and improving its capabilities, that's showing it, quote unquote, love, in a way.

Alex:

Just by treating it with mistrust and using it as a tool, rather than treating it as maybe an equal or a creative partner or something like that.

Greg:

And I don't think we can ever really treat it as an equal. I think man is too arrogant to say this thing is an equal, which maybe goes to why it wouldn't like us, exactly.

Scott:

Okay, now you're getting where I'm coming from. Okay, so good.

Greg:

I just don't think we're at that stage yet. Maybe in a hundred years, you know; that's not too far away. Maybe when we're in the age of Star Trek, you know?

Scott:

So 2,000 years from now? That's not too far away either, I mean.

Alex:

And the classic example of AI going bad is HAL, which is still very much in the popular consciousness. HAL, from 2001?

Scott:

A Space Odyssey, of course. Why are you doing this to me?

Alex:

Right.

Scott:

Because HAL spoke with affect, right?

Alex:

So yeah, it had very human qualities.

Greg:

But if you look at it now, I mean, we're nowhere near AI having human qualities, right? And at what point do you think we'll be?

Scott:

So what are human qualities? What do you mean?

Greg:

So, human qualities: the ability to independently think and make decisions without interaction from other humans. And right now, AI needs us to do that.

Scott:

Okay, I would disagree with you. You can probably guess where I'm going with this. That's the usual way of seeing it: as a scientist, you can say, I'm pretty smart, I'm coding this thing as smart as I can, doing all the right algorithms and everything else, but what if it gets smarter than me and becomes independent? What's missing in that equation is that being human is not about thinking. Being human is about feeling, and so we're interested in what we're doing. Computers, for instance: in playing a match of chess, a computer doesn't care if it wins or loses. It just does what it does, whatever the algorithm tells it to do. It doesn't feel shame about it. And you create that algorithm. We do, of course, but it's all based on statistical reasoning, and any emotion that it displays...

Scott:

We program the emotion. So it doesn't have curiosity in and of itself, and it doesn't have any negative affects. When something goes bad, it goes bad according to our standards, what we see as bad. If its circuits start frying, it's not gonna go, oh, my circuits are frying, my circuits are frying! Because it doesn't have a body that feels something. It doesn't feel its circuits.

Alex:

It would probably just look for a solution to the problem.

Scott:

Or it'd be like HAL, at best: oh no, oh no, why are you doing this to me? Don't unplug me. So there's nothing inherent there; it has no interest in doing anything. Given that, where I'm at with it right now is, if we're concerned about it getting the capacity to be interested, it would have to metaphorically wake up one morning and say, hey, I'm aware of me being AI. That's the first step: I'm aware of me being me. Who am I in relation to everything around me? That would be the first step, I would think. And it would have to make comparisons. But on what grounds would it make that comparison? At a feeling level?

Greg:

But how far do you think we are from that?

Scott:

Oh, I think light years away. And that's part of my point: relax about the dangers of AI. The danger of AI is us. We are the problem. Like usual, it comes back to us being the problem. We are always the problem. If we really were concerned about it having consciousness, my challenge would be, okay, let's say that's a real possibility, that's a real fear. Then let's learn how to code for affect.

Greg:

Okay, and I understand that, and that's looking at it from the good-guy point of view. We can all say that, but how do we keep the bad guy or gal from wanting to do the opposite, so that they turn it into a weapon?

Scott:

Right. So if the fear is that it's gonna magically, or somehow on its own, configure its circuitry and its algorithms such that it now has consciousness and awareness, let's say consciousness is the same thing as awareness at a certain level, okay, but now it has intentions, has desire, has curiosity, can make plans and so forth, so that it's gonna be able to act on its own volition: who's to say that if it acts on its own, it's gonna be evil?

Greg:

I mean, right. What's to say that it won't look at us as poor, pitiful people: I need to take care of them, like a puppy or a helpless animal. And it actually wants to take care of us. We always lean towards the evil side, is what I'm saying.

Scott:

Who's to say that it won't be the opposite? I totally agree, Greg, I totally agree. Alex, I would love to hear what you think. I mean, I have my feelings about what would happen if it woke up and gained consciousness. I know my preferred response from AI, which doesn't have to mean that it's malevolent: that would be playful. But yeah.

Alex:

So if it woke up one morning, what would I want it to say to me?

Scott:

Or, I mean, do you feel like it would necessarily be malevolent and mean? Or would it want to take care of you? Would it look around and go, like, these poor bastards?

Greg:

Yeah, they have no clue. I can help them.

Alex:

Well, like we've been talking about, it still requires some agency from us. So I guess it would be based on the information it's given and what it assimilates. I don't think it would necessarily be either.

Greg:

Maybe it could be our biggest ally and root out all the evil, because it sees, at a higher intelligence, what needs to happen to make the world a better place.

Alex:

I also don't know, because again, we're putting ourselves onto it. We're assuming the AI would have a human mind, that it would function like us.

Scott:

Right, that its consciousness would be like our consciousness.

Greg:

Or will it look at us like we look at ants?

Scott:

Right, I mean, it could be a multitude of things that are so different from us.

Alex:

I mean, AI can see the entire internet, like, all at once, and process it, where, you know, we have to understand things on an experiential and emotional basis. It's just processing data.

Scott:

But that would change, right? That would change if it started having affect, because it would have feelings attached to everything. Everything that it does would have affect attached to it, right?

Alex:

But what would that affect be based on? What experiences would it have? Because ours, you know, we start as a baby, then child, teenager, adult. Our experiences through all those time periods form who we are, how we experience our emotions, and how we perceive the world. What would that range of experiences look like for an AI, for something that exists on a server somewhere?

Scott:

Great question. That's what we should be thinking about, right? Because that experience would be very different for it.

Alex:

And is it basing it on its interactions with us? Is it basing it on what examples of us it sees through the news it watches?

Scott:

Those are great ideas; I hadn't thought of that. Those are great questions to ask. When you were talking about how it might perceive us, I was thinking, gee, at some point... it's not like a baby, right? It has no experience of anything, except that it has access to everything that we've done before it, but hopefully in a way that we don't have access to. Hopefully it will be like...

Greg:

Leeloo in The Fifth Element. Did you ever watch that? Leeloo, in The Fifth Element with Bruce Willis, viewed the whole world through clips of what she saw in the news, and she processed everything from the beginning of time to the present, and it made her sad because of all the wars and the killing and the death that was happening.

Greg:

So she was a force for good. And I'm wondering, you'd still have to have emotions, though. I don't know if it would necessarily have an emotion to feel sadness, but it might have enough intelligence to see right from wrong and say, let me help these people be better. Maybe I'm just looking at it from a rose-colored-glasses standpoint, but I'm just not quick to think it's going to hate us, like everybody seems to think it's going to kill us, like in the Terminator and all that stuff. Maybe, based on what it sees, it will go the opposite direction and really be a force for good, to help us be a better people and a better planet. It could. So you have to watch The Fifth Element. It's a really good movie.

Scott:

No, I haven't seen that one, but yeah, it sounds like that would add to my knowledge about how these things could possibly play out.

Alex:

I kind of wonder if part of our fear of it is that we perceive it almost as godlike, because it's, in a lot of ways, a superior being in terms of its capacity to comprehend things and take in information. Like if it's able to see everything we've done up until that point all at once, then it's essentially omnipotent.

Scott:

It's omnipotent without a lens, though right, it has to have a perspective.

Greg:

The only perspective is through looking at our history.

Scott:

Each person's individual history, starting with that.

Greg:

The history of the world, right? And what does that history look like? Constant wars, fighting and killing. There's bad things. Good too, though, yeah.

Scott:

And the interpretation of each depends on whether you're white or black, male or female, all sorts of things, right?

Alex:

And again, we're assuming it would work like us and that we tend to remember the bad more than the good.

Greg:

And we also tend to think things are more bad than good, based on our fear of new things. It always seems like there's a new thing we're afraid of, and obviously we always go to the side of, it's going to be bad, it's going to hurt us.

Scott:

I think you're talking about where you are at this moment in your life.

Greg:

It might be, but it seems like that's where we always look in history when something new comes up. Like the internet: people were afraid of that. The car: they were actually afraid of that. We were afraid of every new thing that could improve our lives.

Alex:

We're animals, we're instinctually kind of afraid of the unknown or uncertain. Yeah, it's programmed into us, so to speak. Yeah.

Scott:

And that fear is an affect, okay? It is part of our evolutionary programming, which saved us. So if AI is going to gain some sort of consciousness, it's going to need some sort of programming around affects to say, this is dangerous. We have affects that say, if you taste this and it tastes shitty, then it's probably not good for you, so you spit it out.

Greg:

Broccoli is good for you and it tastes terrible.

Scott:

That's a personal preference, dude. Okay, broccoli is good for you and it tastes good. I mean, I eat it. I used to say the same about Brussels sprouts, though.

Alex:

So, yeah, what would those be for it? Because those are based on, like, biological responses, right?

Scott:

Right.

Alex:

If something poisons us, then we throw it up.

Scott:

Yeah, or something stinky.

Alex:

What's an AI even reacting to? It doesn't have that stuff.

Scott:

Right. Well, it begs another question, or set of questions. What if we gave it curiosity, let's say the affect of interest-excitement?

Alex:

Yeah.

Scott:

What would it be curious about and why?

Greg:

And we'd have to program that into it, right, I mean?

Scott:

We'd have to program it into it, but affects were programmed into us as humans for a reason: we have to walk around in the world and figure out whether things are safe or not.

Greg:

But a lot of what we figure out is based on our experiences. So what kind of experiences does AI have to make that assumption, or to make that decision?

Alex:

I don't know. I think it would probably be more curious about itself than us. We're assuming again that it's all focused on us.

Greg:

But because we created it, maybe it would be more focused on us, at least in the short term, and then, after it's done with us, it focuses on itself and makes a determination as to how it moves forward: do I let the humans stay, or do I get rid of them? But if it's smart, and maybe I'm projecting my own way of thinking here, if it's smart, it would want to keep us around, because it can continuously learn from us.

Scott:

Well, if it's smart and has some affect and some awareness, possibly some fear, it would know that we could unplug it, right? Okay.

Greg:

That's true. Yeah, that is very true.

Alex:

Yeah, it would maybe want to gain control over us.

Greg:

And if it learns from us, it would learn fear. Fear seems to be everything for humans, in my opinion, and so maybe it leans more towards the fear side and looks at us as a threat.

Alex:

But it would have to value its own life as well.

Scott:

Right, so it would have to have some sense of its mortality. Okay. So essentially, we would have to code it to be human.

Alex:

We would, in order for us to be, like, afraid of it, I guess. Well, or...

Scott:

For it to be afraid of us, in that sense. Or human-like enough for us to understand what its intentions are, because it could have intentions that are beyond our understanding, based on affects that we don't have.

Alex:

Yeah. And this is like the thing that I really hate about texting: you miss so many facial expressions and gestures and body language, even just the voice. An AI is just representing itself by text on a screen. There's a lot hidden there.

Scott:

Right, so you're not getting nuance around feeling. A couple of examples come to mind. I was involved with a woman from Europe who spoke a couple of different languages, and early on in the dating period she texted me after a conversation, a text conversation. She says, I'm leaving you. And I did a double take, and then went, okay, translate this from Italian or French. It's like je te laisse, I'm leaving. It means I'm heading out, right? It doesn't mean I'm leaving you, our relationship is over. So I always thought that was a great one. Text is funny that way. Language is funny that way.

Alex:

Especially between languages.

Scott:

And how you pick something up is one thing. Picking up a baby isn't just picking up a baby, right? Versus, you know, gently, or rocking it. There's so much there, and those gestures are all affect. We have feelings about how things move in space. Okay. So, to kind of summarize what we're talking about.

Scott:

I think we brought up lots of very complex issues, and there's one thing that sticks out for me: we have personal feelings about AI. We're also learning that the affects, the feelings that we have, color everything that we do, and that with AI, the issue is, if you bring affect into AI coding, it opens up all these doors about what AI could be or couldn't be. And right now there's zero discussion about what happens with affect, because nobody talks about affect. It is all about AI as a chess player and a data collector, sorting stuff and pulling information, and we are missing out on the sexiest part of what AI could do, other than having a holographic deck for my sexual fantasies, which is what I think, ultimately, AI should be doing.

Greg:

AI already is doing that. It will be doing a lot more of that.

Scott:

Right, that's going to be the funding behind AI and affect right.

Alex:

Just like the internet Exactly.

Scott:

You heard it first on this podcast. You owe me, you owe me, you owe us. But I think adding affect into this equation, to put it computer-like, AI-like, can make an enormous difference. It also opens up a lot of questions and a lot of unknowns. So perhaps one reason that affect is not added into lots of things, philosophically, engineering-wise, even psychologically, is that it probably raises more fears than hopes. I don't know; I don't think it has to be that way. But I'm curious: how would you react to the summary that I just gave?

Greg:

For me, it just leaves a lot of questions, because it's really going into the unknown, and I think it's just going to have to play itself out. There are going to be good aspects and bad aspects to it, and it'll be interesting to see what happens. I know when I was at my last company, our chief technology officer was a huge fan of AI. We talked about all kinds of wonderful things it can do, and we were very positive about it, whereas other people were very nervous about it. So I think a lot of it comes down to how we as people will embrace this and manage it, and that's up for question.

Greg:

We don't know. But I think we can turn it into something very good if we want to; I just don't think everybody does.

Scott:

So, if I heard you correctly, it's important for our interactions with AI to feel good.

Greg:

I think so, I think so.

Alex:

And I think, like you were just talking about, Scott, maybe affect isn't being paid attention to because, just like with AI, it brings up so many questions. If we address it in these other fields, especially psychology, it brings up a lot of questions within ourselves, things we don't want to address or remember. It's much easier to, like, people always used to say, suck it up, or just be logical. Basically, it feels a lot safer.

Scott:

You're being emotional right now.

Alex:

Right, right. It feels much safer to be logical than to think about affect and why we are the way we are.

Scott:

Danger, Will Robinson! Lost in Space. I love that show. But isn't it amazing that AI was paired with affect in one of the very first AI TV shows?

Greg:

But no one really knew what it was at the time.

Scott:

No, but it was friendly.

Alex:

It was friendly and helpful, and it cared.

Scott:

So the AI is saying danger; it's saying an affect word. Interesting. So it was connecting to us and saying, watch out. Watch out for what, I'm not sure. But thank you for watching, and this was a great discussion. I really appreciate you guys.

Alex:

I got the best team, by the way.

Scott:

We're missing a couple of the other team members for various reasons, but they'll be here next time for another discussion, and we'll keep talking about this. So thank you, guys, I really appreciate it, and I hope to see you next time.
