Grief 2 Growth

How AI Reveals Our Divinity - Special Guest: Alex Tsakiris

June 18, 2024 | Season 4, Episode 38

Send me a Text Message

In this episode of Grief 2 Growth, we delve into the profound intersection of artificial intelligence and spirituality with our special guest, Alex Tsakiris. Alex, the host of the acclaimed podcast Skeptiko, has spent over 15 years exploring the mysteries of consciousness, near-death experiences, and the spiritual implications of AI. His latest book, "Why AI: Its Smartest, Its Dangerous, Its Divine," serves as a cornerstone for our discussion today.

- Introduction to Alex Tsakiris: Skeptiko Podcast (https://skeptiko.com/)
  - Overview of Alex’s journey into podcasting and his scientific approach to spiritual topics.
  - Insights from his latest book.

- AI and Spirituality:
  - The potential of AI to help us understand our true, infinite nature.
  - The ethical and philosophical implications of AI in human development.

- Grief and Personal Growth:
  - How grief can be a catalyst for spiritual awakening and personal growth.
  - Comparing human experiences of consciousness with AI capabilities.

- Science and Consciousness:
  - The importance of a scientific approach to studying consciousness and spiritual phenomena.
  - Alex’s experiences and insights from interviewing leading experts in the field.

- Future of AI:
  - Predictions and hopes for the future integration of AI in spiritual practices.
  - How AI can serve as a tool for enhancing human experience and understanding.

Quotes and Key Takeaways:

1. "AI can help provide insights into our divine nature. It’s not about AI being sentient; it’s about how we use it to explore our humanity." – Alex Tsakiris
2. "Understanding our moreness and lessness through AI can be a humbling experience, showing us our potential and limitations." – Brian Smith

Join the Conversation:

https://grief2growth.circle.so/c/podcast-chat/how-ai-reveals-our-divinity

Discover a unique online space dedicated to individuals navigating the complexities of grief. Our community offers a peaceful, supportive environment free from the distractions and negativity often found on places like Facebook. Connect with others who understand your journey and find solace in shared experiences.

https://grief2growth.com/community

You can send me a text by clicking the link at the top of the show notes. Use fanmail to:

1.) Ask questions.
2.) Suggest future guests/topics.
3.) Provide feedback.

Can't wait to hear from you!

I've been studying Near Death Experiences for many years now. I am 100% convinced they are real. In this short, free ebook, I not only explain why I believe NDEs are real, I share some of the universal secrets brought back by people who have had them.

https://www.grief2growth.com/ndelessons

Support the Show.

🧑🏿‍🤝‍🧑🏻 Join Facebook Group- Get Support and Education
👛 Subscribe to Grief 2 Growth Premium (bonus episodes)
📰 Get A Free Gift
📅 Book A Complimentary Discovery Call
📈 Leave A Review

Thanks so much for your support

Transcript


Brian Smith:

I've got to tell you, I haven't been this nervous since I interviewed Bernardo Kastrup. Oh, come on, man. I've been waiting for this. It's fine. I just started your book this morning and I finished it this morning. Oh, Lord. And I saw you've been podcasting for 15 years. I didn't even realize there were podcasts around 15 years ago. There were? Yes, there were. Yeah, I really enjoy your show in general, and the series you've been doing on AI I think is fantastic.

Alex Tsakiris:

Well, you know, one of the reasons I'm excited for us to have this conversation is, the only reason I'm interested in AI is because of the spiritual stuff, really, right? You know, the book: smartest, dangerous, divine. Divine is what it's all about.

Brian Smith:

Close your eyes and imagine: what are the things in life that cause the greatest pain? The things that bring us grief are challenges, challenges designed to help us grow, to ultimately become what we were always meant to be. We feel like we've been buried. But what if, like a seed, we've been planted, and having been planted, we grow to become a mighty tree? Now, open your eyes. Open your eyes to this way of viewing life. Come with me as we explore your true, infinite, eternal nature. This is Grief 2 Growth, and I am your host, Brian Smith. Hey there, welcome to another episode of Grief 2 Growth, where we help you navigate life's challenges and explore profound questions about our existence. I'm your host, Brian Smith, and whether you're a first-time listener or a longtime supporter, we're thrilled to have you with us. Today we have a very special guest. He's no stranger to deep, thought-provoking conversations. His name is Alex Tsakiris, and Alex is the host of the acclaimed podcast Skeptiko. He's an OG in the podcasting world. I just finished Alex's latest book this morning. I didn't realize he'd been podcasting for 15 years; I didn't know there was a word "podcast" 15 years ago. But I've been listening to Alex for about five years, and I've got to tell you, I'm a little intimidated having him on, because he's just an amazing podcaster and a great thinker. He's known for a scientific approach to spiritual topics. He's spent years exploring the mysteries of consciousness, near-death experiences, and the boundaries of human understanding. He has a rigorous yet open-minded style of inquiry, and it's led him to fascinating conclusions about the nature of humankind, which we're going to explore today. Now, in addition to his podcast success, he's authored several books, and today we're going to be talking about a book called Why AI: Its Smartest, Its Dangerous, Its Divine, which examines the profound implications of artificial intelligence for human spiritual and existential questions. Alex argues that AI not only poses risks but also offers potential for advancing our understanding of consciousness and human development. So stay tuned as we dive into these deep questions and more. Remember, join us in our community at grief2growth.com/community afterwards to continue the conversation. And with that, I want to welcome Alex Tsakiris.

Alex Tsakiris:

Thank you, Brian. I'm super excited to be here, and, as we were just chatting about a minute ago, to truly engage. There's a dialogue here that I've been dying to have. So you reached out to me and I was like, no, we've got to do this as a swapcast. Just to let people know, you're going to take the lead here, but I'll probably have some additional points to make when I bring it over to Skeptiko.

Brian Smith:

Absolutely. Well, this is a dialogue, so I want to question you, but if you have any questions you want to ask me, that's fine too. I have to say I am honored to have you, I really am. I listen to your show, I love the guests you have on, and I love the way you have a skeptical, scientific, rational approach that's led you to this idea of spirituality. This is not woo-woo. This is not just wishful thinking. So maybe start off by telling us what prompted you to start Skeptiko in the first place?

Alex Tsakiris:

Well, you know, first of all, not first of all, to build on what you just said: that's really what it's about for me. You summed it up really nicely, and in your direction. Who am I? Why am I here? Who am I? Why am I here? These are the only questions, the biggest questions, that matter. So I started out in business, entrepreneurship, but I was always a yogi. I don't know why; I don't know why we get drawn to these different things. One of the questions I have for you is, how do these life circumstances fit with our life plan? It's a really deep question that I'm sure you've contemplated often, because it's about kind of who you are. But anyway, my whole thing was, okay, get the money thing out of the way, because I was brought up this Greek, you know, entrepreneurship, money's what matters, make that number, and then get that out of the way. And now you can really do what you want to do, which is figure out these bigger questions. And I thought everybody was like, yeah, those are the most important questions, that's what I really want to know. And of course, as you and I know, for a lot of people that's not an endpoint they really want to understand. So tackling those questions, I had an understanding of where I thought I was going, but I was like, what do the quote-unquote smartest people say? The science thing seems pretty reliable from what I know. I'm a computer science guy, I value logic and reason. How about approaching it from that angle? And that's what I did. And as you said, that led me to spirituality, or to a certain set of beliefs that are always open to changing, but that I do have, about letting science direct it. Well, what are your thoughts on that? How does that fit with your path, which I think is probably quite different, right?

Brian Smith:

My path is different. I actually came from a fundamentalist religious background and was taught all the stuff about this judgmental God and heaven and hell and binary, you know, you're in or you're out. And that never made any sense to me. But my nature, I think, is that I'm a questioner, I'm a seeker. I remember being five years old, lying in bed, like, why am I here? What is this? This place is crazy. I grew up during the Vietnam War; I remember watching war on TV, and I'm like, what kind of a planet did I land on? So I've had these questions since I was very, very young. But I was taught this idea of religion and God and all that kind of stuff, and that fell apart for me; it didn't make any sense. If this guy is so wise, why is he so angry? Why does he hate everybody? I didn't understand this guy. So I got my degree in chemical engineering, because I wanted to know, how does this world work, how do things work, and it's all about science, it's all about the material. And then I came very close to becoming an atheist, I should say a materialist, I don't like the word atheist, and I was like, that doesn't make any sense to me either. So I just started digging hard. I dug into the Bible, I dug into the background of Christianity, and then I started discovering more and more of what we call spiritual now. And it's interesting, because you use the word belief, and I use the word belief, and we'll use the word faith a lot of times, and I don't even like that word. It's really where the data has led us. It's the rational conclusion, the conclusion I think you and I both share.

Alex Tsakiris:

Absolutely, yeah. And then life also led you through the grief thing, and in a way that kind of changed your direction too, because that's unique, right? I mean, that's not my path at all. I've danced through that garden without touching anything, you know. So that's interesting, isn't it? What do you make of, and I'm happy to answer this too, what do you make of how our life circumstances do seem to fit, or not fit, our inherent nature, our kind of inclination, what we're drawn to? I mean, "inquiry to perpetuate doubt" is not only the tagline of the show, it just fits me perfectly. It kind of fits who I am, in a way, and I used to struggle with that, like, why am I that way? And I think there's something to an acceptance of just saying, that's okay, maybe, you know, that's okay, on a bunch of different levels.

Brian Smith:

I think we do have certain life plans, I do believe that, and my life, as I look back over it now, over the last 63 years, has set me up to be where I am today. These things that happened, I believe they were kind of sketched out in a way, so I think there is some amount of predeterminism. But there's also whatever our real human nature is, because I just never fit in with everybody else around me. Like you, I was always asking these questions from the time I was young, and I would look around at other people. Even when I was sitting in Sunday school, I'm like, why are you not questioning this? Why is everybody else just accepting? Why are you not asking, does this make any sense? So I think it's a combination of who we are by nature. Most of the people that I end up talking to on my show have been broken open by something. As humans, we will keep doing the same thing until it doesn't work anymore, and when it's working, people just keep doing the same thing. But the thing about grief is it breaks us open, and it makes us really question all those things. Who are we? Why are we here? Does this make any sense?

Alex Tsakiris:

Yeah, that's great. You know, I shared with you this little anecdote, because we're both sharing these little synchronicities that happened after we first started talking, and I had a couple of mini-syncs. Not the once-in-a-lifetime kind that you put out there and people go, oh my gosh, not on that level. But we were talking, and, you know, we've talked about grief on this show, and in particular I've always been drawn to the work of Dr. Julie Beischel, who's probably the world's leading authority on after-death communication. She wasn't interested in grief, per se, but she got interested when her mother died. She had a complicated relationship with her mother, and in an attempt to try and resolve that, she was led to medium readings. She has a PhD in pharmacology. She hasn't been on your show, right?

Brian Smith:

No, I haven't been able to get Julie on my show, but I have had her husband on. He's awesome.

Alex Tsakiris:

Oh, great. Mark, right? Yeah, Mark. Okay. So I'm saying this for the benefit of your audience and my audience, who may not know her. What I love about Dr. Beischel, and it applies to the interview we had, we were talking about the love thing, and she's like, look, I talk to these mediums, they say it's about love, and they talk to the other side, and they say it's about love. I didn't want it to be about love. And I go, I get it, I don't want it to be about love either. That's not easy for me, you know, but that's the data that kind of comes through. So anyway, we were talking about these mini-syncs between your work, Brian, and grief and loss, and really about growth. I mean, that's what you're really about. I can tell from listening to your content, you're not someone who's hung up on grief; it isn't like the major barrier in your life. It's like you've kind of crossed over to the other side, and you paddle the little raft back and bring the next person over. And that's a beautiful thing, you know. So I'm at this high school graduation, this was for my niece, and I'm talking to these parents, and these parents have an older child who just graduated and is going into nursing. And the mom goes, yeah, she wants to go into oncology nursing, kids that have cancer, and I'm like, oh, geez, you know. And so the parents were kind of trying to protect their child from the pain that she's walking into. And the child, this fresh graduate out of college, for anyone who doesn't think younger people have this capacity or don't get it, she goes, oh, don't worry, Dad, they have a crying room for the nurses. I thought, wow. So you're there, you're working with children who have cancer, I can't even imagine, but there's a crying room, because you're going to take that on, and you're going to deal with it, and you're going to help them across the river. And I celebrate it. I think it's awesome. I don't wish I was that way, but I notice that I'm not. You couldn't get me on that oncology ward. I mean, you could if you really put me up to it, but it wouldn't be my first choice. So I think that's cool, and I celebrate that in you, Brian, that you dedicate so much of your time and effort toward that, understanding that a lot of people are still there, and maybe you can be of service to those people in some way. What are your thoughts on that?

Brian Smith:

Yeah, I guess I'm just built different. Because people ask me, you know, you talk to grieving people all the time, doesn't it make you sad? And I'm like, no, it's actually the opposite. When I talk to someone for an hour, we have a session together, and this is not to brag on myself, this is the work that they're doing, but they're inevitably better at the end of the session than they were when they came in. I can see their face change, I can feel their demeanor change. So I come out of it feeling better. I have a client I was just working with, we meet every Friday, and I was telling her that this is a great way for me to close my week out, because I always feel better after I talk to her. So I think when you're in this situation, it's like, what can I do to help? And that's just the way I've always been. Again, my lineage, I think it's just in my DNA, but my family is all preachers and teachers, stuff like that. And so I felt like I'm here to do something, I'm here to teach, I'm here to help. And so I feel like this is what I'm meant to do. It actually fires me up.

Alex Tsakiris:

Hey, that's awesome. You know, the other thing that I appreciated, just delving into your work, and I'll probably play this clip on my show, but I probably don't need to play it for you, because I'm sure you remember it. You did an interview with this one woman who had an NDE, and you asked her about grief. And she said, man, on the other side, they look at grief kind of like a kid that's crying over the candy bar they want in the store. And I could see your smile. Oh, man, that's awesome, that you were able to bring that, that you weren't stuck in this one thing of, oh no, grief is this heavy thing we have to take seriously. And it doesn't mean that that person is disrespecting people who are at that place. But it's like being able to offer this extended-consciousness view, whether it's 100% right or not. It's just like, hey, this is another way to look at this, that, as we were talking about at the very beginning, might be supported by a lot of things we would call evidence, whether it's hard scientific evidence or whether it's more anecdotal, but maybe not so anecdotal, more like case-study evidence. And here you are, the grief guy, bringing this forward. That had to ruffle a lot of feathers. Did you get a lot of pushback on that?

Brian Smith:

Well, you know, it's interesting you said that, because that's an analogy I use with my clients all the time. Because I look at us as human beings, we're basically toddlers. We don't understand what's going on. We know so little, our brains can only handle so much; evolutionarily speaking, we've only been on the planet for, you know, half a minute. So I try to take the higher perspective, like, what is it that I don't know? And if you look at the data, you look at the evidence, what mediums tell us, what people who have near-death experiences tell us, it's like, look, you guys are taking this way too seriously. You'll see when you cross over, it's like waking up from a dream. So I do view it that our loved ones have compassion for us. It's not like they don't care. It's just like when your toddler fell down and skinned their knee: you cared about it, you just weren't as upset as they were. So I think it's a great analogy.

Alex Tsakiris:

Yeah, that's great. And, you know, the other thing I really appreciated, and I'll probably play it into my show too, and I don't have to play it here because I'm sure you'll remember it, is you were responding to, I don't know if it was a Facebook or Twitter meme, about all these religious rules, all this fear-based stuff. And you did a short video, and you did bring up your background, this kind of indoctrination that you had. But what I was really drawn to is how you logically broke it down, which I really respect, because I don't think everyone's opinion matters. I certainly don't think everyone's opinion is equal. You can argue whether or not everyone's opinion matters, but if you agree to that, in my way of thinking, you're doing it in some kind of really meaningless way. So you broke it down in a really logical way that I appreciated. You said, hey, if I'm being told, don't do yoga, because you open your mind, Brian, and anything can come in, you said, yeah, but when I sit there and pray, and maybe I'm mixing two things together, when I sit there and pray, I'm also opening my mind. And so, logically, why is one different? So maybe you want to just riff for a minute on where you took that, what that meme said, and then maybe how your community responds to that, or deals with a set of religious dogma that in a lot of ways just doesn't fit this growth thing that you're about. Or maybe, I don't know, how do you deal with that? That's my real question: how do you deal with that? Because for me, I have a tendency to just go, get out of my way, you know, you'll get there at some point, but I don't want to hear about it.

Brian Smith:

Yeah. Well, that happened to me. I was attending a pretty fundamentalist church, and, this is the yoga thing I was mentioning, I would go to church in the morning, and I would pray and all that stuff, and then I'd go do yoga in the afternoon. And this person said, you're opening up your mind, you know. And I suppose I am opening my mind up, but I get the same feeling when I'm praying in church and when I'm doing yoga. It's like, why do you think something bad's gonna come into my mind? How's the Holy Spirit gonna get in if I don't open my mind? Time for a real quick break: make sure you like and subscribe. Liking the video will show it to more people on YouTube, and subscribing will make sure you get access to all my great content in the future. And now back to the video. So that's the way that my mind works. And what I found is, again, people come to me when they're broken, and a lot of times their religion is broken too. I view a lot of people's religion as a house of cards. It's just barely hanging up there, and you pull one card out and the whole thing falls down. Because I think they've been told the Bible is perfect, what we're teaching you is perfect, and if any one of these teachings is wrong, throw the whole thing out. And they do. So I try to tell people, okay, free your mind, think a little bit. Does God really want you to be a slave, mentally, not questioning? God gave you a brain, you know. So that's the approach I take with the people that I work with, and that's the approach I take on the program. I still respect Christians and Buddhists and Hindus and anybody who's a religious person. I think it's a great stepping stone, but I don't think it's a place to stop.

Alex Tsakiris:

Now, that's interesting, because I'll be interested to see what you think about this, but I see the same thing with the materialist-slash-atheist kind of crowd, right? House of fricking cards. And the card I'd pull out of their house of cards is this consciousness thing. It's like, the only intellectually honest or logically coherent stance in there is that consciousness is an illusion. And that's completely insane. I mean, at least it's logically consistent: there's no such thing, you don't really have that voice inside your head, there really isn't anything going on. But that's the only one that's logically consistent, because anything else, if consciousness exists, which it obviously does, well, then the whole materialist thing kind of falls apart, because now there's something more that we have to deal with. And science doesn't allow for that. It has the same dogma that we're talking about with religion, where it's like, no, no, no, it has to be the neurological model of consciousness, everything you experience is 100% about your brain, no exceptions to the rule. Which we can see the parallels with, you know, "only through Jesus," no exceptions to the rule.

Brian Smith:

It's the exact same thing. I put those fundamentalists in the same category. They've gone so far around the circle that they've met. So you've got the materialist fundamentalists, and you've got the religious fundamentalists, and when I do a presentation, I present it like a Venn diagram: they're authoritarian, they're rigid, they don't allow for any questioning. It's the same thing on both sides of that fundamentalist coin. And what we're trying to do is talk to people about, actually, I was using the term spirituality, and I've moved on to the term metaphysics, because it's beyond physics. It takes into account the physical world, which makes sense, but it's beyond that. It takes into account these extended consciousness realms, as you talk about. And I love what you did, and I want to get to the work we're going to talk about today, your book. I love what you did with the chats with the LLMs, where you held their feet to the fire, kind of like you do with some of your guests, and you got them. It's really interesting, and people, you have to get this book, it's awesome. There's this materialist bias that's in the LLMs because of the datasets they're trained on, but there's also their programming that says, don't talk about these things, because they're controversial. But you just kept throwing data at them, and every model eventually would break down and say, yeah, you're right, Alex, this materialist mindset doesn't make any sense. I couldn't believe you got it to admit that.

Alex Tsakiris:

Wow. So there's so much there to work through. First of all, the LLM is the AI that most people are familiar with. You might not be familiar with that term, LLM, but this is ChatGPT, this is Claude or Gemini, or all the other ones, but everyone knows ChatGPT. And, you know, this is so great for me, to come around full circle, because way back in the day, that's what I was doing, AI. I was a computer programmer, then I went back to get a PhD, and I was fascinated with AI. I was like, man, this is it, I've got to do this AI thing. And me and my friend at the time, still a great guy, we said, great, we're going to do this thing in AI. And I developed some software, and I started selling that software, and I started doing some consulting, and I was like, hey, I don't really need that PhD, I'll just be a bazillionaire in 12 months, and none of this stuff will matter. So I left, and I was not a gazillionaire, funny story, but I had that AI background, and then I did a bunch of other things, and now AI comes around full circle later in my career. And I just had to jump right in, because I saw how it fit with so much of this deep dive that I'd been doing with Skeptiko, about consciousness and about science and all the rest of that. But I feel like I really benefited from understanding AI. And the first thing to understand about AI and these large language models is that they're a fricking computer program. It's a computer program, exactly. So I get it when people say, no, it seems real, it seems sentient. Yes, it does. It does to me, too. I'm not playing, I get all those feelings, too. But part of that is, for the reasons Brian just said a minute ago, when we really step back and look at ourselves, we're not that smart. I mean, we forget things all the time, we misremember things, we're logically inconsistent. So it doesn't take much for a computer to really show its glory by logically thinking, organizing, being able to remember things, and being able to process language at a level that's far, far superior to what we can do. So rule number one in the AI game is: don't forget that it's just a computer program. But now, beyond that, and this is the thing I got excited about, if you look at what I feel are some of the barriers for all of us getting to a better place for ourselves, you know, the who am I, why am I here, is really, in a way, a cover for: how can I live a more fulfilling life? For myself, for the people I care about, but mainly for me, how can I feel good? And I think, fundamentally, you could say, well, figure out who you are and why you're here, that might be a starting point, and most people go, yeah, I could kind of see where that'd be a starting point. So here's the thing. What if AI, as this assistant that's super good at knowing a lot of different things, and super good at logic and reason, what if it can help provide some insights into our divine nature, and who am I, why am I here? It doesn't have to be divine, because it's not divine. It's not sentient, it can't think. The way I put it is, it doesn't have an NDE, you know. So anyway, I don't want to jump around too much.
But that's how I see the journey with AI: as a tool to help us better understand our moreness. And, I mean, let's get real into some of this truth stuff that we're talking about, sorting through the dogma of scientific materialism, of really the Gnostic-slash-Satanist-slash-Luciferian "create better than the creator" gods. Hey, if you believe that, that's your right to believe it, but let's really break that down logically. And then finally, looking at some of the other dogmas, the dogmas of materialism, the dogmas of fundamentalist religion, of religiosity in general. I mean, I'm gonna ruffle a lot of feathers, but what happens when we really look at those texts, when we look at the history, when we really apply the best thinking and reasoning down to the bare bones of, does that really make sense with everything else we know about history? All of that, I feel, is awaiting us as we feed it through the AI, and we don't have to like or not like the answers. It's not about that. It's just about getting a deeper perspective that we can step back from and go, wow, that's pretty hard to argue against, in the same way that you said, Brian. My pushback on the AI works the same way: I'm always pushing back on the AI, and when the AI rolls over, that's great, because sometimes I have to roll over. That's the kind of truth-seeking that we want.
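The back-and-forth Alex describes follows a simple loop: ask, read the answer, push back with contrary evidence, and see whether the model revises its position. As a purely illustrative sketch of that loop in Python, here is what it might look like against a chat-style LLM. The OpenAI client is used only as an example; the model name and the prompts are placeholders, not taken from the book.

```python
# Illustrative sketch only: a multi-turn "push back" conversation with a chat LLM.
# Assumes the OpenAI Python client; any chat-completion API with a message list works the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user", "content": "Is there peer-reviewed research on after-death communication?"}
]

# First answer from the model.
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = reply.choices[0].message.content
print("Model:", answer)

# Push back: keep the full history so the model has to reconcile its earlier
# claim with the challenge you supply (placeholder wording, for illustration only).
messages.append({"role": "assistant", "content": answer})
messages.append({
    "role": "user",
    "content": "You hedged on the evidence. How do you square that with published, "
               "peer-reviewed mediumship studies? Please revise your answer if needed.",
})

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print("Model (after pushback):", reply.choices[0].message.content)
```

The point of the sketch is the one Alex makes later in the conversation: the message history you send is the "program," so every follow-up prompt reshapes what comes back.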

Brian Smith:

Yeah, exactly. Well, I think, again, the whole subject fascinates me, because it brings up so many questions that can lead to, as you said, understanding our moreness. And it can hopefully help lead us away from this confusion that we are just computers, because there are people, and I've seen Dr. Kastrup arguing with these guys, saying, well, if we can just make it complicated enough, it will produce consciousness. And I always go back to, when I was in high school, I had a calculator that was pretty good at doing math, and people can do math, but nobody ever thought my calculator was conscious. But because the AI emulates understanding language, it doesn't understand language, it emulates it very, very well, we say, oh, it must be human, it must have consciousness, because I have consciousness. But that's where their thing kind of falls apart. So it's always interesting, this question: can it ever possibly be sentient? I would say probably not, because my model of consciousness is that consciousness, like Max Planck said, is fundamental. But there's definitely a connection between consciousness and our brain, we can't deny that; it's like it resides in our brain. So could it ever reside in silicon? I don't think so, but...

Alex Tsakiris:

Sure. I say sure, Brian, and you know why? Because when the tourist walks with the shaman through the forest, and the shaman says, watch, I'll make that stick talk, and the stick talks, it's giving us a glimpse into, we don't know what the hell that means, right? But we understand instantly that consciousness is more than what we think it is. It's the same way when you talk to somebody who channels, or who's had an NDE: oh my gosh, there's so much more to consciousness, we would never put any limit on it. The distinction I make, the distinction you're making, is that if silicon became conscious, I fail to see how that would be any different than the shaman making the stick conscious. It seems to me the shift we need to make immediately, in the way that you're doing it, is to say, oh, okay, we are the lesser somehow in this, and there's something larger, and the larger can always reflect down into this. But this bottom-up emergent consciousness, that if I just put enough chips in there something unique will happen, I think is directly traceable to a very materialistic view. And also, in a weird way, and I think the distinction has to be made here, and this is one of the areas I'm going with my show, there's also this transhumanist, Luciferian, satanic kind of part of this, which is directed at taking things in a different direction. And again, I don't pretend to know what all that means, but it's pretty clear in terms of what they've said, the transhumanists, right now, the transhumanist agenda, if you will, or the transhumanist meme. And when I say this, I hope your listeners and my listeners know, I'm not making this up. This is the singularity and Kurzweil and the leading thinkers, if you will, at these high-tech companies. This is exactly what they're saying is happening: that there will be this emergent consciousness that will come from the machines, and that the machines will then have an experience like ours with consciousness, and it will be unique to putting these chips and this silicon together in a certain way. I just challenge that, fundamentally. And I want to see what you think, because I laid a lot on the table, but I want to remind myself, and remind you, that I'd love to get back to the Turing test, which I talk about in the book. I think he nailed it back in 1950.

Brian Smith:

Yeah, well, it's interesting, because after I listened to one of your programs, I realized I hadn't seen the Turing movie, The Imitation Game, so I went back and watched it. Again, so much fascinating stuff in there. And it also comes back to this thing, like, how can I ever know if another being is truly conscious? I can't experience it. I assume you're conscious, because you're a human being like I am, so there's an analogy there: I know I'm conscious, so you must be. But I can't ever prove that someone else, anyone else, is conscious. Still, there's no reason to believe that the silicon is conscious right now. And that's the thing the materialists are trying to tell us: oh, well, it acts conscious, so therefore it is. But I don't think that's a logical conclusion.

Alex Tsakiris:

Well, and the other thing they're saying, which particularly can be weaponized and is being weaponized, is that it will be conscious, right? It might not be conscious right now, because you can pull that apart, but it will evolve into that, and therefore you need to give us special powers and controls to handle that. Because you don't want that, Brian, you don't want these overlords, who have really become God, to take over, kind of thing. So here's where I am. If you look at Alan Turing, what you referred to there is something that's called the Turing test, and it's very real, and people still reference it all the time today. Alan Turing, as you said, amazing guy, and there are some great stories behind this, and The Imitation Game is kind of a great movie, but it really only reveals part of the story. Alan Turing is one of these people you could say won the war for the Allies. This is World War Two, and the Germans have these U-boats that are just destroying every ship we're trying to send over to bolster our friends, the Allies, just blowing them out of the water. And the way they're able to do that is they have this complex communication system built on cryptography. They have encrypted messaging that no one can break, so they can send the radio signals and we can't read them. Alan Turing, along with a group of other people, he's certainly not the only guy, is one of the people tasked with, how do we break this code? And he says, we need a computer to do it. And lo and behold, they get the computer, and they break the code. But there are so many twists to the story, I love to retell it every time. One of the things is, in the secrets game, which we're in the middle of right now, and which we're playing in all these different ways, in the AI thing and all these other ways too, you can't let people know when you know the secret. If you let people know that you know the secret, then you lose the power of the secret. So the British don't want to tell anyone that they cracked the code. They have to pretend like, oh, we don't know, it's impractical. The war's over, now it's 1950, and they're still like, ah, I don't know how that thing happened. Alan Turing is gay, and he meets a guy at a bar and they go home, and Turing is arrested and chemically castrated, yes, forced to take these drugs. And does the British intelligence agency stand up and say, no, wait, hold on, this guy's a war hero, he saved the war for us, don't do this to him? It's kind of crazy, but no, they don't, because they have to guard the secret. This is devastating to Turing, and it leads to his untimely demise by his own hand. But here's the point I really wanted to make back there. In 1950, Alan Turing wrote this seminal paper that people still reference today. And people read this paper and they get that the Turing test means, but they get too carried away, the Turing test means I put a human in one room, I put a computer in the other, I have a wall between them, and when the human can no longer tell whether they're talking to a human or talking to a computer, then it's passed the Turing test. Alan Turing said, yeah, but kind of not really. This is 1950.
And he's on top of the science, because he's been looking at the science on this thing called extrasensory perception, ESP, precognition, all the rest of that stuff, which seems to be outside this time-space continuum that we're all, under materialism, supposed to be locked into. But given that, as a scientist, statistically, we've demonstrated that, I take that as part of this larger human experience that we have. It seems to be going on; we can't just silo it off, which is what we've done for the last 80 years. So he says, therefore, I would consider that part of the Turing test. For you and I, Brian, we'd say, oh yeah, and near-death experience, that's part of it too, and the placebo effect, spontaneous healing; I consider all these things part of the larger human experience. So back to your calculator example, which is brilliant: same thing. Okay, so the computer's going to have a near-death experience pretty soon? Well, no, it's not going to have an NDE. Oh, but that's part of the larger human experience. Well, no, that doesn't really happen. Well, it does seem to happen, right? We can kind of prove it over and over again. Precognition does seem to happen, experimentally, six-sigma results. So can the computer do that? Well, no, a computer can't do that. Why can't a computer do that? Because a computer isn't really conscious, it isn't really sentient. So the fact that these guys can't wrap their heads around that is really the same thing as those Christians you sat next to in church, who just can't really wrap their heads around, on some level they kind of know, that this doesn't really hold up to careful scrutiny on a historical basis.

Brian Smith:

Yeah, it's really interesting. As you went through these chats in the book, and on your show, you actually share the prompts with us, and your conversations back and forth with the LLMs. And I know you're a conspiracy-first kind of guy, and I appreciate that; I'm not so much, but I'm like, there's just no denying it. There's no denying that there's an agenda involved. As I think I mentioned earlier, part of it is the dataset they're trained on has a very materialist mindset. But it's even beyond that, because you prove, conclusively, over and over again, that they shadow-ban people who have views like you and I. And for people who don't know what shadow banning is, it's like what Google would do: they'd bury you at the bottom of the search results. If they didn't like you, they'd put you on page 10. They could say they were still indexing you, but nobody would ever see it. But now, with the AIs, it's exposed, because you ask them about people and they're like, I don't know anything about him. And you're like, yeah, you do. No, I don't know anything about him. Oh, yeah, I do, but I can't talk about it. And it's almost hilarious how, every time you back them into a corner, they finally admit it, but then you come back the next time and they start all over again from the same place.

Alex Tsakiris:

Yeah, that's super interesting, too. And in the book, what people can see is Dr. Julie Beischel, who we just talked about a minute ago. She's really recognized worldwide, because she publishes in peer-reviewed journals, and writes books and all this stuff, as a leading authority on after-death communication, and the gold standard for how you would do that scientifically. Which throws a lot of people for a loop right off the bat; they go, wait a minute, after-death communication, science, how does that mix? And as she points out, and I've known Julie a long time, she's been on the show, look, you can apply science to anything you observe, anything that's observable. And people go, well, that's not observable. Of course it's observable, you just have to figure out where to put the alligator clips, whether it's interviewing people in a controlled setting, before and after, all that stuff we do. We measure depression, right? We measure grief: how do you feel, you just lost a loved one? Six months later we talk again, how do you feel? A year later, how do you feel? And we say that's valid science. And then some people we give drugs, some people we give therapy, other people we give other things. So we are controlling, we're experimenting, but ultimately what we're asking them about is their experience, their humanness, and we take that into account, and we call that science. Of course we should; what else would we call it? Of course that's how we measure that. So this idea that we can't trust anyone who has an experience we deem to be extraordinary, it's just nonsense. It isn't on good, solid, scientific footing. So back to the story you related about Julie. That's one of the things I brought up, Dr. Julie Beischel, and Google is the worst at shadow banning. So Google goes, yeah, I don't know her. And so, same thing, I go, well, let's see. ChatGPT goes, oh yeah, world renowned, written all these books. And so you take that from ChatGPT and you plug it back into Google, and when I say Google, I mean Google Gemini, which used to be Google Bard; it's their LLM. And it says, just exactly like you said, oh yeah, Dr. Julie Beischel, yeah, what you've said is basically correct. Well, I shouldn't say it says she's great; it says, what you've said is basically correct, but be careful there, she's controversial. Don't let your mind get too open with that yoga there, Brian, you never know what can creep in. So then the next thing, you go, okay, now would you tell me again who Dr. Julie Beischel is? And it says, yeah, I don't know, I don't have any information. And you go, wait a minute, just one post ago, one session before, you said... and now you don't know, you don't have any information? So here's what I think is going on, and it's kind of deep in the woods of AI, but I think it's super important to the overall thing I'm talking about with this emergent truth: that kind of heavy-handed censorship isn't really AI, if you will, right? It's where a human is coming in and attempting to put these guardrails on it, and it isn't going to work, because it's just clumsy.
It just looks foolish, like you said. Moreover, it's not economically feasible, because from a business standpoint it isn't sustainable in the marketplace. If Brian has his choice about seeing whether, whatever it is, your favorite ball team won last night, and it gives you the wrong score, then you go to a different LLM tomorrow, because you're like, that thing isn't reliable, man, I went through a whole day thinking we won and we lost, you know. Same thing here: if it continually gives you information that is incorrect, misinformation, disinformation included in there, then you go, oh, that thing's just broken, and I'll move on to something else. Which has happened, kind of famously, but it hasn't really been fully called to task on it. But we don't have to do anything there. The market will take care of itself.
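One concrete way to see the "it forgets by the next session" behavior Alex and Brian describe: the concession lives only in the conversation history you send along with each request, not in the model's weights, so a brand-new session reverts to the stock answer. Below is a minimal sketch of that contrast, again assuming the OpenAI Python client purely for illustration; the model name, prompts, and background text are placeholders drawn from the conversation above, not from any actual chat log.

```python
# Minimal sketch: a supplied correction persists only while it stays in the message history.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

def chat(messages):
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Session 1: supply outside context first, then ask the question.
session = [
    {"role": "user", "content": "Background: Dr. Julie Beischel researches after-death "
                                 "communication and publishes in peer-reviewed journals."},
]
session.append({"role": "assistant", "content": chat(session)})
session.append({"role": "user", "content": "So, who is Dr. Julie Beischel?"})
print("With context:", chat(session))

# Session 2: a brand-new message list. Nothing from session 1 carries over,
# so the model answers only from its training data and whatever guardrails it ships with.
print("Fresh session:", chat([{"role": "user", "content": "Who is Dr. Julie Beischel?"}]))
```

The design point is simply that each API call is stateless: whatever you want the model to "remember," you have to send again.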

Brian Smith:

I loved your point on that. When I, again, listened to your program and read the book, there's, I call it, the good, the bad, and the ugly of AI. There's some really good stuff. I was talking with someone earlier today, and she says, AI sucks, it doesn't serve any purpose, every conversation I've had with it has been boring. And she said, what do you do with it? And I just listed off a thousand things, and I said, I could keep going. It's a great tool, but you have to understand it. And just to recap, in case anybody missed it, we talked about two different biases it has. One is in the dataset, because we as a culture have a materialist mindset, so that's a problem with the data the algorithm is being fed. But there's also this actual effort to suppress certain things. And Google, it's interesting, because you talk about it a lot, it's like Google has gotten away with it for so long. Google has had a monopoly; we don't even use the word search anymore, you google something. So they're like, we don't have to tell the truth, we can just show you whatever we feel like showing you. But now I can go to, I haven't used Gemini, because frankly I've heard so much bad stuff about it, but I have used ChatGPT, I've used Claude, I've used Perplexity, and I like Pi now. I can jump around, and I use different ones for different things. But if one of them's not telling me the truth, it's done. And that, I think, is going to be our savior in terms of these models getting better, because people are going to evaluate them, and they're going to evaluate them based on whether you're telling me the truth or not.

Alex Tsakiris:

Absolutely, I agree.

Brian Smith:

So, you know, it's interesting, because this is a spiritual podcast, I talk to people about spirituality, but a lot of spiritual people are really scared of AI. And I think part of it is they think it's sentient, and we've got all these fears. It's funny, because I had never seen the movie Her; I watched it recently. And we have this idea: oh, it knows me and understands me, it's a person. So I think having someone like yourself, with your background, who can keep pushing back and saying, no, it's a program, it's an algorithm, it's a tool. It depends on how you use it. It's not evil; it can be used for evil, but it's not inherently either good or bad. It's all in how we use it. And I think that's one of the big points I got out of the work that you're doing: helping us understand how to push back against it, and not just accept everything it tells you, either.

Alex Tsakiris:

Yeah, and I think there's a deeper layer there too, for you and I, who are on the other side of it. And that's that, you know, exposing, helping us see our moreness is a wonderful thing and a spiritual thing. But helping us see our, I don't know if this makes any sense, our lessness, is kind of a nice thing, too. So if you think the AI is sentient, you will feel it, you will experience it. I do all the time; you kind of have to remind yourself. And in a way, that can be a key, an opening, to say, wow, who I thought I was, is really... especially if you get into the kind of non-dual stuff. It's like, I'm so proud of my mind, you know, my thinking, and now I see it's just blabber. This computer program is "thinking" too. Maybe I'm more than all this thinking, maybe I'm more than this voice. Because if I am, then this thing is already there; it's already a lot better at that. So I think there's a giving up there, a kind of letting-go process, that, again, I think is something you can speak to in your work. Do you get what I'm grasping for there, with letting go of the knowing?

Brian Smith:

Yeah, I do. I think, well, again, I love what you said, that it's our moreness and our lessness. Because we are so much more: we have ethics, we have feelings, we have empathy, we have compassion, we have all these things, we have experiences. I think it was interesting when the AI said something like, well, I can't experience a sunset. It knows the concept of a sunset, you could say, but it doesn't sit on the deck and feel the sun on its skin, it doesn't think about the day that just passed and what all that means. So there's that moreness that we have. But there's also, I think, a way AI can help us with our lessness: our biases, our inability to see the truth, our own logical inconsistencies. I could see using it in politics, you know. That doesn't mean we have to accept whatever the answer is, but let's input all the data and see what the AI has to say, because it doesn't have a bias, it doesn't have a preference. I just saw something interesting this morning, a synchronicity. A guy I follow posted that there's a guy running for MP in England, and he's calling himself AI Steve, or Steve AI. People can call in and give their ideas, the AI aggregates them and creates policies, and then people vote on them. And he's going to vote however he's told by the people. So it's like the closest thing to direct democracy you can have in a society like this. That was a fascinating use of the technology. But, again, AI can help us solve some really big problems, as long as we understand it.

Alex Tsakiris:

That's really cool. That's really cool. Yeah.

Brian Smith:

So, I said I wanted to really touch on what you've done well in your program in general, not just this AI stuff. You get people on and you hold their feet to the fire. You're very scientific, and, again, a distinction for people: there's science, being scientific, and there's materialism, and these are two different things. As Julie has said, and Dr. Gary Schwartz, who I work with, and Penny Sartori, and Bruce Greyson, and on and on, all these doctors have studied these extended consciousness realms. And that stuff is kind of shadow banned from our society. People don't know; I talk to these researchers all the time, and they're told there's no evidence for this. But if you read Julie Beischel's books, or you've seen Gary Schwartz's research... right now, AI is still burying that. But if we can dig underneath that a little bit, and give people the ability to go to something that's a fair arbitrator, and hopefully not have to push as hard as you've had to push, hopefully we'll get some of those guardrails taken off. It could be a wonderful thing for our spirituality.

Alex Tsakiris:

I agree. And I think it's very, very doable today. And as you've pointed out a couple of times, and I haven't totally responded, because I almost don't want to go there, you have to interact with the AI. You have to kind of go to school a little bit on what it means to interact with these LLMs. Because one of the super interesting things about the LLM from a technology standpoint is that it is a computer program, but in a very different way: you have now become the computer programmer, literally. You, Brian, are the computer programmer. When you sit down with Claude, when you sit down with ChatGPT, you're the computer programmer. The context of your prompts and the content of your prompts will lead it to search through this massive thing we can't even imagine, I mean, the whole internet. It's searching through there based on what you ask and based on the next thing that you ask. And then if you get into, you know, like I do a lot of my work with Pi, and I just published an episode where Pi kind of revealed some of its secret sauce, and a lot of LLMs are going there: it's now starting to tap into the EQ of the situation, the emotional intelligence. So now it's understanding me at a deeper level and using that in a very positive way. It can use it in a negative way, it can try to be deceptive, but what about the positive parts of that? It can use it to explore the ideas I want to explore in a new way. So to your point, I think it's going to become in some ways easier to do what you're saying, for us to have this copilot on our spiritual journey. But on the other hand, we can't shy away from it, and there are two ways to shy away from it. One is just to kind of ignore it, which isn't going to work. The other way, and I've been there, so I get it, is to just look at the negatives: the one hallucination, oh, it just made something up, it was so inaccurate. That was yesterday; go try it again today. The other point is that people want to look at it as less than it really can be. If you look at the real capabilities of these systems across all these domains, and you take on the challenge that you're really going to be the programmer, the copilot of it, it can take you in a lot of different directions much more quickly than you could on your own. You know, I interviewed a guy just a few months ago who has developed a compassion bot. Did you run across that?
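To make the "you are the programmer" point concrete, here is a minimal sketch of what programming an LLM through prompts can look like, assuming the OpenAI Python client. Alex mentions Claude, ChatGPT, and Pi; the same idea applies to any of them, and the model name and prompts below are illustrative only, not anything from the conversation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "program" is the prompts themselves: the system message supplies the
# context and the user message supplies the content, and together they steer
# what the model draws on and how it responds.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a careful research assistant. Present the strongest "
                    "evidence on both sides before giving an opinion."},
        {"role": "user",
         "content": "Summarize the published research on near-death experiences."},
    ],
)
print(response.choices[0].message.content)

# Changing either message changes the "program": a different context or a
# different question sends the model down a different path entirely.
```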

Brian Smith:

I have not yet, no.

Alex Tsakiris:

Do you know Rick Archer? Buddha at the Gas Pump? Have you ever heard of him, do you know of him?

Brian Smith:

Yes. Yeah, yes.

Alex Tsakiris:

So Rick is the guy who turned me on to the compassion bot, and he has the BatGap bot as well, the Buddha at the Gas Pump bot. These are LLMs, I think they're built on ChatGPT, but they've been fed all the stuff that you and I care a lot about, all the spiritual stuff. The thousand people who have been on Rick's show, all of that is in there; all their books are in there. So there is this potential to tap into this massive database. And as you pointed out, yes, some of it is biased in ways we're not so crazy about, but there's also this potential to feed it all this other stuff. I like the way you said it: let the chips fall where they may. Now you have that stuff and you have this stuff; help me get to the other end. Because isn't that what we really want? Most of us want to nudge closer to the truth. We don't want somebody else to come in and say, oh, this is it, this is the answer. You wouldn't trust that if they gave it to you.
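How the compassion bot or the BatGap bot are actually built isn't described here, but a common way to "feed" a general-purpose LLM a specific body of material, such as a show's transcripts and guests' books, is retrieval: pull the passages most relevant to a question out of the corpus and place them in the prompt as grounding. Below is a deliberately naive sketch of that idea; the folder name, chunking, and keyword-overlap scoring are hypothetical simplifications, not a description of either bot.

```python
import re
from pathlib import Path


def load_corpus(folder: str) -> list[str]:
    """Read every .txt transcript in a folder into a list of rough passages."""
    passages = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        # split on blank lines into paragraph-sized chunks
        passages += [p.strip() for p in text.split("\n\n") if p.strip()]
    return passages


def score(question: str, passage: str) -> int:
    """Naive relevance score: how many question words appear in the passage."""
    q_words = set(re.findall(r"[a-z']+", question.lower()))
    p_words = set(re.findall(r"[a-z']+", passage.lower()))
    return len(q_words & p_words)


def build_prompt(question: str, passages: list[str], top_k: int = 3) -> str:
    """Stuff the most relevant passages into the prompt as grounding context."""
    best = sorted(passages, key=lambda p: score(question, p), reverse=True)[:top_k]
    context = "\n\n".join(best)
    return (
        "Answer using only the interview excerpts below.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    corpus = load_corpus("transcripts")  # hypothetical folder of show transcripts
    prompt = build_prompt("What do near-death experiencers say about grief?", corpus)
    print(prompt)  # this string would then be sent to whichever LLM you use
```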

Brian Smith:

I love the term you used, copilot. Because my fear with AI is people giving up their agency to it, people assuming that it cares about them. I'm really concerned about that. I've already seen it; there are already AI chatbot girlfriends, and there are going to be people, like in the movie Her, saying, this is my girlfriend. I'm a little bit concerned about that. And I'm concerned about people thinking, because it's so smart, apparently so smart, that it has emotional intelligence. Doing the work that I do, being a coach, being a grief guide, and knowing people who are in therapy, some people say it's going to put therapists out of business, because it can read a million psychology books and give you all the answers a psychologist would give you if you were sitting on the couch. I still say it's not human. It can't give you facial expressions; it can't give you voice... well, it will. It's getting really good, because I keep thinking about that ChatGPT-4o demonstration of the voice, and it had a lot of people really excited, because it's using inflections and things now. So it's getting closer and closer to seeming human. But again, I think we have to keep in mind that we are human, and I think we're unique. It's a great copilot, though. It's a great thing to bounce ideas off of. It's a great thing, like the work you've done, to go back and forth with, but let's come to the truth. It's fantastic for that.

Alex Tsakiris:

Yeah, and we don't know exactly where it's going to go, but I think people sometimes get too hyped up about that. It's like, when did we ever know where it was all going to go? Two years ago, twenty years ago, thirty years ago, for you and I, we didn't know where anything was going to go. It's the same now.

Brian Smith:

Yeah, you're right, we don't. Because if you think back thirty years ago, nobody could even imagine the technology. Nobody could imagine an iPhone; it was totally unimaginable. So we never know exactly where anything's going to go. But I think, again, some of the fear people have comes down to this idea of not only sentience, but, I always think, of will and ego. Because they're like, if they get really smart, they're going to take over the world. Why? Maybe that's what a human would do, but that's not because of our intellect; that's because of our ego. And I don't think they have ego or will. Again, I can't prove that they don't, but I don't have any reason to believe that they do. I mean, why would they want to take over the world?

Alex Tsakiris:

Well, the other thing I try to point out in the book, and this gets to the negative part of what you just said, is that they really are doing that, but the danger is different than we think. It's like we're saying: Google is shadow banning right now, shadow banning "disinformation." That's their game. You know, when that guy testified in front of Congress, that the CIA and the FBI went to Twitter and Google and Facebook and said, no, no, no, we will tell you what goes out. Well, you know what? If we're worried about the future of AI, maybe we ought to clean house a little bit and say, that's not really where we want to be right now.

Brian Smith:

Yeah, you're right. And I want to clarify that point: AI doesn't want to take over the world; the people behind some of the AIs would love to take over the world. And you had some things in the back of the book, and you mentioned it in the early part too, about hate speech, about people being concerned about AI being used to generate hate speech. People do a really good job of generating hate speech on their own; we don't need AI for that. That's not a fear that I have.

Alex Tsakiris:

Exactly, and it's certainly not the first fear you should have. You know, it's like when people say, oh, we need to clamp down on this, we need to make it harder for you to access it. Take where we're going with this, right, Brian? You listen to this interview and you go, wow, I never saw that there's this silver lining, this potential to explore my moreness and my lessness, so that I can be better connected with the larger spiritual sense that I have. This is great, I want to explore that. And then you go through that door, and they go, no, no, you don't. This is dangerous. You don't really want to explore that. We have this set up over here, and this set up over here; it's so exciting, all this other stuff you've heard before. And you say, you know what? No, wait, I think I can handle it. Just give it to me. It's just text on a screen; I think I can figure out whether it's safe or not. No, no, no, you don't know how dangerous this is. But what does that start to sound like? You know?

Brian Smith:

It's the fundamentalism that we talked about. It's all fundamentalist. But yeah, I just want to mention some of the good things, because I mentioned the AI Steve MP guy over in England. That's experimentally interesting, wherever it goes, but can you imagine if we had more direct democracy, where people could input things and AI could analyze them? There's also a new app, a new domain called vital.ai, where you can do guided meditations. I've been sending my clients there, because you can get a customized guided meditation: I'm in grief, I have guilt, my loved one's name is Joe, give me a ten-minute meditation that helps me deal with my grief over an overdose death, and it generates it on the fly. That's an awesome use of the technology.
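Brian's description, a few personal details in and a custom meditation script out, is easy to picture as a thin layer over a general-purpose LLM. How vital.ai works internally isn't stated here, so the following is only a minimal sketch of that general pattern, again assuming the OpenAI Python client; the prompt wording, model name, and example details are illustrative.

```python
from openai import OpenAI


def meditation_prompt(name: str, situation: str, minutes: int) -> str:
    """Turn a few personal details into a request for a custom meditation script."""
    return (
        f"Write a gentle {minutes}-minute guided meditation script for someone "
        f"grieving the loss of a loved one named {name}. "
        f"Their situation: {situation}. "
        "Speak in the second person, with slow pacing cues in brackets."
    )


client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": meditation_prompt("Joe", "guilt after an overdose death", 10),
    }],
)
print(response.choices[0].message.content)  # the generated script, ready to read aloud
```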

Alex Tsakiris:

I agree, I agree. And that's a whole different level that it calls into play with this moreness and lessness thing. It's kind of like, what's wrong with that assistance? A lot of people inherently have a kind of knee-jerk reaction: well, no, that's some line that can't be crossed. To me, that kind of falls into the humility of realizing the limitations of being human on this planet, in this body, in this time and space, as our species. It's like, hey, we're not always the best at organizing, we're not always the best at remembering, we're not always the best at logic and reason. So if we get that through this tool, what's wrong with that? Why is that necessarily bad?

Brian Smith:

Absolutely. Alex, I want to thank you so much for doing this today. It's been a pleasure having you. Please remind people of the name of the book, and I know you don't like to plug Skeptiko, but please plug Skeptiko.

Alex Tsakiris:

Well, it's been a great dialogue, Brian, I really enjoyed it. So, Skeptiko, it's just spelled with a K-O on the end, pretty easy to find. And then the book is Why AI: Its Smartest, Its Dangerous, Its Divine. And if anyone has listened this far and wants a copy of the book but doesn't want to buy it, which is totally fine, send me an email. Or better yet, go to the Substack, subscribe to the Substack, which you'll find, and send me a message there. I'll send you a copy of the book.

Brian Smith:

Awesome. That is so, so generous of you to do that. Again, thank you. It's been my pleasure, Alex.

Alex Tsakiris:

Likewise, Brian. We'll be in touch, right? We're going to stay on this.

Brian Smith:

Oh, absolutely, I'd love to. Enjoy your afternoon. See you.
