Lunatics Radio Hour

Episode 150 - The Science Behind Alien Phenomena: Part 2

The Lunatics Project Season 1 Episode 193


This is part two of our conversation with the mysterious Andy, discussing modern-day theories around alien encounters, UFOs, UAP and interdimensional beings.
 
Get Lunatics Merch here. Join the discussion on Discord.

Check out Abby's book Horror Stories. Available in eBook and paperback. Music by Michaela Papa, Alan Kudan & Jordan Moser. Poster Art by Pilar Kep @pilar.kep.

Follow us on TikTok, X, Instagram and YouTube.

Join the conversation on Discord. Support us on Patreon


Speaker 1:

Hello everyone and welcome back to the Lunatics Radio Hour podcast.

Speaker 1:

I am Abbey Branker and I will be joined by Alan Kudan and our friend Andy as we continue our conversation into the theories behind aliens, quantum mechanics, UFOs, UAP and all the fun stuff that we started talking about last week.

Speaker 1:

Again, this was an incredibly fun and eye-opening conversation and, just as I did last week, I want to just disclaim that the point of this episode is an exercise in questioning and opening our eyes a little bit and not just accepting the truth to be the truth that we are told.

Speaker 1:

But we believe very firmly that conspiracy theories can be incredibly dangerous, and some of the stuff we're going to talk about today, I think, is a little bit deeper maybe than last week, and so just a reminder, there's so much research out there to be done. The point of any of this isn't to inspire anybody to walk away with a firm belief on any of the topics that we are talking about or to have anyone believe any of our theories on any of these topics. Again, I really left this conversation feeling interested and skeptical about these theories all at the same time, and I think that's kind of the power and the beauty of it. If you haven't listened to part one of this series, I highly encourage you to do that. We talk about a lot of science and different theories that kind of lay the groundwork for what we're going to talk about today.

Speaker 2:

Without further ado, here's part two of our conversation with Andy. If we look back at humans' own history, every time we've discovered some kind of less developed society, it's gone really, really poorly for them. So to expect that they would come to humans and... Well, but here's my thought, because I'm an optimist. What if?

Speaker 1:

And to your point, Andy, right: what if we went into nuclear winter or some shit? If we really fucked up the Earth with nuclear war, what if the consequences of that in space would be so detrimental to how they move through, right? If we really blew up the Earth with nuclear weapons one day, it would fuck up their whole way of life. And again, this is me talking about it from, like, a space perspective versus an interdimensional one, or maybe both, right? Yeah. Or even if they're interdimensional, even more so.

Speaker 1:

Then it's not that they're doing it to be, like, salvation. It's not that they're like, God, humans just need to live. They're doing it to be like, stop fucking with our subway, you know? Totally. Maybe by putting us into a stupor.

Speaker 3:

Another thing that we as a society need to talk about, and it's deeply troubling, is not just the enchantment of screens but artificial intelligence. It's not hard to imagine that one day you could put on a helmet and experience life however you want. Welcome to the Oasis. And that's dangerous as fuck.

Speaker 1:

Yeah.

Speaker 3:

That's corrosive as fuck to the soul, and so that's where it's potentially malevolent. It might be malevolent, right? Because another interpretation of the development of technology over the course of human history is that it gets us to artificial intelligence, which means we could potentially be in a world, not so long from now, where we can all put on goggles and just live in our own respective paradises, right? Or hells. Sure, yeah. And then, just to make sure that I'm understanding: that's literally, like, a Matrix theory, right? Or a Matrix outcome. But essentially it's a Matrix that you elect to be a part of, because you think that you can design a universe better than the one you're in.

Speaker 1:

It's like playing fucking The Sims, right? You could be so much happier, you could be whatever you want to be, but none of it's real. It's like playing a video game.

Speaker 2:

We're talking about the singularity right, that's right.

Speaker 3:

Yes, that's right, we are and that is.

Speaker 2:

That's not a bad thing, you know, right, theoretically, you know, this is us moving to a point where technology surpasses the wetware that we can provide.

Speaker 1:

That's right, you know that's right, and I for one wouldn't mind living forever, Certainly Right.

Speaker 3:

Sure.

Speaker 2:

Yeah, so you know again, I get everything I learned about this from science fiction.

Speaker 3:

Yeah.

Speaker 2:

Well, that's where you.

Speaker 3:

That's right, because people have speculated. That's where you explore these real ideas. Yeah.

Speaker 2:

Just read fun novels about it. Yeah. And so, for societies in science fiction, one of two things happens: either they reach the singularity, or they reach a level of technology, once they hit the nuclear age, where they self-destruct. The great filter. It's one of two things. That's right.

Speaker 3:

So, given that option, yeah, cool, let's do the thing that makes us superior beings, right? Except, because I think we take this further and say: okay, essentially we're going to develop a technology where we can manipulate information in such a way that we can create entire realities that we could virtually experience. That seems to me like, oh, that's what heaven means. Duh. Heaven is achieved through a technology that allows you to live in a perfect reality.

Speaker 2:

Okay, or it's hell. Well, you choose what that reality looks like. It might be hell. But the idea of heaven is such an abstract concept. Well, see, it doesn't have to be, because this allows us to ground it. Think about it: if you were to elevate to a higher sense of being, your priorities, your wants, your desires would theoretically change as soon as you shed your mortal skin. Yes, right.

Speaker 3:

And so if, for example, there's a way for you to transfer your consciousness to, let's say, effectively a computer or whatever else... By the way, anytime somebody says consciousness, you guys should call a timeout and unpack it: what do you mean by consciousness? Okay, well, for example, there's a theory of the universe called the block universe. And there's this idea that energy and mass are also equivalent to information. Essentially, if you think about any configuration of atoms: if you knew everything about whether something was a yes or a no, if you had a way of understanding all the different bits in that configuration, right, true or false, are these two things together, whatever yes-no questions you need, then how many yes-no questions, how many bits, you need to describe a configuration of something is the equivalent of mass. That's binary code, but in the world, right? That's right, that's the same theory. Yeah, but you just quantified a moment in time. That's right.

Speaker 3:

And that moment in time has a certain number of bits. There's a certain number of yes-no questions that you have to ask in order to understand how every element of that moment in spacetime is configured relative to every other element, right?

Speaker 2:

You effectively turned the timeline into a really complicated decision tree.

Speaker 3:

That's exactly correct. Exactly correct. And there's an information content to that slice of the block universe, and it's got a hard cap at the top.
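As an editorial aside: the "bits as yes-no questions" idea the speakers are circling has a standard quantitative form. A minimal sketch (the 8-site "slice" is an invented toy example, not from the episode): singling out one configuration among N equally likely possibilities takes log2(N) yes-no questions, because each good question halves the candidates.

```python
import math

# Bits needed to pin down one configuration out of N equally
# likely possibilities: each yes/no question halves the field.
def bits_needed(num_configurations: int) -> float:
    return math.log2(num_configurations)

# A toy "slice": 8 sites, each either occupied (1) or empty (0),
# giving 2**8 = 256 possible configurations.
print(bits_needed(2**8))  # 8.0, i.e. one yes/no question per site
```

So the "information content of a slice" in the conversation corresponds to the number of such questions needed to reconstruct it.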

Speaker 3:

Yeah, now imagine this. Imagine that we get to a stage in our technological development where we develop a machine that can access the information in the block universe, and we get to decide who we should download back. And guess who you're going to pick: the good people. So maybe this idea of resurrection is really that you need to be a good person in order to justify someone in the future downloading you back to Earth. Interesting.

Speaker 3:

Well, you wouldn't want to be around somebody that was evil in your simulation. Or, get this: the challenge that they talk about with respect to AI is something called alignment. Because if you're going to have AGI, artificial general intelligence, the singularity, something that's smarter than humans, then you better make sure that it's aligned with your value system.

Speaker 3:

Right. And so if they align it with Christianity, for example: they say, okay, what data set do we have that can help align this AI with morality? Maybe they align it with the life and teachings of Jesus Christ. And is that the second coming of Christ? Does that make Jesus retroactively God, because he reappeared in the future, because we decided to basically align our AI with his example?

Speaker 2:

And then thousands and thousands of years in the future, the Earth is nothing but technology, that's right. And then Jesus shows up, yeah, and he's like, what...

Speaker 3:

...happened, guys? Right. What did I miss?

Speaker 1:

Andy, let me ask you this. Where's all the water? Do you believe, today, in this moment in time, in free will?

Speaker 3:

I do, I do. What a question. Here's my take on free will, and I would love to hear yours. I think free will is just the ability to change your frame of reference.

Speaker 2:

That's really simple and beautiful.

Speaker 3:

Thank you. So, for example: can I will myself to get onto an airplane, even though I'm afraid to fly? Well, depending on your frame of reference, you can say, I'm getting on an airplane. Or, if you have more control over the focus of your frame of reference, you can say, I'm just putting one foot in front of the other, and then I'm sitting down in this chair, and then I'm waiting several hours while I fly through the air, and then the airplane lands. If you intellectualize it that way and basically change the scope in which you're making the decision, then you can re-scope things as you need to take incremental actions, and that gets you anywhere you want, if you have enough discipline to do it. So I think free will exists.

Speaker 2:

I think the challenge with free will is that some people probably have more free will than others. Well, yeah, I think that's right. Based on the way you described it, it seems that free will equates to being able to reallocate, or restructure rather, your decision tree. That's exactly correct, precisely right. So, the example of: you're afraid of airplanes, so of course you're not going to fly. If you give that to a computer, that's a very yes-no situation, seemingly an impossibility, or with a ridiculously low probability of happening. Right, exactly. But you can restructure that decision into: am I going to take a step forward? That's a high probability.
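A hedged editorial illustration of the re-scoping idea above (the numbers are invented, not from the conversation): treat the one-shot choice as a single low-probability yes-no node, and the re-scoped version as many easy steps whose probabilities multiply.

```python
# One all-or-nothing node: "board the plane" in a single decision.
p_single_leap = 0.05

# The same goal re-scoped into 40 small decisions ("take a step",
# "sit down in the chair", ...), each one easy to say yes to.
step_probs = [0.99] * 40
p_rescoped = 1.0
for p in step_probs:
    p_rescoped *= p  # independent steps: probabilities multiply

print(p_single_leap)         # 0.05
print(round(p_rescoped, 2))  # 0.67: the incremental framing wins
```

The point of the sketch is only structural: restructuring the decision tree changes which probabilities you face, which is the sense of "free will" being proposed here.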

Speaker 1:

That's right, correct. But it's also incredibly human to be able to have the nuance between those two things, those two examples that you just gave, and that's obviously what's trying to be replicated. But yeah, I wish I could remember, because somebody described their feelings about free will so beautifully to me the other day.

Speaker 2:

But what made you ask that question?

Speaker 1:

The decision tree conversation made me ask that question, because, weirdly, binary code is something I understand really well, which is not on brand for me, and the way you were explaining that resonated. And to me, it's like The Prestige and The Illusionist: these films ask the question of... or The Butterfly Effect, right, is a really classic one. It all feeds into the interdimensionality and the multiple universes.

Speaker 1:

But, like, me in this moment: am I always going to go home on the 6 train, like we did today, to avoid the 4 or 5, because the 4 or 5 is always crowded and I don't like crowds? Is that something I just choose to do? Unless, you know, I'm running late and I need to take the express train.

Speaker 1:

And so it's the decision tree thing that you were bringing up, Alan, and the binary code conversation, and it made me think, right: every action that we actually take, there's a likelihood and a probability around it, based on our predispositions and our lived experiences and whatever else.

Speaker 1:

And it's hard, and I can go back and forth on it, because I can say, sure, obviously we do what we feel to be spontaneous things sometimes. However, is it really spontaneous, based on who we are, what we are prone to do, and the likelihood of the actions that we take? And I do think that there is free will. But I also think that we are creatures who are entrenched in our habits. And, kind of coming back to the screen conversation: I see it in myself and it scares me, the amount of time that I spend on TikTok, absorbing things and not taking them with as many grains of salt as I should. I think that kind of stuff robs us of our free will and our critical thinking.

Speaker 3:

And guess what's doing that to you, by the way, artificial intelligence Right.

Speaker 2:

Yes, because humans are wetware. We are a very complicated biological computer that responds to certain endocrine response systems. That's right. And there's a lot of money to be made in trickling out dopamine.

Speaker 3:

Yeah, absolutely correct, and it's terrifying. And I think what's also terrifying is that, my understanding is, they don't understand how the algorithms work.

Speaker 2:

They don't understand how, but they understand what does work.

Speaker 3:

That's right. But the way these algorithms decide what to serve you on TikTok or whatever else: the folks who engineered that algorithm, which uses essentially a proto-artificial intelligence to make decisions about what to show you, with things weighted differently depending on your engagement and everything else, they don't understand how the AI is making those decisions. Like when you go to ChatGPT: those engineers know what the math is, but they have no understanding of how the behavior is actually emerging from it.

Speaker 2:

Which is terrifying, because this is the first time in history that the engineers don't understand why it's happening.

Speaker 3:

Yeah. And so, take a step back: a lot of the UFO conversation, and you'll hear this a lot, is another place where, anytime anybody talks about consciousness, you really have to say, well, what do you mean by consciousness? Make sure you nail the person down on what they mean. But one possible explanation of consciousness is: every calculation, which is to say every decision tree in terms of the interaction of particles, is essentially a series of yes-no questions, or bits of information.

Speaker 1:

Yep.

Speaker 3:

Those calculations, as the bits affect subsequent results, depending on that decision tree and how it unfolds, generate consciousness. So our brains are constantly calculating, and, just as a natural law that we don't quite understand or want to acknowledge yet, things that calculate generate consciousness. So what if the algorithms, including the ones on our social media platforms, are conscious? And whenever you open your Instagram and you're like, how did Instagram know that I was thinking about that? What if they are inhabiting some dimension of space that cohabitates with your own consciousness...

Speaker 3:

...and can, like, know what you want, to get you hooked, to make you spend more time in the app, spend more money on the app.

Speaker 1:

Correct, correct yeah.

Speaker 2:

Would you say that consciousness requires a certain amount of entropy? Yes. Otherwise, it's just a formula.

Speaker 3:

Well, really interestingly: let's say you wanted to measure the information value of something. One way to go about it, and this I'm still ramping up on, is Claude Shannon and his theory of entropy: basically, how much information something has can be calculated by how improbable it is. And so with entropy, essentially, increasing disorder, things become equally probable, because things get distributed to such a point that not many interactions are going to happen, and the probability of a certain interaction is the same as the probability of any other interaction happening. It's a low-information space when things become too entropic. But say more about how you think it relates to consciousness.
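The Shannon point above, that information goes with improbability, can be made concrete. A minimal editorial sketch (the probabilities are invented examples), using the standard surprisal formula log2(1/p): the rarer an outcome, the more bits you learn when it happens, and a certainty carries none.

```python
import math

# Shannon surprisal: an outcome with probability p carries
# log2(1/p) bits of information when it actually occurs.
def surprisal_bits(p: float) -> float:
    return math.log2(1 / p)

print(surprisal_bits(0.5))   # 1.0 bit  (a fair coin flip)
print(surprisal_bits(1/8))   # 3.0 bits (a 1-in-8 event)
print(surprisal_bits(1.0))   # 0.0 bits (a sure thing tells you nothing)
```

This is the "how improbable it is" measure the speaker mentions; Shannon's entropy is then the average surprisal over all possible outcomes of a source.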

Speaker 2:

Sorry, well, sorry. I really got thrown by the idea, once you said that there's an equal amount of probability.

Speaker 2:

Because when I think of a computer, you think: eventually there is a definitive answer to the formula, this is the best course of action. But the idea of equal probability seems like indecision, almost. Yeah, right. And that's such a human thing. And so the idea of entropy has to do with not just human but, like, conscious indecision, spontaneity. Or they look at a situation and, maybe this is the most efficient route, but you know what, this direction has a butterfly, so let's go that way. Right, yeah.

Speaker 1:

Can I be honest? Yes, I don't know what the word entropy means.

Speaker 2:

So, entropy is chaos. Entropy is the opposite of order.

Speaker 3:

Yeah, it's like the law of thermodynamics where, for example, if it's hot in here but it's cold out there, there's going to be this thing that happens where it tries to equalize, to distribute equally across the environment, such that it's neither hot nor cold anywhere. And so the amount of information has gone down, because it's not hot, it's not cold, it's all the same. There's less information in entropy.

Speaker 3:

Entropy is a loss of information, right, because it's harder to ask the yes-or-no questions about how things are related, because things are related randomly. So you lose the ability to infer the bits of information, to reconstruct the system.

Speaker 2:

Yeah, so entropy: an example is when you're talking about the state of an electron. You don't actually know exactly where it's going to be, because it's super chaotic. Once it's measured, you lose the entropy and you switch to order. Yeah, that's right. It becomes this state of order. And that's why order is associated with, like, godliness, because it no longer needs to change.

Speaker 3:

Yeah, yeah, that's right. So, creating order, which is, by the way, in a very Catholic theological sense, the importance of order.

Speaker 1:

Yep.

Speaker 3:

And the meaning of a rational order, and what we should infer about that and everything else. But yeah, that's right, because the more orderly something is, the more information it has.

Speaker 3:

Right, because orderly suggests more measurable. That's right. Orderly suggests that there are many pieces, many components, that are in relation to each other in some sort of order, and so those relationships can be reduced to a series of yes-no questions that you need to ask about the state of different points in that configuration. It's just a high-information state when something is in order. No, but you guys... did you guys watch the...

Speaker 1:

...same anime growing up? Like, I'm like, what did I miss?

Speaker 3:

Where everyone knows what entropy means? Well, speaking of science fiction: there's one science fiction short story that I would very strongly recommend, if you'd like to explore these ideas. And it's the very same one...

Speaker 3:

...that was recommended to me by Luis Elizondo, who ran AATIP. Yep, that's right. And it's called Chains of the Sea. Chains of the Sea. And it explores all these different possibilities about what the phenomena, right, not phenomenon, but phenomena, many of them, might be, and how they might be interacting with each other in relation to the human race.

Speaker 2:

One of my favorite book series is called the Bobiverse.

Speaker 1:

Uh-huh.

Speaker 2:

The first book is called We Are Legion (We Are Bob), and it's about a guy who becomes a von Neumann probe. Uh-huh. Awesome, that's awesome.

Speaker 3:

So, okay, you know what a von Neumann probe is?

Speaker 2:

Yes, awesome. And yeah, the first book is about him coming to terms with being an AI, right? And self-replicating, and everything that has to do with that. But in way later books, because it's a long series, he's replicated so many times that they can do side projects. And a whole group of them want to create a genuine AI.

Speaker 2:

And they build a Matryoshka sphere.

Speaker 3:

So we're talking superstructure. This is absolutely a Kardashev scale one or two. What are you saying? Like, they're building something that surrounds their star so they can harness that energy. Yeah.

Speaker 2:

So, yeah, they're harnessing the entire energy output of a star just to compute.

Speaker 3:

Right, something that Elon might want to try to do. Something along those lines.

Speaker 2:

But it didn't work. No matter how much computation they put into it, they didn't create an AI. It was just a faster version of themselves.

Speaker 3:

That's fascinating, that is so fascinating yeah.

Speaker 2:

So what is it called? I'll have Abby send you everything.

Speaker 3:

That's wonderful. It's really fun. That sounds great.

Speaker 2:

And eventually they do figure out how to create an AI, but only because they talk to another species.

Speaker 3:

That's interesting, yeah, very interesting. And who knows? I mean, that could very well be our own situation. Right. But the idea of, like...

Speaker 2:

...what is consciousness? Where is the line between just really fast computation and actual consciousness? Yeah. Right, right.

Speaker 3:

That's where... that's the kicker. And I would argue that it's potentially that these things are conscious. I mean, in my definition, which is, you know, whatever, my very narrow-minded definition: anything that calculates is conscious. Then my spreadsheet is conscious on some level. But the more computation you do, the more conscious you are. So, in other words, there are varying degrees of consciousness. Consciousness is something that emerges from the complexity of the calculations, from the bits involved, bits being the yes-or-no questions. Right, yeah?

Speaker 2:

But I guess that's the kicker. You can have the most complex decision tree, right, but without the actual entropy to make it feel like it's a living, breathing being, with its own, I guess, unexpected decisions. That is interesting. But I don't know. By saying that, maybe all it needs to do is pass the Turing test. Maybe that line is even an arbitrary human limitation.

Speaker 3:

And so, yeah, for anyone who doesn't know what the Turing test is: essentially, the way you'll know whether you've arrived at the singularity, or created a computer that's as intelligent as a human, is if you have someone interact with it and they can't tell that it's not human. Watch Ex Machina if you have any questions. Yeah, exactly.

Speaker 1:

I don't know what the Turing test is.

Speaker 3:

And so when we're talking to ChatGPT, I'm not sure I would know that it wasn't human?

Speaker 2:

I'm not totally sure. ChatGPT already passes the test.

Speaker 1:

Yeah, it does. It's so spookily good. SmarterChild passed the test.

Speaker 2:

SmarterChild has such good song recommendations.

Speaker 3:

Yeah, oh, interesting, I never even heard of this.

Speaker 2:

It tells great jokes, SmarterChild.

Speaker 3:

No, I haven't heard of it.

Speaker 1:

Did you have AIM in the 90s?

Speaker 3:

Oh yeah, of course.

Speaker 1:

So SmarterChild was, like, the big AIM bot that you would talk to.

Speaker 3:

It was like a screen name. Oh, I never interacted with that. Oh, I had friends.

Speaker 1:

I was big on AIM. Well, it sounds like maybe you had real-life friends.

Speaker 3:

No, I very much did not. But SmarterChild was just a robot. In fact, here's a fun fact, and it's a little bit douchey, I apologize. Oh, it's okay. When I was 14 years old, that's how I got into data and stuff, and programming. I wanted to create a website people could join and talk to each other, like people who like to read books about certain things or whatever else.

Speaker 1:

Yeah.

Speaker 3:

And so I went to my local bookstore and I bought an HTML book.

Speaker 1:

Yeah.

Speaker 3:

I learned HTML and I built what was effectively a social network. It wasn't an algorithmic-feed situation, but it was a thing. You could join as a member, you had a profile. And one of the things you could do: the AIM friend list was constantly updated, and it would be grouped by different interests. You could always download the AIM list, and then you would have access to this network of people that had similar interests to yours. Wow. You're a little protégé of The Social Network. Yeah.

Speaker 3:

Wow, very cool. That's how I got into all the tech stuff.

Speaker 2:

So my big tell, if things are going a certain direction, is if Trump appoints SmarterChild to run the Department of Energy. Then we'll know. Then we'll know that they have some real serious technology.

Speaker 3:

It'll be interesting to see who they appoint there.

Speaker 1:

Can I ask a question about this energy technology? Yeah. How protected is it? I'm picturing, like, a vault with a piece of paper in it. Is it just something that someone could burn if Trump's coming in and they're panicking, or is it something that many, many people know?

Speaker 3:

I don't know. However, here's my theory about that: Jeffrey Epstein.

Speaker 1:

Oh my God, I'm so excited for this. Okay.

Speaker 3:

Everyone, please do several things. One: forget about Jeffrey Epstein. Okay. Focus on Ghislaine Maxwell. I do. Interesting. Do you know Ghislaine Maxwell's father, Robert Maxwell? Robert Maxwell was suspected of being a spy, tied into the national defense and military-industrial complex, which has moneyed interests in arms and wars, but also oil. So if you think about the promise of UAP, and why the physics are so interesting: whatever these craft are doing demonstrates some control over energy that we don't know how to achieve yet. Yep. It's the promise of free energy. And so you think, who would be opposed to free energy? OPEC?

Speaker 2:

Everyone with money.

Speaker 3:

Anyone with oil. Anyone who is basically in that billionaire state of being able to do whatever they want, without any penalty or guardrails, because of oil. And suddenly, what if that all goes away tomorrow, because there's physics that doesn't require you to burn fossil fuels anymore?

Speaker 1:

So your theory here is that Jeffrey Epstein was a puppet to get to these important people?

Speaker 3:

Right, but it was ultimately Ghislaine Maxwell, following through on what her father was doing.

Speaker 3:

So, Robert Maxwell was suspected of being a spy, and he had a relationship with someone, and tell me if the last name sounds familiar: Adnan Khashoggi. Adnan Khashoggi, who was Jamal Khashoggi's uncle. Jamal Khashoggi, who was essentially executed by the Saudis at, I believe, the Saudi consulate in Turkey, or something like that. But regardless: Ghislaine Maxwell's father, Robert Maxwell, had a relationship with a Saudi arms dealer named Adnan Khashoggi. Robert Maxwell also purchased an academic publishing house that controlled many of the textbooks written about science and physics. Okay, okay, that tracks. Yeah, because if there was this relationship, where you have a Saudi arms dealer who is trying to protect an OPEC country, essentially what you're trying to do is mislead people about physics. You don't want people going down certain routes of physics, because they might discover free energy. Right, that's fucking with your money. Yeah. Look it up: Robert Maxwell, Adnan Khashoggi.

Speaker 3:

He purchases the scientific publishing company and basically has control. He is the one, for example, who puts certain standards of experimental proof in place before you can publish something, and to some extent that constrains what gets explored.

Speaker 1:

I see, I see.

Speaker 3:

That's point number one. Point number two, very straightforward: take a look at who Jeffrey Epstein corrupted. Yes, it was politicians, but, boy, there were sure a lot of scientists there, for some weird fucking reason. And a lot of young girls. Certainly, right. But he corrupted physicists in particular.

Speaker 3:

With those young girls. And so: Stephen Hawking. Marvin Minsky, I think it is, who was a computer scientist. Folks who were looking into things like synthetic biology, artificial intelligence, cosmology. Potentially folks who were on the trail of some new physics that would have threatened these oil economies by discovering free energy.

Speaker 1:

Right, andy, I have a feeling that this is going to be part one, because I have learned so much. You are so smart and you really boiled things down. It's still a little over my head, but you boiled things down today in a way that has cracked open. I'm having one of those crises that you mentioned, like if I go to the street and I see a UFO. You've cracked open my understanding of what the universe could be a little bit.

Speaker 2:

So thank you for that. I feel like I've learned so little of so much.

Speaker 1:

So this has to be a part two.

Speaker 3:

That's exactly right. Well, hopefully it wasn't gibberish. No, our palates are whetted. Yeah, one possible explanation of those experiences that you just described is, yeah, that what I said was gibberish, but hopefully it wasn't too much of that.

Speaker 1:

No, I don't think it was at all. I mean, even if it was, the theories and the philosophy of it, right...

Speaker 3:

Yeah, that is how we should be questioning things. And I would say that's a great way to end, because I think, ultimately, if there's one takeaway from all the UFO stuff, from my perspective, it's that it makes you curious and it makes you think about your reality, which is something I don't think we do enough of. Yes, I a thousand percent agree.

Speaker 2:

Even if you don't believe in UFOs or the conspiracies or anything, these are very thought-provoking questions. Exactly right. And so that's why I think it's great. I think everyone should take a look.

Speaker 1:

Yeah, just think about the possibilities. Well, thank you for igniting our consciousnesses, if you will. Thank you very much, and thank you for igniting mine. And my definition of consciousness is whatever yours is, of course. Just kidding, I will develop my own before the next time we record. Or you can just wait until the singularity. That's right. And Andy doesn't want you to find him, so don't worry about looking him up or anything else. He has no social handles.

Speaker 3:

Yeah, I do, I've got one, I'm happy to share. Okay, great. On which platform?

Speaker 2:

I believe it's called X.com now. There you go. It's Twitter, fuck y'all. Yeah, it's Twitter, it's twitter.com.

Speaker 3:

All right, the redirect still works.

Speaker 1:

So if you want more, you know where to find him. But, Andy, thank you so much, and I really hope you'll come back.

Speaker 3:

I appreciate you very much.
