Love and Philosophy Beyond Dichotomy

Nature and Clarity with Mike Levin

December 23, 2023 Andrea Hiott Season 1

Andrea and Michael explore the complex relationships between dichotomies, the environment, robotics, and science. Their conversation delves into philosophical and scientific perspectives on how to understand and work with nature and technology harmoniously, aiming towards a nuanced understanding of ecological and technological terms and their impact on collective and individual well-being. The discussion also touches on personalized medicine, the implications of biological modeling, and the philosophical implications of competition and cooperation within systems. The overarching theme addresses the importance of nurturing ecological and technological relationships that promote clarity and understanding of our world and ourselves. Photo @tuftsu

Support on Patreon and YouTube. Listen anywhere you find podcasts.

#michaellevin #clarity  #nature #robots #andreahiott #loveandphilosophy #beyonddichotomy #medicine #individualizedcare

00:00 Introduction to the Conversation with Mike Levin
00:20 Exploring Dichotomies and Nature's Influence
02:02 The Significance of Art and Nature in Personal Life
04:04 Bridging Ecological and Technological Perspectives
05:40 Understanding Bioelectric Modulations and Natural Systems
07:06 Redefining the Natural vs. Technological Dichotomy
11:45 Envisioning a Healthier Relationship Between Ecology and Technology
16:02 Expanding Perspectives: From Anthrobots to Cognitive Landscapes
23:34 The Role of Language and Perspective in Understanding Biobots
27:52 Exploring the Multifaceted Nature of Biobots
28:02 The Diverse Perspectives on Biobots and Their Cognitive Properties
28:19 Biobots vs. Proto-Organisms: A Cognitive Science Perspective
28:25 Understanding Biobots: Goals, Preferences, and Experiments
28:37 The Machine Metaphor in Medicine and Its Implications
29:12 Perspectives and Empirical Frameworks in Understanding Complexity
29:22 The Role of Frames in Interpreting and Controlling Systems
38:08 Personalized Medicine and the Future of Biomedical Engineering
42:30 The Philosophical Underpinnings of Personalized Medicine
52:50 Concluding Thoughts on Suffering, Evolution, and Personalized Medicine

Mike Levin's Lab: https://drmichaellevin.org

Michael Levin's research channel: / @drmichaellevin

The papers discussed:
https://www.nature.com/articles/s4146...
https://onlinelibrary.wiley.com/doi/1...

Michael Levin at Tufts

Google Scholar

https://loveandphilosophy.com/


Support the Show.

Please rate and review with love.

Transcript


[00:00:00] Hey everyone. This is part three of my conversations with Mike Levin. I am back in Utrecht. Spring is trying to come, although it's rainy, and I'm thinking a lot about these ideas today, so I wanted to just go ahead and post it. For those of you who haven't listened to part one and part two, it's absolutely okay. This one stands alone. Here we go a different route and riff a bit on how we can really understand dichotomies, either/or, while holding them as important tools at the same time. And we talk about nature and the environment, and we talk about robots and what that word really means. And we talk about healing, and how, in a sense, trying to do something or give something or open some path that is healing is really the point of all this: his work and his very scientific, engineering, practical approach, and me and my more philosophical, theoretical approach. And I think [00:01:00] we get to a place that's really nuanced and important, in this interstitial space between all of these words and the way we use them.

Ecological, technological, robots, beings: people get turned on and off by those in different ways. And we try to look at that a bit here and understand why these terms are so meaningful and important and also can be so divisive and difficult. And also just what the big picture is, as I said, which again is maybe best understood as healing: experiencing this earth and one another and all these amazing beings in a way that can nurture us and them, and still be exciting and motivating, while at the same time being practical and taking into account the messiness of it all.

So I won't go on and on about all that, but this is another research conversation with Mike and I hope you enjoy it. 

And I'm really glad you're here. And I hope you're having a beautiful day out there. 

Andrea: Hi, Mike. Nice to see you [00:02:00] again. 

Mike: Yeah. Good to talk to you. 

Andrea: Yeah. I thought this time, instead of talking about specific ideas and papers, I wanted to sort of try to zoom out a little bit and think about this orientation of beyond dichotomy. Um, and to get us going, the question is: how important are art and nature in your life?

Mike: In my personal life?

Andrea: Yeah. Daily life, everyday life, the way that you kind of, yeah. 

Mike: Yeah, nature. Nature is hugely important.

Um, I spend a lot of time outside. Um, I do some nature photography; mostly, for me, it's almost a kind of meditation, because it keeps you just busy enough to be doing something with your hands and so on, but really, um, you know, it kind of lets the mind go. So yeah, I like sunrises a lot.

Um, I'm usually up before sunrise, and by the time the sun is coming up, I'm outside, um, walking around, and so...

Andrea: Really? Yeah, I saw on there, you've had quite some beautiful pictures. So I guess you don't think of yourself as a photographer? But I mean, [00:03:00] or do you? I don't know.

Mike: I don't think of myself as anything specifically.

Really. Um, I just do stuff, and I enjoy it a lot. That's time when, you know, I can do a little bit of thinking before the rest of the day starts up with all the, you know, the meetings and the discussions with people and so on. Um, but you know, yeah, to me, nature is very important.

I do a bunch of gardening. I grow stuff. Um, yeah, and I kayak, you know, things like that. Yeah, nature is huge for me. Um, art, you know, um, I enjoy art. I appreciate it. I'm unable to make any art.

Andrea: With the help of AI, you can.

Mike: Yeah, that's true. That's true. But really, um, yeah, I enjoy art.

I know what I like. I don't have any, um, any training in it or any expertise in it.

Andrea: Yeah, I was just talking with Karen. I think you know Karen, from the Meaning Code.

Mike: Oh, yeah. 

Andrea: Yeah, and she was talking, she was telling me that, you know, we can all do art. I think she started when she was in her 50s or something.

So she sort of scolded me that I shouldn't say I can't do art. But, so I'm sure I [00:04:00] should scold you too. I'm sure you could do art if you had time for it. But I think you are in a way. But the reason I bring this up is because, um, often in these kind of circles, Um, I'm working in a lot of different worlds and I know you are too.

And sometimes it can seem like the ecological and the technological are at odds. Um, I know we don't think they are, but, at the same time, there are a lot of terms used in both those worlds that are very similar, something like regeneration, for example. Um, you know, there's the whole regen movement, where it's all about building the environment to co-evolve with natural systems, for example.

So there's actually a lot of connection, but, in my experience, quite often there's a kind of defense against technology or AI or manipulation, in one sense, um, as not being natural somehow, and so not being ecological. Um, so I guess, just to start, I wonder if you think [00:05:00] about this in a broader sense, because when you're doing things like depolarizing surrounding tissues to, kind of, um, what was it, affect the optic nerve?

Um, you're manipulating a certain ecology, right? Or an environment, or you're changing it. I don't know if manipulation is the right word. Um, and you're also often doing this internal-external kind of stuff. So I guess, like, I wonder, you know, how that strikes you first, and then I want to ask whether you think there's a kind of continuity, the way you often talk about scales and continuums, um, in terms of these bigger groups of people doing work in ecology and technology.

Mike: Yeah, um, a few things there. So one is, when we do these, um, bioelectric kinds of modulations, where we make eyes and limbs and, you know, fix birth defects and stuff like that, the reason it works is not because we're so smart. The reason it works is because we are exploiting an interface that cells [00:06:00] offer to us.

The reason they offer it to us is because they offer it to each other. This is how cells hack each other's behavior normally, during development, during evolution. None of the things we do would work if the signals weren't endogenously meaningful to the cells. The reason that we can make, let's say, an extra eye out of a bunch of gut cells is not because we know everything there is to know about all the micro-components of being an eye; we actually provide very little information. We say: build an eye here. It's a very simple signal. It's the system itself that's smart. And once it gets that signal, there are all the processes that build an eye, and then, crucially, they stop when they're done. And there are other competencies there that I could talk about, but, um, that's why all of this works: precisely because the system is poised to receive inputs like this. I view all of this as on the spectrum of communication and collaboration. I don't think we're micromanaging the cells. I don't think we're forcing the cells. Um, we are, um, making suggestions, literally, because sometimes they take us up on it and sometimes they don't.

And understanding it [00:07:00] that way really helps us drive new discoveries. This is not just fancifully painting these things onto a mechanical system that's well described by mechanical events, because it's not. Um, and so that's an important piece of this, to say that this is natural. Actually, if it wasn't natural, none of this would work. It's absolutely natural, because that's what cells are doing and tissues are doing to each other. Now, going one step deeper: I actually don't like this natural-technological distinction at all.

I mean, I don't like a lot of binary distinctions, but that one in particular I think is pernicious, for a few reasons. Um, one reason is that everything in biology is hacking everything else; that is the natural world. Every cell, every, um, every organ, every parasite, every mate, every predator and prey: they are all trying to find the signals that make the world around them do what they want it to do.

We are also a product of that so-called natural world, you know. So we are also a product of evolution, and the fact that we are now able to do this in a self-reflective, rational way, whereas other [00:08:00] bioengineers, like wasps who make galls out of plant cells and so on, you know, do it at a lower level of cognition... we are part of that same chain. You know, this technology, um, is not, um, in any way alien.

This is exactly what biology does all the time. And we have to, I think, really be clear in what we're saying. In, you know, the pre-scientific days, the common view of this was that there was some sort of a benevolent grand intelligence that arranged things in the best possible way.

And then the message was pretty clear to the scientists: don't mess it up, everything is great, it's how it's supposed to be, don't make it worse. That was the message. So now we have to decide. Um, if we don't believe that, and if we believe that the laws of physics and the kind of evolutionary processes are what gave rise to the natural world, you have to then realize that those processes do not select for any of the things we value.

They don't select [00:09:00] for happiness. They don't select for intelligence per se. They don't optimize for, um, meaning in our lives. None of those things that we value are what evolution optimizes for. And so we have to ask, you know, if you're really into natural things and preserving what's natural, you have to ask yourself: what is privileged about the outcome of a bunch of random mutation and selection events that left us susceptible to bacteria, and to lower back pain, and to, you know, having the optic nerve come out the front of your retina so that you have a big blind spot your brain has to fill in, and all this junk? Like, nobody planned all this stuff so that we would be happy.

It's just where evolution happened to have left us. So I'm really stumped as to how a lot of people who claim they don't believe in, um, some sort of creation story can think that the products of the natural, random meanderings of evolution are privileged over what we could do with our rational minds, which have these values [00:10:00] and can set themselves to rationally pursue things like happiness and, you know, fulfillment of potential and all these kinds of things.

I don't know why people prefer things that come out of a random process to ones that come out of a process that was thoughtful. Now, some people may say that the standard story of physics and evolution isn't actually true and that there is some sort of benevolent design behind it. Then you should come out and say that; if, you know, you think that's what it is, that's fine.

And I'm not arguing one way or the other, but we should make it explicit, because otherwise what you're saying is you'd rather have the outcome of a random walk through fitness space than one that was thoughtfully modified by a human intelligence.

Andrea: I think that's a little different.

I mean, that's like saying... okay, this is something I hear. I mean, I know there is very much an assumption that what is natural is what is good. And by natural we mean what is as it is, which is weird, um, because it also seems to imply that humans aren't natural somehow, or that [00:11:00] this process, whatever is going on here, with the fact that we can now become aware of our own cognition and we can create new tools and things, is somehow not part of that natural flow.

I don't know. I mean, that already feels a little bit complicated. But also, um, I guess what I mean by natural is not the natural way things are so much as looking at, um, environmental concerns, um, the way that the climate is changing. We don't need to say anything more about who's responsible, but the fact is that we're all part of this system, call it natural.

Um, but I don't mean by that that it's right, just that it is what it is, right? And somehow maybe we can find ways for that system to be healthier. It seems to me that with the technology and the ecology, however big a scale you want to put the ecology at, the best relationship is the one where they make each other healthier, not where they're at odds.

I don't know. Do you know what I mean? Like, [00:12:00] um, not just the idea that what's natural is good. Let's bracket that, because that's definitely not it; I think you've explained that well in what you just said. But when we think bigger: you're dealing with these certain scales of environment and, um, machines, or, if you want to call them that, you know, beings, um, and we could kind of go up and look at something like the way we build a city, you know, um, the way we manipulate that environment.

Um, in relation to us as cells or something, I'm being very general, but, um, I do wonder if you see some kind of continuity there, um, I mean, in both directions, right? With the Anthrobots, in the way that you can release individuals to be something different than what they've imagined or been told they are.

You know, like a cell, um, but also in the other way, like with the embryos, where you start to [00:13:00] understand how much collective action is important. So, I mean, I know this is speculation, you know, just, like, trying to see. I just wonder if you see any continuity between these different scales of kind of problem-solving.

Mike: Yeah. I mean, I suspect that some of the same tools are going to be important in understanding the scale-up of cognition at many levels. You know, I mean, obviously, environment is a critical issue. I think one of the issues, one of the problems there with, you know, um, just saying that we're going to bidirectionally make each other healthier is that that needs to be unpacked, because you could imagine an ecosystem that's like a viral paradise, right?

This, you know, this virus is infecting everything, and for the virus, like, you know, it's a great ecosystem. It's very good for the viruses.

Andrea: Right. But we may not enjoy that very much.

Mike: Right. And you can think about, um, what happened in the early history of the earth with the oxygen catastrophe.

When oxygen first showed up, it, you know, made the atmosphere toxic for a lot of other [00:14:00] creatures, right? So that was, you know, a natural event, but nevertheless the ecosystem changed radically. So, you know, I think that, even beyond the kind of more primitive notion of natural that we both, I think, crossed off the list in this conversation,

there's just the other aspect of the ecosystem as a singular thing, where it's obvious what it is and we just need it to be healthy. Like, I think we really can't help, um, having to lay down some values over all of it. Because, you know, I mean, there are people who believe that animals naturally eating each other is a problem and that it is our moral duty to engineer them in some way so that, you know, they don't do that.

I mean, there are some really kind of nasty examples with various parasites and so on. And that may not be in our vision of an optimal ecosystem, or it may. So I don't think there's any consensus or alignment on that among people. And so, you know, I think a lot of people see natural as a way to gloss over having to make the hard choices.

Um, about what [00:15:00] we actually prefer. And you can see that a lot of people focus on what they don't want. So people write about all these horrible things that are going to happen: I don't want this, I don't want a future where we do this and where we do that. It's much more rare to see a positive statement.

What do you want? Because, you know, a hundred, two hundred years from now, are we still walking around with the same susceptibility to viruses and bacteria as we have been, or has that been fixed, or what, like, what's going on? Are animals still, you know, eating each other? Are wasps still eating caterpillars alive, and this kind of thing?

And so we have to make some hard choices. Um, because, you know, in some sense we're the adult in the room now for the first time; like, we actually have the capacity to make these choices. And papering it over with this idea of let's just let the natural world be natural, I don't think that's going to do it.

I think we have to step up and, you know, face these issues.

Andrea: Which is kind of, like, part of just realizing we're responsible. I mean, for me, if we really start to understand that there are [00:16:00] continuums... I'm not sure there's one continuum, I wonder what you think about that. Um, you know, there are not dichotomies, there's a continuum, but there are maybe continuums that we have to deal with, you know, in all directions, multidimensional.

But we can at least start to understand that we are not, like, separated from that. And so we can take responsibility for this stuff and ask the hard questions and deal with it almost as an extension of it, understanding, you know, that we're an extension of it. And then, um, we have to really kind of redesign, in a way, not only things like landscapes, like cognitive landscapes, you know, in terms of law or ethics or whatever, um, but I think also the built environment, right?

Um, and it seems to me that we might do it with something like what you're doing with the Anthrobots. I mean, I really mean thinking kind of wildly. Like, what you're doing [00:17:00] now, twenty years ago people would have just said it's sci-fi. Some people still think it's, you know, somehow on the edge, and, I mean, maybe we can blend all those categories, but what if we thought about buildings like that?

You know, bridges, um, all of that. I mean, people are already starting to do that a bit too. And I just wonder, you know, um, I really mean it literally, if the kind of stuff you're working on could literally be used to build built environments at some point. If you think that...

Mike: Yeah. Yeah. I mean, I'm not an expert in that aspect of it at all.

Although, like, the first author of that Anthrobots paper, Gizem Gumuskaya...

Andrea: Gizem, yeah. I want to talk to her, actually.

Mike: You should.

Andrea: She seems really cool. I mean, she studied landscape design or something, right? I saw in the notes.

Mike: Yeah, she's an architect. And so she thinks about that stuff all the time, you know, growing buildings and whatnot.

So yeah, you could definitely talk to her. But, um, I think that the part that I am able to focus on is this notion that there are other [00:18:00] minds all around us. Um, they're not like human minds, but nevertheless there are minds all around us. We are very bad at noticing them. Um, we are bad at knowing how to relate to them.

We are going to be creating them, we already are, but much more so we're going to be creating them without understanding what we're creating. Part of having a good relationship with ecosystems, with our environment and all of that, is understanding the, at this point, hidden agency in a lot of it.

And that's one of the things about AI that I'm excited about, because I actually think, if done correctly, it can offer tools to help us see those other minds, almost like a, um, universal translator. So we have some ideas, um, some stuff we're working on in the lab, for using AI as a translator tool that can help us, um, take the perspective of other kinds of minds, and also recognize them and relate to them ethically and functionally. And I think that's an exciting [00:19:00] use of that technology, because it's like any other kind of engineering: if you don't understand the competencies of your material, you're not going to have a good, um, a good interaction.

Andrea: Yeah. I think of that too, of how AI might, and not just large language models, but large action models, large world models, all these, you know, kinds of things, or virtual reality combined with that, could actually give us a way of thinking like a bat, in a sense, you know, like experiencing the world from a different position, in a way. Just understanding the continuity of that trajectory, and understanding we don't understand the whole thing, but that we could get a little more experience, um, of that.

Mike: I mean, it can get really weird if you think about it long enough. Like, okay, what's it like to be a bat, fine, but what's it like to be data in a self-sorting array? What's it like to be a number? You know, when we publish these papers, um, both [00:20:00] on biological model systems and computational models, right,

the paper is always from the perspective of, um, a scientist, with a couple of exceptions. So there are a couple of people who have actually done some really neat work that's different, but mostly we just write...

Andrea: Sorry, I have to stop you there. There are people who've written papers from positions that aren't human?

Mike: Well, so for example, um, there's a paper called The Cognitive Domain of a Glider in the Game of Life. It's Randall Beer's work, and it asks: okay, so here's this pattern in the Game of Life. What do you see if you're the glider? What does the world look like?

So there are people who have started this kind of thing, right? Um, but back to the...

Andrea: Weirdness.

Mike: Yeah. Yeah. I mean, we need to get really good at this, really asking what the world looks like from all sorts of weird, um, perspectives that we are not used to. In fact, from things that we don't think, um, can have a perspective.

The problem is that we are so blind to some of that stuff. I'll tell a quick story, you know, one that just popped into my head. [00:21:00] Oh, and this relates to what you said about things that, over time, seemed sci-fi. One of the things that I find quite amazing is that people nowadays, I think, either don't read or don't remember sci-fi, because a lot of the issues that we're dealing with now have all been dealt with, you know, over the last 120 years of science fiction. A lot of the stuff people bring up to me,

these, you know, these kinds of problems, I'm like, that's been gone over so many times. And then the imagination is really stifled a lot by not, um, by not knowing those stories, right. Because...

Andrea: A lot of it actually leads to promising research areas, in a weird way.

But people don't want to say that because it sounds like it shouldn't, but it does. I mean, it has.

Mike: And it stretches your mind. Like, one of the blog posts I did recently was a collection, which a lot of people on Twitter contributed to, of sci-fi stories about love and relationships between radically different alien species.

Andrea: Oh, wow. How did I miss that?

Mike: Yeah, like, so it's a list of, I don't know, 40 books and stories or [00:22:00] something.

Andrea: Okay.

Mike: Like, that's a really good intuition pump, because people say, oh, this machine, I'm a real organism. Like, okay, but when you find an alien, and it's kind of shiny, and it's kind of metallic, and it gives you a poem, like, okay, now you're into the hard stuff. Because you can't just say one or the other because that's how you feel; you have to actually figure out, how do we tell these things, right? And so, you know, who can you love that's not just your own kind, right? I think that's super limiting. So anyway, those kinds of things have been dealt with in that, you know, slightly more right-brain kind of way, right? Yes. Yeah.

Andrea: I've thought of that before, of, um, how certain things that I've read or watched, um, sci-fi things, where you come to feel something like love for a being that's so different from you, that it expands your capacity for love.

I think. Um, weirdly, when I was watching some of your videos of the Anthrobots, I felt this kind of... you know the one where they're in the Petri dish and they're all, like, going [00:23:00] around and around? For me, it felt like the Charlie Brown Christmas thing, you know, when they're playing or something, I don't know. There are a few moments looking at some of those where I have a kind of tender feeling or something for these things.

Mike: Yeah, I agree 100%.

I do too. When I see those things, it's amazing. And, you know, we wrote about it a little bit, with some of my colleagues, about, um, expanding the cone of compassion as, you know, as you try to expand your cognitive light cone. I mean, yeah, I think absolutely, as you learn to recognize other sentient beings in novel guises, it helps you expand that.

Andrea: Is that part of what you're trying to do with your word use sometimes, in your language? Because, or not only you, but here's a place I could probably start with it. Um, the biobot: that's kind of an older term, I gather, not older, but ten years or something, but I wonder when that kind of started.

Because a lot of times people hear robots and they think metal, you know, or, like, some kind of material that's not biological. And, um, of course, you're creating [00:24:00] synthetic biological robots. But let's start with biobot. Where did that come from? And then, so, Anthrobots and Xenobots are kinds of biobots, I guess.

Um, you often talk about them as machines. I just wonder about your language use. I know for you it's not different, but, like, yeah, we'll start with the biobot.

Mike: Yeah. So let's take a step back. Um, part of it is that different, um, different words are more or less shocking in different communities, right?

They subvert the expectations, or they seem perfectly reasonable, to different communities. Right.

Andrea: Good point. That's kind of what I'm trying to bridge here with the...

Mike: Yeah. Um, one fundamental philosophical perspective that I have, which is important for understanding how I use these words, is that I don't think any of these words are claims about the systems themselves.

So some people say: okay, is it a robot, or is it a living organism? And I don't think those words are about the thing itself at all. What those [00:25:00] words indicate is your reference frame and how you plan to interact with the system. They are engineering protocol claims.

Andrea: Okay, that's really interesting. So it's about your orientation, your goal in looking at this.

Mike: It's about a perspective, which is like a collection of, um, commitments: what you're going to ignore, what you're going to use, how you're going to think about this thing.

Andrea: So it's not the thing itself. You should always realize that.

Mike: Right, I don't think any of these things are claims about the system itself. I think when you use these words, what you're saying is: I have taken a particular stance towards it. And let's understand that that stance is going to facilitate certain things I can do, and it's going to block me off from other things I can do. We should be able to move between stances as needed, and we should have a mature science of how you pick more or less appropriate stances.

Andrea: I think that would solve, or that would be so helpful, if we could just learn that.

I'll [00:26:00] skip over it, but I just had to say that.

Mike: Yeah. I think it's critical to understand that this is not something you can assign to a system; it's how you see the system. I'll give you an example, and then we'll talk about the bots.

Andrea: That's something reading helps us do too, by the way: taking on other positions, and understanding that that's what we're doing. But anyway.

Mike: Yeah, so here's a super simple example. You know the paradox of the heap, right? Let's define what a pile is. Well, I take off a grain of sand: is it still a pile? So if you think the word "pile" refers to the sand, you've got a philosophical conundrum, you've got this paradox.

The way out of this paradox is to see that it's not about the sand. If you tell me you've got a pile and you want to move it, what I want to know is: am I bringing a spoon, a shovel, a bulldozer, you know, dynamite? What am I bringing? And that's it. It's not about the sand.

Andrea: So the terms are the spoon, the shovel, whatever. Is that what you mean?

Mike: Yeah. So if you spend all night, um, wrestling with whether it's a heap or not, it's a waste of time. What you should have been thinking is: okay, it's an object that is amenable to which tools? Which frame am I going to use for this?

Andrea: Right. So I guess what I'm asking is: robot, machine, those kinds of words, are those akin to something like the tools that are being used here?

Mike: Precisely. So okay, let's get to the bot.

Andrea: It's not the thing. It's not the heap. It's what you're using to do what you are doing.

Mike: Correct. So when I call something a bot, I'm not making any claims about what this thing really is. In fact, I'm not sure there is any such thing as what it really is.

Andrea: Essence. We can leave essence behind.

Mike: No, not even from its own perspective, right? Because these systems also have their own perspective on what they are.

Andrea: And they're always changing.

Mike: Always changing, and they have multiple ones. Okay. So when I call something a bot, what I'm signaling is that there is a set of interactions we can have that will let you control its form and function. So from the biomedical perspective, the bioengineering perspective, I'm saying: look, this thing is a bot, a biobot, because we will be able to [00:28:00] control its behavior and make it do useful things. When I'm talking to a collective intelligence or cognitive science crowd, I would say, and I haven't done this yet because our work on the cognitive properties of these bots is not yet published.

So I don't make any claims yet, but once it's all peer reviewed, you'll be seeing all this stuff. Then it's a completely different story, because this thing is a proto-organism as well. And why? Because you can take a stance where you say: forget programming it as a biobot.

I want to know: what does it want? What kind of memories can it form? What preferences does it have? What goals does it have? That's a completely different stance that lets you do very different experiments than if you have a bot perspective. But both of those are useful, and we do this all the time in medicine.

If you have an orthopedic surgeon who doesn't believe that your body's a machine, you're in big trouble, all right? Because what he has is chisels and hammers and screws and things like that, and if he doesn't understand that your body's a machine, you're not going to have a good orthopedic outcome. On the other hand, if you have a psychotherapist who thinks you're a machine, you're also not going to have a [00:29:00] good outcome.

Andrea: That's very important, what you just said. I don't think a lot of people understand that nuance.

Mike: But people argue all the time: are we machines? Whole conferences are devoted to the question of whether we are machines.

Mike: And my point is: no, nothing is anything. It's about what you're proposing as a perspective. You're proposing an interaction frame. And then we all get to find out; that's what makes it empirical.

Andrea: Interaction frame, right.

Mike: Once you've proposed it, we all get to find out how well that works out.

So if you have a deep psychological frame for looking at a mechanical clock, what we're going to find out is that you're wasting a lot of time. On the other hand, if you have a frame of rewiring that's appropriate to a mechanical clock and you're applying it to a complex organism, or frankly to a robot, or possibly even to a language model, you may find that you're leaving a lot of stuff on the table that you're not understanding.

Mike: So that's what makes it empirical.

Andrea: That is really important, Michael. Let me say it again and see if this is right. So if you're in the [00:30:00] lab and you're going to do something like what you've done with the Anthrobots, that's programming in a way, so they're bots and they're machines. But when you're just walking around taking photographs early in the morning, you're not thinking of everything around you as a robot or a machine.

Mike: For sure, because in that interaction, I'm not looking to control anything. The thing that the robotics perspective is really good at is facilitating control. So if you want to understand how to control a system and make it do something it doesn't already do, that's a machine frame, that's a robotics frame. That works great for a lot of things, and in fact it should be used more for things like cellular constructs and so on. But there are many other scenarios in which your goal is not to control; your goal is to have a bi-directional interaction.

And with your friends, with your spouse, with whoever, you're not in that frame. The reason you take a [00:31:00] human person with you when you go to Mars for 30 years, rather than a Roomba, is because you want a bi-directional interaction. You want to be vulnerable. You want to benefit from their agency. So on that side of the spectrum, it's a lot less about control and a lot more about bi-directional connection. So there are many times when I think about Xenobots and all of these things, and I don't think about them as biobots at all. What I think about is: what do they want?

What is it like to be a biobot, where you are out of the normal body and a lot of your priors are now being rewritten? And we do experiments on this; for example, we're checking the stress level of these bots in different scenarios. What are they stressed out about? What goals can they pursue? And again, none of this is published; I haven't made any claims on any of that, but those are the kinds of questions. And certainly when I'm outside in nature, or when I'm talking to other beings, that's usually not the frame I'm taking.

But when somebody calls [00:32:00] me and says, I've got a spinal cord injury or a peripheral neuropathy, what are we going to do about this? Then I'm thinking: can I program the Anthrobots to go in and fix that neural scar? Which they seem to be able to do some of.

Andrea: So it's really a mode of action. As you've said, but I think it's important to repeat: you're never characterizing whatever it is that's asking you the question, or that you're looking at in the lab. It's more, for you, a mode of action: how best to problem-solve, or how best to be in that situation if you're not problem solving.

Mike: Yeah, and it's just fundamental humility. How can anybody really think that they can say what something actually is, once and for all? We are all finite beings. We all observe the universe through a very thin keyhole that was shaped by evolution. And we have not just our senses, but a spatiotemporal scale and a cognitive system that were shaped for very particular circumstances.

Do you think that we have the apparatus to once and [00:33:00] for all nail down what something is, in and of itself? I don't. And so what's left to us is to come up with practical, empirically defensible frameworks.

And they can all disagree with each other and that's fine because they will all be useful for different purposes. 

Andrea: Yeah, they don't even have to disagree with each other, if we actually take that seriously and understand that for each trajectory you're on, whatever situation you're in, the situatedness will always demand a slightly different orientation. It has to.

Mike: It will facilitate different perspectives. Um, I want to emphasize one other thing, because some people take this view, in both good and bad directions, as "anything goes", and you can take it too far.

Andrea: to, to like relativism or something.

Mike: Well, some people say: could we think of the sun as a great [00:34:00] intelligence, or the galaxy as a great intelligence? You can make the hypothesis, but then you have to show: what does that get you? So the difference, right, between this and old-school animism, where you just say there's a spirit in every rock: okay, the next question is, what did that buy you? Show me the beneficial interaction it let you have; maybe it did, and maybe it didn't. So I want to be really clear that my view isn't just: you get to paint whatever level of agency on anything you want. My view is: make the hypothesis, make a clear specification of what you think it's going to do for you, and let's all find out whether that worked out for you or not.

Andrea: I think that's why I always try to locate whatever the position or the agent is in a particular landscape, in a particular trajectory. You do that too, in a different way. In the same way that you're expressing now that you don't name the [00:35:00] living system, or pretend to know what it is even though you're studying it, there's always an orientation through which you're coming to discuss something. So making some generalized statement like "the sun is intelligent" doesn't mean a lot. It would have to be: what's the issue we're discussing, what does the intelligence mean, and, as you said, what is it doing?

Mike: You have to make it testable, right? Somebody said to me once: with all these diverse intelligence views, you might as well say that the weather is intelligent too. Well, I wouldn't just say that, but you could make the hypothesis. Have you tried training it? Are you sure that hurricanes don't habituate, or that there isn't some kind of sensitization process? Now, obviously that's very difficult empirically for these large systems. But the thing is, we have a behavioral science where you get to find out what the level of problem-solving is of various things.
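The kind of behavioral test Mike describes can be made concrete with a toy model (my own sketch; the class, parameters, and numbers are invented, and nothing here comes from the lab). Habituation is operationalized as a measurable signature: response decrement under repeated stimulation, with partial recovery after rest.

```python
# Toy illustration of habituation as a testable behavioral signature.
# All names and constants here are hypothetical, chosen for the demo.

class ToySystem:
    def __init__(self, recovery=0.1, fatigue=0.7):
        self.sensitivity = 1.0          # current responsiveness
        self.recovery = recovery        # recovery rate per rest step
        self.fatigue = fatigue          # decrement factor per stimulus

    def stimulate(self):
        """Deliver one stimulus; response shrinks with repetition."""
        r = self.sensitivity
        self.sensitivity *= self.fatigue
        return r

    def rest(self, steps):
        """Partial recovery toward baseline sensitivity over time."""
        for _ in range(steps):
            self.sensitivity += self.recovery * (1.0 - self.sensitivity)

s = ToySystem()
train = [s.stimulate() for _ in range(5)]   # responses decline with repetition
s.rest(20)
probe = s.stimulate()                       # partially recovered after rest
```

The experiment is the point: you specify the signature in advance (decline, then recovery) and then check whether any given system, hurricane or otherwise, actually shows it.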

Mike: And that's the [00:36:00] experiment you should do. A lot of these are outside our technical capabilities, but at least in principle, if you thought the sun was intelligent, you would specify what problem space you think it navigates, you'd make a hypothesis, and you would do a perturbative experiment. You can't do this with observation alone. You would have to do a perturbative experiment and say: oh, look at that, it actually did get from here to there in this problem space, even though I placed a barrier. And we do this all the time with wacky systems. We apply this to very simple things, where you would say: oh, that's certainly a dumb kind of mechanism, it doesn't have any intelligence.

Andrea: You found problem-solving capacities in a slime mold or something? What do you mean?

Mike: Way, way simpler than that. Bubble sort: six lines of code. People have been studying it for decades; everybody thinks they know what it does. But it has novel properties that are not in the algorithm itself at all. It can do things, simple, primitive things, but still things you wouldn't know about unless you looked. And similarly, in [00:37:00] gene regulatory networks we find six different kinds of learning, including Pavlovian conditioning, in very simple pathway models: deterministic, no stochastic effects, even. Already they can learn, and these are super simple systems. So my point is: you have to make a hypothesis, do the experiment, and see what evidence of, for example, problem solving you find.
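The bubble sort point can be made concrete with a minimal sketch (my own illustration, not the analysis from the paper Mike alludes to). Even a tiny sorting loop has global properties that appear nowhere in its code; for instance, the number of swaps it performs always equals the number of out-of-order pairs in the input, a fact that emerges from purely local comparisons.

```python
# Illustrative sketch: a global invariant "hiding" in a local algorithm.

def bubble_sort(xs):
    """Return a sorted copy of xs and the number of adjacent swaps made."""
    xs = list(xs)
    swaps = 0
    for end in range(len(xs) - 1, 0, -1):
        for i in range(end):
            if xs[i] > xs[i + 1]:                  # purely local comparison
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
                swaps += 1
    return xs, swaps

def inversions(xs):
    """Global property of the input: count of out-of-order pairs."""
    return sum(1 for i in range(len(xs))
                 for j in range(i + 1, len(xs)) if xs[i] > xs[j])

data = [5, 1, 4, 2, 8, 0]
sorted_data, n_swaps = bubble_sort(data)
print(sorted_data, n_swaps)  # → [0, 1, 2, 4, 5, 8] 9
```

Each adjacent swap removes exactly one inversion, so the swap count matches the inversion count of the input, a theorem about the algorithm's behavior that is not stated anywhere in its six lines.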

Andrea: Yes, I think that's the engineering approach, and it's crucial. Of course, I'm working in more theoretical and philosophical areas, so I have to be even more careful. But I do think there are ways, for example having studied neuroscience and worked in psychiatry and so on, that you can actually see whether these ideas help people with certain issues. That's why I started with the environment in cities, because I've worked a lot in motoring and how we design our cities.

And you can understand that the way you explain something to people can literally change the openness with which they're willing to participate in a new kind of design, [00:38:00] for example. So I'll just bracket that, because it would be a whole other discussion. But as you were talking, two things came up that I want to talk about before we have to go.

And one is the idea of personalized medicine. But to get to that, I want to start with something that's very striking to me, and can seem dichotomous, about the Anthrobot paper and the embryo paper, the two more recent ones, which I'll link to, because we don't have time to go into what it all is. As I was reading both of them, I was thinking that in one of them, the Anthrobots, it's about something like individual freedom: you're releasing the cell and seeing what it can do when it's not stuck in this inertia, stuck to the expectations of others, so to speak.

And then in the embryos, we're getting this [00:39:00] idea that the one embryo, frog embryos by the way, can't do the task, can't survive by itself. So you get this group-think idea, where it's actually the collective that is the powerful thing. I just want to bring that up, because I think it's very rich: the power of releasing the individual, but the individual also always being part of a wider system. I wonder how you saw it as you were doing the research for those papers.

Mike: Yeah. I think this is fundamental to biology, and it isn't magic. I think we can build systems, we haven't yet, but I think we could build systems with that architecture. In biology, every level has a degree of goal-directedness. They're all trying to solve various problems, and they're all hacking each other, horizontally within a level and up and down.

When we were doing the frog embryo communication work, I was interested in this notion of a hyper-embryo, and we show [00:40:00] that these large groups of embryos have their own transcriptome, their own gene expression, that small groups don't have.

Andrea: Incredible. I mean, both papers, totally incredible, mind-blowing stuff about what's possible, I think.

Mike: Yeah, and this is another example: until you check these things, you don't know.

Andrea: Yeah, and who imagined? I mean, there's probably so much we don't know. I'm sure you've said that.

Mike: Oh, we're definitely just scratching the surface of this. But already, if you have the right mindset, you can do the experiments, and this is why these frames are important: if you have a particular frame, you do or do not do certain experiments. People often say: okay, that was great biology data, but what's with all this philosophy? Get rid of all the philosophy and just do the science. Well, guess what: we wouldn't have done the science. Why did nobody else do that? Right. And that's the thing. Like, with the Xenobots, some people said: oh, well, they're embryonic and they're amphibians, and we know those cells are plastic; this is just a piece of developmental biology in a frog. They're animal caps. People said [00:41:00] you shouldn't call them Xenobots, you should call them animal caps, because that's an old developmental biology term for those cells.

Andrea: Okay. 

Mike: And the thing is: okay, but if you call them animal caps, guess what you will never do? You'll never make Anthrobots, because you see this as a piece of developmental biology in frogs.

Andrea: You've oriented yourself; you've put in place certain parameters that you assume can't be changed.

Mike: Yeah, from that perspective, this is a piece of natural developmental biology in Xenopus, and you wouldn't look for it anywhere else.

Whereas if you're thinking of this as a biorobotic system, or as a plasticity system for the scale-up of collective intelligence, then of course you say: no, all cells should be able to do this. Let's get as far away from embryonic and amphibian as we can. That's why we went human, because the furthest away is an adult human patient. And the way that leads to personalized medicine, which you mentioned, is that these are constructs made from your own body's cells. They're not genetically manipulated, they don't have any viruses, they don't have any weird nanomaterials, [00:42:00] and they don't require immune suppression. If they were to go into your body to fix things, they would not require any immune suppression, because they're your own cells, and these cells share with you all the priors about what health is, what disease is, what inflammation is, what cancer smells like, all of those things.

You don't have to construct all of that the way you would with conventional engineering.

Andrea: It's already embedded. I mean, it's coming from me.

Mike: They already understand you. But we have to learn to take advantage of that. We have to learn to communicate with them through their interfaces.

Andrea: Yeah. I've been reading about personalized medicine for, I feel like, as long as I've been learning, and it feels like the way to go in medicine and psychiatry and everything. But how far do you think we are from having the infrastructure, or the mindset, or whatever? What do you think it takes to get there, and how far away are we from being able to do it? It's speculation, I know, but...

Mike: It depends what you mean, right? Because there are low-hanging fruit, and then there are deeper things.

Andrea: Well, it's already changed a lot. But I guess what I mean is really [00:43:00] being able to orient certain kinds of medicine to the individual, through, yeah, what you're doing in a way: letting the body speak for itself.

Mike: The first-tier aspects of this are already here, or will be within this decade: basically the incredible number of new sensors being developed, and AI to process that information and say, for you, this is the pattern going on with your physiological measurements. That stuff is already here, and it's going to be exploding over the next few years and through the decade. The slightly deeper thing, which also won't take that long, is to go beyond that. There, okay, you're still a clockwork, but now we're measuring a whole lot more parameters.

Which is great, but the next step is: you're not just a clockwork. All of these things have preferences and goals and competencies, which is why a lot of drugs stop working, or don't work at all, or cause side effects.

Andrea: And you're changing too, so I guess that's what I [00:44:00] mean: constantly sampling, the personalized medicine itself being a dynamic system.

Mike: Yeah, it's funny: the more you can take advantage of these cognitive capacities in your cells and tissues, the less you have to worry about the noise. Once the cells have shifted to a new homeodynamic pattern, meaning that their goal state has changed, they're already good at maintaining that goal state when different things happen. That's what life does. And so you don't have to be there saying, oh my God, the pH is slightly off, my drug doesn't work anymore, because that's not the level you're going to be working at.

And that's what's going to be truly personalized medicine: not just personalizing to the state of the body, but actually communicating with the individual, goal-directed set points, getting the buy-in of the cells. Because nowadays, with the exception of [00:45:00] antibiotics and some surgeries, we don't have anything that permanently fixes anything.

Right? You take these drugs, they target the symptoms; if you're lucky, they do a good job. The minute you stop taking them, you're back to where you were, or worse. We have very few things that actually fix anything. And that's because you're micromanaging a very complex goal-directed system, where the cells try to fight you. That's where all these crazy side effects can come from: you haven't changed the set point, so the cells aren't working with you to maintain a new set point. You're just trying to prop it up from the bottom up.
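The set-point idea can be sketched as a toy feedback loop (my own illustration; the "drug", the gain, and the temperature-like numbers are all invented, not a model from the conversation). A homeostatic variable relaxes toward the system's own set point: pushing on the state works only while the push is applied, while rewriting the set point produces a lasting change.

```python
# Toy homeostat: symptom control vs. changing the set point itself.

def relax(x, set_point, drug=0.0, gain=0.2, steps=200):
    """Each step the system corrects a fraction of its error toward
    its own set point; `drug` is a constant external push per step."""
    for _ in range(steps):
        x += gain * (set_point - x) + drug
    return x

SICK = 39.0      # hypothetical disease set point (fever-like)
HEALTHY = 37.0

x_on = relax(SICK, SICK, drug=-0.3)   # drug fights the cells' own goal
x_off = relax(x_on, SICK)             # stop the drug: state snaps back
x_cured = relax(SICK, HEALTHY)        # rewrite the set point: persists
print(round(x_on, 2), round(x_off, 2), round(x_cured, 2))  # → 37.5 39.0 37.0
```

While the drug is applied, the state settles where the push and the cells' corrective feedback balance (37.5 here, not the healthy 37.0); remove it and the system returns to its own goal. Only changing the goal itself gives a fix that the cells then maintain for you.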

Andrea: Yeah, and there would be so much there. What I also hope is that we can connect trajectories between generations too, but that's getting into a whole other thing. When you were discussing this, I was thinking about something I've read in different parts of your work, where you talk about a book that just came out about the struggle inside the body, the cells struggling. It was in German, but I think it's now been published by Harvard. [00:46:00] I can't remember the author's name, Wilhelm something.

Mike: Wilhelm Roux. Yeah, it's actually a very old book; he's a classic embryologist, and it just got translated.

Andrea: Yeah. So I just want to throw it out there: do you think competition has to be about struggle? And if yes, does struggle have to be about suffering? Are these linked? You've used the term "hacking" a lot, and there's this kind of aggressive sense that's very real. But then you're also often talking about working together, collective action, and so on. These again aren't either/ors, but they can often sound like it, so I want to open that up a little. The book is basically about the struggle between different parts of the body, right? That's part of life, or you can describe it better. But I wonder how you see that: is it really about struggle? There's something in your writing where I think you say there's something wonderful about fragmentation and struggle, if you look at it from a certain perspective, and that [00:47:00] doesn't have to mean suffering, right? I don't know; how have you thought of those things, if you've thought of them?

Mike: Yeah, so I think of it as two sides of the same coin. As a persistent agent in this world, whether naturally evolved or engineered, if you're a successful agent, you are going to be homeostatic, meaning there are going to be some states that you prefer in whatever space you live in. And for you to survive, you are going to have an innate stressor when you are far from those states. If you really don't care what state you're in, then you're a photographic film. You're not a being; you're not an observer.

Andrea: Is that connected to movement? When you're talking about that, I think about the difference between something like an Anthrobot and an organoid.

Mike: No. So this is important. I'm glad you brought that up. 

Andrea: Okay. 

Mike: Okay, so here's the thing about organoids. People are too fixated on three-dimensional space. A lot of minds and a lot of agents live in other spaces: they [00:48:00] live in anatomical spaces, transcriptional spaces, physiological, metabolic. All of those things do not look like intelligence to us, because with our visual system and so on, we are fixated on medium-sized objects moving at medium speeds through three-dimensional space.

That's what intelligence looks like to us, and it's a tiny sliver of the worlds that agents live in. So we chose Anthrobots precisely for this reason, because people are not yet comfortable viewing these other spaces. But we are doing all kinds of work, and we're building visualizers, so that you can actually see why your liver and your kidneys, for example, are intelligent agents navigating physiological space. We need a visual.

Andrea: Wonderful.

Andrea: Wonderful. 

Mike: We can't see it ourselves. If you are any kind of a successful agent, that means you have very strong preferences about what's going to happen, and that means that if you are deviated from those, you're going to be unhappy, you're going to have stress, and you're going to have some depth of suffering over it. And of course that can get very [00:49:00] complicated. But I don't think you can get away from that, at least in the embodied state. Now, the issue of cooperation and competition, again, is a matter of different frames that you take to observe these things.

Both of those are really valuable strategies for cognitive glue. They're both strategies that evolution uses to make these agents. So yes, there's a bunch of competition within this; yes, there's a ton of cooperation within this. It's up to you as the observer what frame you're going to take, and what that does for you.

So if you look at everything through a competitive lens, you're going to see certain things and be blind to other things, and vice versa. And I think those are two sides of the same coin, because, look: as an agent like this, signals from others are coming in, and you have a choice to make. You can fight them, you can resist them, because, let's face it, if you go with every signal you get, you're going to very quickly [00:50:00] disappear; you're going to be parasitized, and the environment is constantly trying to kill you, in that sense. So if you're completely open to everything, you're done for. On the other hand, if you are completely invulnerable, if you never change anything based on the signals you get, then you're a rock. You might be persistent, but you're not very interesting.

Mike: So you face this dichotomy: when do I view this interaction as hacking, and when do I view it as a pleasant interaction that I'm benefiting from? You have to learn the difference; it doesn't come pre-labeled. And it's the difference between learning and being trained. Like, am I the boss, and I'm going around learning things? Or wait a minute, is there another agent, and I'm actually being trained when I'm learning?

Andrea: Do you mean that in like a system one, system two sense? A dual-process kind of thing?

Mike: Well, you don't know. As a creature, you're in the environment; you don't know how much agency there is in your environment. You have to infer that, you have to make a model, and you have to say: okay, I live in a dumb mechanical universe, I'm the agent, I'm the [00:51:00] boss, I get to learn whatever I want to learn, right?

Or you might say, wait a minute, I think there's a pattern to what's going on here. I'm a neuron in a brain. There seems to be, yes, there's a local environment, but I, you know, I'm getting some evidence here that there are like global patterns that are in charge of this. There's another agent out there that's in control of my environment.

And that might be training me for what, for who knows what the agenda is, 

Andrea: Right. But of course, often the body is the model. I mean, there's a difference between the body being the model, aligning, shifting its parameters in this very immediate sense, and something like us, where we reflect on this and then choose.

Um, or is there...

Mike: Well, certainly we have an extra capacity to reflect on things. But even very simple things, well below the level of whole cells, build models of their environment. They're not self-reflective models, at least we don't think they are, it's hard to know actually. But it's still the case that anything that persists over any significant amount of time has to have a model of its environment.

Otherwise, yeah, this [00:52:00] would...

Andrea: We don't have time to go into that, because I think it's like a representational debate, where I really think if you're changing your biology, that is modeling the environment, but it's not that you've, like, built a model and you're looking at it.

I think this gets very confusing. But anyway, just continue with the point because that would be a major, 

Mike: I think that's it. We can do that maybe next time. But making a binary line between just being a model and being able to, um, apprehend that you are actively consulting a model, I don't think that's a sharp line either.

And I think it's 

Andrea: It's definitely not. But there is a difference. There are some that are models, modeling as part of their activity, and there are some that are aware of their models to a different degree. I don't think there's a sharp line at all, but I think it gets all lumped together and confused in really fascinating ways. But again, that would have to be another conversation. I want to hear you finish this point. And I also, just before we go, want to think a little bit about, um, death, because you said something at the beginning about, um, [00:53:00] evolution not caring about happiness and so on and so forth.

And I know one of your main motivations is helping people who have a lot of suffering, which is kind of why I brought up this hacking, because people are suffering and we don't say that's natural and right, in the sense that we bracketed at the beginning. Um, yeah,

Mike: I get emails. It drives me up a wall, but I get emails, you know, and mostly these are from young, healthy people who haven't had kids and haven't had any medical issues. They don't understand.

Andrea: Which, I mean, that's just their trajectory. Part of this is them learning about that, we hope, you know, but they send you emails saying, 

Mike: Well, it's a wild mix of emails, but some people say, "Hey, look, this isn't natural. Let's not do that." And meanwhile, you know, they've got a bicycle and they've got some glasses and they've got all this other stuff.

Like, I think 

Andrea: They're just scared. They're just scared that the equilibrium will change, and they don't understand, you know, the suffering. But I just, you know, we have to go. And I do want to ask: if we bracket that whole "everything is natural" idea, let's [00:54:00] leave that aside.

We're part of this living system and we're becoming aware of ourselves in our own models and so forth. Um, do you see a role for, not happiness, I mean, that's too generic of a term, but for at least, like, finding ways to help one another deal with our suffering? Because there are, as you know, some points of suffering where you can't just relax and sort of go with it.

Um, and fighting it doesn't help either, 'cause it makes it worse at that point. So do you see that as part of the motivation of, um, of what we're doing? Of evolution too? 'Cause we're part of it?

Mike: Well, absolutely. So look, I don't have any expertise to be talking about, you know, psychological suffering or society or any of that.

Um, I just know one thing: all of that stuff is very difficult to think about if your embodiment is distracting and terrible. And so basically, the only thing I really know is that there are many beings who, through no [00:55:00] fault of their own, have been thrown into a really suboptimal environment.

You know, the Buddhists have this notion of an inauspicious birth. And I sort of think that right now every birth is an inauspicious birth, because we are born with these limitations that are driven by genetics, by, you know, a stray cosmic ray, by drugs that were around, all kinds of stuff.

And step one, freedom of embodiment, is to fix all that. Then we can turn our attention to, you know, deeper and more profound things, which other people talk about. I don't, you know, that's beyond my pay grade in many ways. But I think that, yeah, absolutely none of those deep and interesting things are going to happen while people are suffering in chronic pain and, you know, with all the limitations that we're born with.

Andrea: Okay. And last question: patterns, are they important to you? I wonder if you have trouble getting out of linear thinking when you're thinking of scales. Do you often think from high to low? Because I find myself still doing this. Or are you able to think multi-dimensionally?

Mike: Wow. I think that's a separate conversation. Um, yeah, we can talk about that at a different time. 

Andrea: All right, [00:56:00] Michael. I hope you have a wonderful day there. Yeah, thanks.

Mike: Thanks, you too. Yeah. Always a good conversation.

Yeah, 

Andrea: it's fun. I'll see you next time. 

Mike: Okay. Bye.