The Strange Attractor
Exploring Practical Tech Ethics with Mathew Mykta and Nathan Kinch from Tethix | #3
In this whirlwind of an episode, we sat down with Mathew Mykta and Nathan (Nate) Kinch from Tethix.
In this episode, we took a deep dive into Tethix's SMILES framework (the Symbio-Memetic Interwoven Language Embodiment System), which works at the intersection of transdisciplinary theory, reflective practice, and a deep ecological orientation.
We ventured into the realm of Elemental Ethics, a nature-inspired language and framework that helps make ethics engaging, exciting and practical.
We learned how to be responsible firekeepers by embedding and enacting an ETHOS. We explored the terrain of the Tethix Archipelago, a place in time and space where folks can engage in serious play through pixels and code that weave together a more beautiful way of connecting and relating within Gather.Town.
In this playful conversation, we also touched on new forms of governance, how our stories shape our reality and inadvertently alter our ways of being, seeing, doing and relating to technology (and the world at large) and much, much more.
Keen to explore more? Check out Tethix here.
Still Curious? Check out what we're up to:
Or sign up for our newsletter to stay in the loop.
This experimental and emergent podcast will continue adapting and evolving in response to our ever-changing environment and the community we support. If there are any topics you'd like us to cover, folks you'd like us to bring onto the show, or live events you feel would benefit the ecosystem, drop us a line at hello@colabs.com.au.
We're working on and supporting a range of community-led, impact-oriented initiatives spanning conservation, bioremediation, synthetic biology, biomaterials, and systems innovation.
If you have an idea that has the potential to support the thriving of people and the planet, get in contact! We'd love to help you bring your bio-led idea to life.
Otherwise, join our online community of innovators and change-makers via this link.
Hello and welcome to another episode of The Strange Attractor. This week we sat down with Nate Kinch and Mathew Mykta from Tethix. Tethix is a collective of practical sociotechnological ethicists (yes, that is a thing), and they are all about making tech ethics fun and interesting and ensuring that the technology you create and design contributes to human and planetary flourishing. We love these guys, so much so that we actually have Nate on board as our resident tech ethicist. We think a lot of the work they are doing is really exciting and invigorating, and we are actively embedding it into the way we organise ourselves. I do apologise for the somewhat whack intro and for throwing you in the deep end; this is all new for us as we learn how to co-create a podcast. We kick things off with an informal discussion around new governance structures, so apologies for the lack of a proper introduction, but you will figure it out. I trust you.
Speaker 2:Interestingly, there was a political party here in Australia called the Flux Party. They helped set them up and this was like 2014.
Speaker 1:It's like a photo of Heraclitus.
Speaker 2:No, but one of their underlying ways of addressing some of the problems that we currently see in democracy was to use liquid democracy, or fluid democracy.
Speaker 2:So it's essentially like if there's a bill being put before Parliament that's on AI technology and I accept that I just don't have the expertise to understand the implications of this bill. I've got a vote that I can delegate to you because I believe and trust in your judgment on whether this bill should pass, and because of that epistemic difference I'm not confident that I know enough, so I'm going to give the delegate the responsibility of dealing with this problem. It's like how I delegate fixing the pipe in my bathroom to the plumber. I mean, I could do it because I know how to do that stuff, I just don't have the tools to do it. But that's actually another part of the process.
Speaker 2:You might have the theoretical tools to deal with a problem that I don't have the theoretical tools to deal with, so I'm better off delegating my power and agency to you. And if you think about that in a network sense, what Flux was doing was saying we're going to get people into Parliament, and they did. They formed formal political parties in Western Australia, New South Wales and Victoria, and the idea was that the representative that got voted into the Senate didn't have a position of their own. They were obligated to make decisions that reflected their constituents.
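To make the delegation mechanics a little more concrete, here is a minimal, hypothetical sketch of how liquid-democracy tallying could work. The names, data structures and edge-case handling are purely illustrative, not how Flux actually built its platform: each voter either casts a direct vote on a bill or delegates to someone they trust, and delegations are followed until they reach a direct vote.

```python
# A minimal, hypothetical sketch of liquid-democracy tallying.
# Each voter either votes directly on a bill or delegates their vote to
# someone they trust; delegations are followed until a direct vote is found.

from typing import Optional


def resolve_vote(voter: str,
                 direct_votes: dict[str, str],
                 delegations: dict[str, str]) -> Optional[str]:
    """Follow a voter's delegation chain until it reaches a direct vote."""
    seen = set()
    current = voter
    while current not in direct_votes:
        if current in seen or current not in delegations:
            return None  # unresolved: a delegation cycle, or no vote at the end
        seen.add(current)
        current = delegations[current]
    return direct_votes[current]


def tally(voters: list[str],
          direct_votes: dict[str, str],
          delegations: dict[str, str]) -> dict[str, int]:
    """Count one vote per voter, resolved through delegation."""
    counts: dict[str, int] = {}
    for voter in voters:
        choice = resolve_vote(voter, direct_votes, delegations)
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    return counts


# Illustrative example: Sam delegates to Matt on an AI bill;
# Matt votes yes directly, Nate votes no directly.
print(tally(voters=["sam", "matt", "nate"],
            direct_votes={"matt": "yes", "nate": "no"},
            delegations={"sam": "matt"}))  # -> {'yes': 2, 'no': 1}
```

A real platform would of course layer identity verification, per-bill delegation scopes and cycle handling on top of this, but the core idea is just this transitive resolution of trust.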
Speaker 1:I felt like I heard about this through FYA, yeah.
Speaker 2:And what happened recently was they got involuntarily disbanded as a party by a change to the legislation on political parties that the Morrison government put in. And this was something even my mum got, how this was better than what we have in place now.
Speaker 1:Right, but the perverse incentives of the current system were just like well.
Speaker 2:This means that we can't centralise power. So yeah, it was a way to give power back to the people. The whole tagline was, let's upgrade our democracy, and there was a whole technology platform being built around this that involved a digital identity component, so that you could prove you were the person delegating the vote.
Speaker 1:This is the one where you could vote online, right? Like, it'd be like, okay, you have to vote on this by this time, and anyone can log in at any time and do it within that time bracket, and that happens prior to parliament sitting. So it's no different to texting a mate or using Instagram, but it's actually shaping the direction of democracy. You hear these things and you're just like, yeah, absolutely, fuck yes, why do we not have that happening already? And then you hear that it's now been legislated against at the federal level. Yeah, you know.
Speaker 2:So yeah, why do we need some old fogey that has no understanding of the on-the-ground realities, or, in tech, no understanding of the technology, making decisions on behalf of a nation, a set of constituents that they're meant to represent? They're just not competent. These institutions may have worked in the past, when we didn't have a very educated, informed, literate public, and that's why they were probably relevant 150 years ago. Even then I would question whether they were relevant.
Speaker 2:But that's more about the distribution of power and social structures at that point in history. Now we have a literate population. Most people can get even abstract stuff, and can make a judgment like, I don't know what the best form of education for primary-age school children is, and there's a bill going through parliament, so I'm going to delegate it to Nathan because he knows this stuff better than me. Most people will get that. So these types of approaches to upgrading our civic engagement processes are absolutely relevant right now, and it is the next phase of our evolution. It's just, how do you overcome, I guess, the basin of attraction that pulls it all in?
Speaker 1:Yeah, that everything's kind of gravitating around at the moment.
Speaker 2:But from a systems perspective as well, who's set to lose? Absolutely, the politician that's on a fucking super cushy wage with that sweet retirement. Yeah, even once they're gone, they still have all these absolutely ludicrous benefits. They're set to lose as a result. You know, and it's the same with the other set.
Speaker 1:Sorry to cut you off there. It's like they're set to lose on one level of the scale, right, but if you take a wider boundary perspective, they're winning, because we're all winning. So it's having to also make that reframe that I think we were talking about.
Speaker 3:It's like a metaphysical reorientation, 100%. A lot of this stuff is interesting though, because, having advised many different government departments, supported various different processes within the public service, you've got a huge number of smart, motivated, caring individuals. They often enter the public service very early in their adult life, they get hired straight out of university, etc. There is a particular sort of like deep narrative associated with that, so it exogenously contributes to their motivation. It supports the way that they're already thinking when they go into these organizations.
Speaker 3:I think there are many different arguments that can be made about the dynamics of the organising structures, et cetera, but one concrete example: let's say you have an MP talking about a particular issue, right. In many cases, and I'm going to make a generalisation, but I think it's okay for these purposes, the MP basically reads a TLDR summary of a briefing document. There might be hundreds of hours put into a particular briefing document; it depends on the context, time sensitivity, et cetera. Often it's like, shit, we've got to turn this thing around in 24 to 48 hours.
Speaker 1:But you might still have like 100 or 200 hours' worth of work collected behind it.
Speaker 3:Absolutely. And then something very basic gets put together, and the MP, who may in their own right be highly intelligent, knowledgeable in certain ways, et cetera, is engaging in public, substantive and consequential discourse based on three to five dot points. It's preposterous, you know. So at every level of this system that we caringly and compassionately, yet critically, interrogate, the stuff breaks down. So we've got to be somewhere near an inflection point, where the types of stuff that Matt's referring to with Flux, or any of the work around deliberative democracy, the huge amount of really interesting stuff going on in the EU, the stuff in Colorado around quadratic voting, like there's lots of cool experiments being conducted. It feels like we're at an inflection point.
Speaker 1:It definitely does feel like we're pushing up against the boundary of how functional the current operating system is, which is why you're getting this divergence out into all of these experimental domains. And then I think it's just going to be a matter of time until that, I guess, Kuhnian revolution in ways of thinking switches, and then hopefully it switches to one of those and not just a further consolidation of power, where we get that techno-feudalism or just outright collapse, which are both less preferable but potentially more probable at the moment, unless more people are made aware of what we're talking about, or unless more people feel the desire to participate and know that they can. I feel like so much of this just comes down to the fact that so many people are so busy doing whatever they're doing, contributing to society in whatever way, shape or form they do, that they don't have the time to even consider or reflect on new ways of structuring society.
Speaker 3:There's an interesting parallel here. So Matt and I led a body of research.
Speaker 1:By the way, we have started, so... I just can't be bothered with a formal intro like that.
Speaker 3:So we did this body of work a number of years ago with the Consumer Policy Research Centre. It was titled A Day in the Life of Data, and basically we were trying to understand, in this particular case, how does the advertising ecosystem work, how do e-commerce ecosystems work, et cetera, fundamentally at a sociotechnical systems level. One of the areas we explored was how everyday folks interact with this presently, and how they might interact with it if, throughout the process of interacting, they were made aware of what's going on under the hood.
Speaker 3:And there's this really interesting mental model that emerges. People feel powerless: organisations that use my information do what they like, I don't have a choice, they just use it however they want. And that sense of embodied, visceral powerlessness leads to a mental state and an embodied state of apathy. And then the apathy plays out behaviourally in all different types of ways, and we came up with this thing called the agreement bypass bias. The basic gist: everyone does this, we all do this. We're signing up for an app and, let's say, the onboarding process is designed to decrease our time to value, so give us something that we want really, really quickly.
Speaker 3:And as part of that process, baked into a typically hidden-away layer, however you want to think about it, is that in order to get the value that you've come here for, you've got to sign your life away. And it's disconnected, it's all this complex legalese that would actually take you an hour to get through if you took the time, so it's inappropriate in that specific context of use, et cetera. So people just go, screw that, and they move on. That's actually a fairly rational thing to do if you embody the belief that you are completely disempowered, right. And then we observed some other things: through this process, people start almost stepping back and post hoc rationalising the corporate behaviours. So instead of getting angry and going, fuck this, this information tells an intimate story about who I am, where I've come from, what my preferences are, the type of desires I have, the type of person I want to be, how I want to animate my life force in the world, however you want to think about it, they kind of go, oh yeah, but you know, I suppose corporations kind of have to do this, don't they? And it plays out consistently across contexts.
Speaker 3:And I think a similar sort of thing applies more broadly to what we're talking about here, because people don't know that different is possible. In that particular context, we would share the spectrum of possibilities in privacy-enhancing technology, et cetera, and folks would go, wow, so I can have the thing that I want, all of this great utility, and my fundamental rights and freedoms can be protected, respected, et cetera? Of course I want that. I want the positive-sum outcome. And I think in the context of how we organise society, how we relate to one another, how we connect, collaborate and coordinate our efforts towards something purposeful, good, et cetera, out there in the world: if we don't believe that anything else is possible, of course we're going to be apathetic. All we have is that vote every couple of years, and actually that means sweet FA. But that's not all that's possible. So I think that positioning around possibility, and then enabling purposeful exploration towards those possible futures so that we direct our efforts towards that which is preferable, holy shit, if we do that, we can change the game.
Speaker 1:Reminds me of, is it Snowden who talks about narratives? We want more narratives like this, fewer narratives like that. That sounds like a good framing for the sort of thing you're speaking about there, when you're dealing with complexity and this sort of stuff. And if you present that to people it could also make sense. They're like, oh yeah, that does make sense, we probably want more stories where you feel empowered and where you feel like you can access the tech without any of the downside, and I'm sure probably even the people making the tech can understand it from that perspective as well.
Speaker 3:Totally, which brings us, I think, somewhat naturally to where we are today. So we've been focusing for 12, 13-plus years on this idea that when people are building sociotechnologies, any instantiation of... how would you define that?
Speaker 3:Well, you know, sociotechnical theory is a whole field, right. But if we allow ourselves to summarise and simplify: technology is not some sort of abstract artifact that we have full control of, that exists only in service of us, et cetera. Sociotechnical theorists try and look more broadly at the system, and they go, well, technologies are developed in a socio-cultural, socio-political, socio-economic context. They're not values-neutral. We imbue our values in these technologies, and when these technologies go out into the world and we start interacting with them, they change us in really interesting ways. Sometimes we're aware of those changes, other times we're not. Daniel Frager sort of refers to that through his framing of ontological design, and you've got the folks at Concelion's that have done some really interesting work in this space.
Speaker 1:I also co-authored a paper about designing for coexistence, making direct reference to ontological design. Nice. So yeah, very savvy with that space.
Speaker 3:Yeah, so that's the gist of it. It's this recognition that the things we design are an extension of us, in fact they are us in some way, shape or form, that they're part of the living world, and that humanity, technology and the rest of the natural world exist in relationship. Now, you could argue that we exist in an unharmonious relationship today, and that one of the teleologies, the purposes, we probably want to orient ourselves towards is creating contexts and conditions where humanity, technology and the rest of the natural world, of which all of that is one, you know, we are nature, we're not just of nature, we are nature, can exist in a more symbiotic relationship. 100%.
Speaker 1:Well, that was the exact premise of, yeah, I guess, my section of the paper. There were like three key points. The whole concept of the paper was very transdisciplinary, so the co-authors were Oli Kosftis from RMIT and then Nina Williams from ANU, I think. And yeah, the premise was, let's bring people together from different disciplines and places and explore what it means to design for coexistence. And from our context it was actually, which is a bit meta, making reference to the fact that we were exploring at the time what it would be like to have a philosopher in residence, which we now have, hey, which is great.
Speaker 1:But yeah, the premise that I was drawing on was, when we're looking at designing and creating technology, it's not human-centred design. That's part of the problem. Still useful, don't get me wrong, it's a wonderful thing, but it's a narrow lens that needs expanding. We take an ecological design thinking approach. So it's not only, is it desirable, viable, feasible, but is it ecologically sound and sustainable, does this thing leave the world better off, does this thing abide by the natural patterns and principles embedded in, I guess, a living systems worldview? And that fundamentally shifts whether or not you should bring a technology into this world. So it is bringing in that ethical framework, but bringing it in grounded in a biophysical reality and grounded in a relational approach based on how living organisms, how nature, functions, which I think is actually somewhat akin to how you both communicate as well. I believe you also use a lot of living systems metaphors and language.
Speaker 2:Yeah, and there's, I guess, an important part of this, and just before I dive into that, the different scales. When you're designing, thinking about, okay, well, designing for the human, and taking this human-centred design approach, I'd caveat this with the fact that the way most organisations use human-centred design is actually business-centred design. It's not actually about the human. Put the human at the centre and exploit them, let's extract value from them as best as possible, but we want to deliver value back to them.
Speaker 3:It might be worth just sitting with that for a moment, because we've done many presentations within organisational settings, guest lectures, all this different type of stuff, where we explicitly call out the sort of oxymoronic nature of that framing, and it's deeply uncomfortable for the audiences at first.
Speaker 3:When they step back and sit with it and be with it, it's like shit. That is actually what we're doing. We operate in service of the corporation, which kind of has one mandate, which is to maximize shareholder value, which is kind of a weird thing for a designer, because a lot of these folks are deeply socially and ecologically oriented, like they look at design as this way of seeing the world and interacting with the world, so that you know our contributions, our efforts, can do some type of good. And yeah, just like sitting with that realization is really powerful. Sorry, mate.
Speaker 2:Yeah, I mean, when you give the time and you create the emotional and cognitive space in a social setting, like in a workshop, sitting down with people, yeah, they're in an organisation, but when you can get to those deep levels of recognition it's like, yeah, shit, this is an aphorism, it's not what we actually do. We're not actually designing for people, designing for their needs and empathising with them. Those are quite powerful realisations to be able to get to, but it takes a bit of time.
Speaker 2:It takes time to actually work through that in conversation and with examples, and sometimes there's a reaction that comes from not being willing to accept that we're doing this. That's a defence mechanism, and it's wilful, mate, because their jobs rely on it. They get a wage, they have to serve the higher-order value of shareholder primacy. And referring back to Naomi Klein's work from the nineties, corporations are just psychopaths. They've been given legal rights as individuals, and in most, at least Western, jurisdictions under law, that's basically what they have: an abstract institution that's been given the same rights that an individual has under law, and it's operating with minimal accountability.
Speaker 2:Yeah, with minimal accountability.
Speaker 3:Consequences if your everyday human does something significantly wrong.
Speaker 1:Oh, you go to jail, but an organisation's not going to jail.
Speaker 2:Yeah, you know, so we've got these kinds of paradoxes in the way that we think about this. But coming back, I guess, to the living systems approach, and trying to help people develop an understanding that, as Nathan was saying, we are nature. It's difficult, because most people in Western countries, like here in Australia, have been educated through a model of education that's about dualism, about separation. It kind of evolved out of the Western Enlightenment period, and these forms of education really started in the industrial period to ensure that people had basic literacy and numeracy skills.
Speaker 3:Well, which is born out of Prussia, which is basically, be smart enough to work in a factory and not smart enough to question it.
Speaker 2:Yeah, walk in front of a bullet when I tell you to, be able to read instructions, operate machinery and things like that. So you can see that it makes sense when you reflect on it, and most people will tend to get it, but some people don't, purely because of their worldviews, their ways of understanding, religious beliefs, cultural upbringing, which shape the way people might be able to comprehend, acknowledge and come to a moment of realisation, like, oh yeah, this feels like a truth that's deep inside me. But we kind of look at this as going, okay, well, firstly, from a biomimicry perspective, what are the patterns that we observe in nature, in, for want of a better term, nature's design, whether there's a designer or not is maybe a discussion we could get into after that, that help us understand how we can do things better?
Speaker 2:And if you look at relational aspects in natural biological systems, everything lives in relation to everything else. Each thing is a whole in itself and a part of a larger whole, and that's how it's evolved. This is hundreds of millions of years of evolution that have led to those patterns being optimal for adaptability. We can learn from that, but also, in the process, use that understanding to design the approaches we're using in our company and the building of our technology, while drawing attention to it through language, using nature-based metaphors and things that are more organic, and moving away from the mechanistic views and language that are probably much more common in the business world than most people consciously recognise until you actually draw attention to it. Take your classic sport metaphors.
Speaker 2:Let's just get the ball rolling. Or, you know, examples like, data is the new oil, right? That's become this meme over the past 15 years, and if you think about that metaphor, there's an organic component to it: oil is this raw resource that we extract from the ground. But that has implications for the way we actually develop technologies and the way we use data. The language shapes our perception.
Speaker 3:And it's metaphysical, and there's a value system here. Our burning of fossil fuels has enabled this incredible appetite for energy, which has fuelled big-P Progress, right, and big-P Progress has been good. But when you start applying that logic, that highly literal and also figurative extractive logic, we get surveillance capitalism. So all of these deep narratives are related. One thing I'm interested in here, though: if we step back for a moment and do a little bit of problem framing, I think that will help animate why the work that we're doing at Tethix has such relevance to where we are today.
Speaker 2:Yeah, and bounding these problems is sometimes somewhat of a challenge, but let's bound it and kind of frame it. So we've got this context, drawing on this view that technologies aren't developed, don't exist, independent of our social relations and our culture and our values and belief systems. We're developing technologies in a way that reflects this more dualistic way of seeing the world. They're there, from a utilitarian perspective, to serve some purpose within society, in the case of the legal construct of a corporation, a higher-order value of profit, capitalism we might say, and that shapes the behaviour, the views, the language of the people responsible for developing these technologies, be that UX designers, data scientists, software engineers, analysts, project managers, product managers, what have you.
Speaker 2:And this is kind of in a digital technology context, but people are human and they, consciously or unconsciously, recognise that they have values. Most people, most of the time, will want to express what they believe is good. They don't always know how to define what they believe is good, but they want to act in alignment with it. In an organisation you'll sometimes define those as, these are our values, these are the principles by which we'll guide our actions, but there's a gap between those statements that get made and the behaviours that take place in the context of actually developing, building and deploying technologies.
Speaker 3:And just quickly, in the behavioural sciences this lack of relationship between what we seem to want, or say we want, and what we do is framed as the intent-action gap, and explanations of such a phenomenon, I think, are typically grossly oversimplified. But push that to the side for a moment. We refer to this in an organisational and institutional setting as the ethical intent-to-action gap, and there's actually a body of research that was published by a group at MIT in 2020, led by Donald Sull. What they explored was the relationship between a corporation's stated values, its culture and the behaviours that emerge from that culture, and the basic conclusion, if you will, of this body of research was that there isn't even a statistical correlation. So this ethical intent-to-action gap that we're talking about exists at all levels of a formal organising structure, and it's exhibited systemically. It's all over the place, and the problem that we're effectively trying to solve is, how do we close that gap?
Speaker 1:It's a big delta. And again, whilst you were talking about all this, it made me think of, I think it's Marvin Harris and the concept of infrastructure, social structure and superstructure, which we could loosely define as tech, organisation and culture, and thinking how, even though the organisation might say, okay, our culture is this, we believe in this. So, from our point of view, our values might be, I should know this, they're on the website, but repositioning science is one of our values, redefining our approach, and essentially regenerating the planet, and I think there's one other that's probably profound that I'm forgetting. Anyway, we have those and they are our values. And then something might happen. A prime example, and this is a really strange example, would be acoustic panelling, right? We would love it to be all upcycled, recycled, we would love it to be this, that and the other, and then, when it comes to it, because we don't do full cost accounting, it's way cheaper to buy something that's petrochemical-based or something of the likes, rather than finding a bio-based alternative or solution, and there's literally nothing out there. We could grow our own fungus to make it, and that's something we're looking at doing, but we just don't have the infrastructure to do it. So even though that might be what you want to do, and there is the will and the intent to do it, there can be a lack of infrastructure to allow the creation of things which are bio-based, home-compostable and based on green chemistry.
Speaker 1:So there's that level of it. And then there's also the organisational level. From the legal perspective and from an economic perspective, there isn't full cost accounting, so you only pay for the cost of extraction rather than what it actually costs the environment or society at large. You privatise the gains and socialise the losses. So if we did take that approach, or if we switched things up, then these things would cost way more than getting some old clothing and recycling it, and purchasing it from uptext, which is something we are going to look at, collaborating with someone locally who is actually doing that. It's circular, but at the moment very expensive. Hopefully in the future these things can be legislated in support of, with incentives to bring the price down, instead of incentivising and providing discounts for people to use petrochemicals or whatever else.
Speaker 1:So it is interesting, I guess, and how that relates to what you were saying is that I can see how that delta can appear, as someone who is actively trying to run an organisation based on living principles, striving to be as regenerative as possible, giving more than we take and basing the organisation on an ecosystem. But there's only so much you can do. It's like being in a sick environment, right? If the whole thing is fucked and you're a tree trying to grow, and there's no mycelial network to support you, no nitrogen-fixing bacteria, no rainfall anymore because we've had desertification after removing the topsoil, then you're screwed. Yeah.
Speaker 2:And this is the bounding problem as well. Like, why does this gap exist?
Speaker 2:Why is this delta so large across organisational and institutional contexts? It's not an easy problem to bound, because it's so entangled at every level of the system dynamic, our political systems, our mindsets, how we produce things. It's a wicked problem. And we acknowledge deeply that you want the minimum amount of dissonance between what you truly value in the work that you do and what you actually do in that work, because, for lots of different reasons, the more dissonance you experience, the less productive, the less creative, the less innovative you're actually going to be able to be, because the less well you're able to be. So even from that perspective, closing this gap, just making an economic argument, you could say, well, we can increase the productivity of our nation, taking this purely economic perspective, by decreasing the amount of cognitive dissonance that people face at work. And that's probably not the right framing.
Speaker 1:No, I think it's not your only framing but it is an important contribution. I think it's essential to frame it like that for your psychopaths and sociopaths. Yeah.
Speaker 2:The bean counters of the world.
Speaker 1:For sure, but it's acknowledging it's all interconnected and it's like, yes, it makes economic sense, but doing the right thing will usually make economic sense, will usually make social sense, will usually make sense for the individual and will also make sense for the ecosystem or biosphere as a whole, depending on your framing and how you look at our system. Yeah, absolutely yeah.
Speaker 3:One of the things I'd like to build upon here, in terms of this ethical intent-to-action gap, just to, again, for folks that are interested, start making it a wee bit more concrete. At the moment, let's say you have an organisation that wants to communicate some type of value system. It tends to be done in a fairly paternalistic way. It's very top-down. There's some type of mandate. It'll tend to come from those that are responsible for corporate governance; the board, the CEO and potentially some other executives will interact with it. There will often be some type of external contribution. You might bring in a professional philosopher or ethicist, or you might bring in some type of services firm.
Speaker 3:One of the big mistakes that corporations make is they often bring in one of the big four; we'll just glaze over that for now. And then these values, this value system, these principles are communicated and, somehow, magically, they are then supposed to be operationalised. But it's this really disembodied approach that fails to reflect the reality of the people who are actually doing the work. It's not reflective of the tools, the practices, the sociocultural context, the rituals, et cetera. And so what we're trying to do is intervene in the system at different levels, and I really like this sort of iteration that some of the folks at Griffith have done on the iceberg model of systems theory. I've sort of got it on my phone here, yeah. Give us a look... oh, that's beautiful.
Speaker 3:Yeah, more of a kind of fractal branching tree, absolutely.
Speaker 3:You know, which represents a sort of mycorrhizal network and whatnot. And so we're trying to make these different contributions through, and we'll talk about what these things are in a moment, SMILES, Elemental Ethics and ETHOS, which is a sort of product suite of practical ethics products, and they operate at different layers of the system. SMILES is a language system, the Symbio-Memetic Interwoven Language Embodiment System; it's a bit of a mouthful. Elemental Ethics is this new type of ethics framework, this alchemist-inspired ethics framework, that operates at the level of how folks actually develop products and services in the real world today, and it meets them not just where they are socioculturally, but where they are ritualistically, in the actual tooling that they use to get their work done. It's reflective of their workflows, their collaboration practices, et cetera.
Speaker 3:So we're trying to take this systemic approach, and maybe at this point it makes sense to talk a little bit about SMILES, a little bit about Elemental Ethics, a little bit about ETHOS, and how they fit together. And, Sam, the thing that I showed you, I think it helps reflect how, instead of just doing the thing that for the most part is done, which is operating at the level of events, symptoms, what we're told, et cetera, we start at values and mindsets, which has the capacity to influence behaviours, spaces, interactions, practices, patterning, et cetera. And these are just models, right? They're simplifications of the real world, approximations, maps, not the territory.
Speaker 3:But even though all models are wrong, we feel like this one's useful.
Speaker 1:Absolutely, and maybe to circle back to what we should have done at the beginning, but I'm very nonlinear as a thinker, so that's fine by me. Do you want to let us know what Tethix is?
Speaker 3:Maybe just like a terrible idea.
Speaker 1:Let's not do it, or do we just have to figure it out and piece it together like one of those movies that you just like?
Speaker 3:A Stanley Kubrick movie, yeah, exactly. Yeah, this is Tenet; this talk is Tenet.
Speaker 2:Yeah, look, I mean, what is Tethix? We're a kind of social venture that is essentially trying to help people reimagine and develop technologies that support and enable human and planetary flourishing. Tethix, you know, is a representation of its founders, as you would imagine; it's an expression of our belief systems and value systems in that sense. We've spent the past three years almost in R&D, really falling in love with this problem that we're solving. And the ethical intent-to-action gap is a symptom at the top level of the system, a symptom of all these underlying structures, patterns, mental models, belief systems, ways of working and being and learning that are really rich tapestries, interwoven. It's really hard to get at that. So no, we're not solving some simple problem. And, you know, likewise with the work that you do.
Speaker 2:You can kind of acknowledge that. So we have to come at it from lots of different angles and perspectives and use lots of diverse fields; we're inherently transdisciplinary in our approach. And we've spent a lot of time focusing on learning and how people learn, and looking at how you actually close this gap. There's a skills and knowledge element to this. There's an ability element, in terms of behaviour within an organisation. There's cultural change. There's an authorising environment from the board level, as Nathan was talking about: top down, bottom up, middle, in and out.
Speaker 2:But there's also broader social stuff that you can't separate out, because why a product manager, who might actually be responsible for prioritising a team's time in a cross-functional product team, is not able to follow through on the intentions or commitments they've been making as a team is interpersonal, and it's not separate from their personal context in life, because the next feature release might actually be related to their KPIs or OKRs, and that's tied to promises around, I don't know, buying their daughter a pony or something like that, maybe not the pony bit, going on holiday with their family, right, promises that they've made. In that sense it's like, oh crap, if we take more time on this next feature that we're developing, oh shit, that's connected to my KPIs, my performance bonus.
Speaker 3:There are critical path dependencies, many of which are imaginary or intersubjective. Yeah, yeah.
Speaker 2:So, in a roundabout way, we're helping people to understand tech ethics in a way that makes them smile, making it fun and accessible and playful and relatable. You don't need to be someone that reads and dives into ontology and epistemology and moral philosophy and understands complex systems. It's helpful, that's great, but you don't need it.
Speaker 1:It's kind of like traditional storytelling, at least from what I'm aware of in the Indigenous Australian context: the stories and the meanings are fractal. So you can listen to it, you can read through it and take it at the literal level. You can listen to it and take it at the metaphorical level. You can listen to it and take it at the spiritual level, and then it just keeps getting deeper and deeper, depending on how you interpret it, given the framing or the teacher or the medium. It's constantly changing, and it sounds like what you're expressing there is kind of similar. You're going to be able to offer and meet people where they're at and try to find a way to, I guess, embed this, depending on what level of the developmental journey both the people and the organisation are at. Is that right?
Speaker 2:Yeah, that's a good way to situate it, because, acknowledging that we're all diverse, we've all come from different backgrounds, we've all got different cognitive abilities, different interests and curiosities and questions that we might ask. Some people want to dive into the detail; they're naturally curious, like, oh yeah, wow, okay, I got this at the surface level, great, so give them an opportunity to go on a journey, to dive down wombat holes and come out anew. And that's important, because I think there's an innate curiosity there; people love learning, we're just not provided with contexts to learn in ways that tap into our innate curiosities. And again, not everyone's going to want to delve into wombat holes, and sometimes they just don't have the time, right? There's a kind of utilitarian aspect to this that's quite important. We were having this discussion the other night: survival first, science second. If you're worried about your income, or meeting the needs of your family or what have you, you don't have time to sit down and spend three hours engaging in deep philosophical inquiry. But that's also a cultural thing.
Speaker 2:Referring back to Indigenous peoples, storytelling was deeply part of their interactions and rituals, and you can go at different layers. There are these different levels, these fractal patterns, and how deep you want to go depends on the context, or the time, or the story, having the yarn around the campfire. And we've got this concept of getting around the campfire. We use Elemental Ethics, with air and earth and fire and water, and these represent different skills and skill sets that are relevant for teams building technology. We come from a background in building tech, where we're deeply practical, but we've got a reasonably solid philosophical background and set of experiences that help us abstract away some of that deeper complexity and surface it in a way that's actually meaningful to people.
Speaker 1:Like kind of through a mythopoetic framework, yeah, absolutely.
Speaker 3:Why don't we dive into that a little bit further? So our sort of collective philosophy gave rise to SMILES, right, and SMILES gives rise, to some extent, to Elemental Ethics. Elemental Ethics in practice is embodied, operationalised if you will, through ETHOS and the suite of products that are integrated into everyday tools. Let's talk a little bit about that stack and how it operates at different levels of the system, in order to hopefully and very positively contribute to different conditions from which we build these socio-technologies that contribute to human and planetary flourishing. Yeah, it's a lot to unpack there.
Speaker 3:Easy question. I'll give you 31 seconds, go for it.
Speaker 2:I mean, firstly, there's always this framing problem we come up against, because it's not a linear process, but for the purposes of the conversation here let's try and make it as layered and linear, in this kind of vertical stack, as possible. Because there's an emergence. One of our principles is playfulness, following our curiosity and letting things emerge through a process, and dealing with ambiguity is also in the co-design principles on our website. Following through on these, dealing with ambiguity is hard, sitting with ambiguity is hard, but actually that tension and discomfort can bring about a whole bunch of things.
Speaker 3:You've got to trust in the wisdom of emergence to some extent.
Speaker 1:And I think, especially if you take it in the context of serious play, because that makes it almost like an antidote to the volatility, uncertainty and ambiguity that you're surrounded by. And you see this even in, this is a strange loop out, but even in warfare, or in contexts like, especially, I know the British would always crack jokes about this, that and the other, and you always hear this in the stories about the trenches, they would just be making jokes about the context and the situation and the setting. I feel like it's almost like a natural evolutionary response for humans to do that.
Speaker 2:But our context is like, well, you can't make jokes about that, or you can't be playful in this situation, it's too serious. And that's why, yeah, I mean, those are beautiful paradoxes that we face, and it's core to, I guess, how we approach stuff, because we acknowledge the benefits of serious play and how it activates different areas of our cognition that are really helpful for innovation and enabling that emergence. SMILES actually emerged more out of a reflection; it was there as something that was influencing us, and I'll try to explain that as this foundational layer. In our R&D process we moved from traditional courseware-type stuff to much more social and organic ways of exploring challenges, and we used a framework that came out of the Apple Classrooms of Tomorrow program, which has actually been running since the 1970s; the second version of that, which ran in the early noughties, gave rise to what's called challenge-based learning. And we've got a background in instructional design, learning experience design and building curriculums, so there's a richness and diversity in the skill sets and knowledge that we've drawn upon. We took that and we looked at the literature and how it's been used, and it's got this great mapping to the way that we actually build technologies. It's like, start with this big idea, let's call it tech ethics. Let's explore that, let's make sense of what that means, and then you investigate it, you dive in. So it's like, oh, we've got this problem, this feature that we want to develop. Great, let's actually explore it from lots of different angles and start defining it. Great, let's actually practise it, put it into implementation, but reflect on the process.
Speaker 2:So we took this learning model and we ran a pilot program in March last year using it. But to bring it to life, we actually drew inspiration from serious play and serious gaming and thought, okay, forget game mechanics and things like that; a lot of it was about how do we create some epic meaning, how do we have stories that drive this. One of the things we'd learned through a lot of our experiences is that creating a bit of separation between the serious and the playful, by creating the space, the story space, for that, is always really helpful. It enables people to deal with ambiguity much more easily than if you deal with serious stuff head-on, where the prefrontal cortex and default mode networks take control and draw on existing mental models.
Speaker 1:That's such an important framing, sorry to cut you off there. You see this a lot in Iain McGilchrist's work about the left and right hemispheres and creativity: you need to have that default mode network interrelated and interconnected, in conversation and dialogue with the executive functioning, and so much of our society is focused on the executive rather than the default mode; I guess you could say the master versus the emissary, if you're going to use his language. And it's really fascinating thinking about how you create contexts where you can have more coherence between the two, so that you can come up with more novel solutions that aren't just bounded by one way of looking at a context.
Speaker 2:Yeah, it's an integrative kind of lens that you're trying to put on things. So, getting, I guess, to the point here: what we did is we created this representation of a learning world, this world that you explore, and we called it the Archipelago, and there are places and spaces with stories related to them. So there was Zoria, and Zoria is from Slovenian and Slavic mythology; one of our co-founders is from Slovenia. She's brilliant, I'd love her to be in this conversation, but she's in Slovenia and the timezone definitely didn't work for her. And then there was this wild land, which is a place of exploration and research, and Practicalia, which is this place of practical implementation and co-design, and Sage Isle, right.
Speaker 2:So we created this fictional world, and that resonated with people; it enabled this different approach to learning. And SMILES, as we later defined it, was essentially the observation that once we started anchoring to a physical space, even in coming up with the stuff, you're dealing with logical inconsistencies that you need to address. You actually have to deal with physics, you have to deal with what a natural system is, what's going to make sense to people.
Speaker 2:And out of this process emerged this language system, these metaphors and stories. And inherently we come from a place of acknowledging a deep sense of relation to the natural world, so that's infused in our approach; it's not like we imposed it, it was actually core to our being. But SMILES was then defined as, what is this thing? And this came through me sitting in reflection, drinking some mushroom tea. Maybe that helped; we'll just attribute it to the fungus.
Speaker 2:It was like, what is this thing, these patterns that have been emerging in our work? Can we name it, right? Because when you name something, you give it life, and that's core to the term itself. We see language as a symbiont; language almost has its own life in its use and relationship to us, and as a technology it's one of our first technologies as a species. But it's also got this memetic nature, going back to the idea of ideas, words, concepts and framings as memes that evolve over time through usage, where the fitness of the meme is dependent on context. So it's symbio-memetic and interwoven, because the language is interwoven in the way it comes about in the system, and it's about embodiment. It's embodied, firstly, in the natural metaphors and language that we're using, so it creates this experience where people can much more readily get something that is tangible, that they've directly experienced, and there's almost a basic sense of the complexity of this when we use those natural metaphors. So this acronym kind of emerged through the process, essentially, of going, what is this, what's happening here? Yeah, thank you, fungus, for supporting me in that process.
Speaker 2:What was the acronym again? It's the Symbio-Memetic Interwoven Language Embodiment System. So it's a way to embody this language that helps us recognise the symbiotic relationship between humans, technology and the natural world, but also the symbiotic relationships that play out within organisations, within our teams and these environments. That was in Elemental Ethics already; we just managed to find a way to name it and start actually going, yeah, there are all these things that we're drawing upon, from cognitive linguistics and the cognitive sciences to cultural theory. There was a richness that emerged through us going, yes, this is a thing, and that accelerated a whole bunch of our process. As soon as we were able to go, yeah, this is a thing, we were able to see it, right, and language has that role: as soon as you've got the language to explain something, it works with you in a different way.
Speaker 1:It's like making the unconscious conscious, and then, once it's conscious, you can work with it. It's got a position, you can understand where it is, you can triangulate it, you can locate it and then work with it. Yeah, that's super exciting. I had no idea. In my mind I was like, which matryoshka came first, SMILES or Elemental Ethics? So in that context, SMILES emerged from Elemental Ethics, to some extent?
Speaker 3:I think SMILES emerged from this process of exploring, building, experimenting and interacting with all different folks throughout the pilot, and it builds on decades of our work prior. But, and this is to Matt's point, there isn't a clear linearity here, because Elemental Ethics to some extent helped give birth to SMILES, and SMILES also helped clarify and, you know, give rebirth to Elemental Ethics. Yeah.
Speaker 2:And there are kind of feedback loops between them all, right, and that's helpful for us, even just in that framing, because we see this thing as living. That in and of itself is something that we want to bring to life in how this applies to the problem we've fallen in love with solving: the ethical intention-to-action gap. So elemental ethics is this embodiment of SMILES, essentially, that works with these four elements. Air represents communication and collaboration. It's like breathing, the flow of air through our processes when we're trying to communicate, build shared understanding, create psychological safety so that we can talk and express and work towards shared objectives. We need that; those skills are essential for developing technologies in a responsible way, you know. And then we've got earth, and earth represents research and exploration. To do that you need to draw on diverse perspectives, not just internally but externally. Talk to the stakeholders that are going to be impacted by decisions and incorporate their views into the process.
Speaker 2:Look at what's been done in the past. You could say that's just good product design, but it's not done, you know. Then fire is technology and practice, and water is pause and reflection. And these are represented in the archipelago as well: there are NPCs and, you know, great vision stuff, but pragmatically, building the tech, it's think big, build tiny, which is another one of our principles. So this is about balancing these elements. If you've got too much fire and you're too focused on the tech and the practice and build, build, build, that's the move fast, break things mindset, right.
Speaker 3:Leads to incessant fire fighting.
Speaker 1:Or you have a wildfire that breaks out of control and burns it all down. I would love to double-click on that. The positioning of tech as fire is a really interesting choice. Rather than earth, where you'd go, oh, this is the grounding of it, you've chosen fire. Do you want to unpack that?
Speaker 2:Yeah, I mean, firstly, fire as a metaphor. You could literally just say it's our first technology, discovered in that sense, because fire always existed before humans as a chemical reaction. But it represents warmth and light, and also extraordinary power and danger. It's this fascinating thing, and when we think about fire as technology, in our practice and interaction with technology, it is this kind of magical thing. You can relate to it: yeah, wow, I'm playing with fire, I'm playing with generative AI, and it's like, oh, great. So the choice of these things, and how that emerged, it wasn't just, oh, this makes logical sense, let's do that. It emerged through a whole bunch of our process, talking to people and figuring out what memes work.
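For readers who want to ground the four-element framing above in something concrete, here is a minimal sketch of how the element-to-practice mapping and the "too much fire" imbalance idea might be represented in code. The names, weights and balance check are invented for illustration only and are not Tethix's actual tooling.

    # Illustrative sketch only: a hypothetical representation of the four
    # elemental-ethics dimensions discussed above, with a naive "balance" check.
    from dataclasses import dataclass

    ELEMENTS = {
        "air": "communication and collaboration",
        "earth": "research and exploration",
        "fire": "technology and practice",
        "water": "pause and reflection",
    }

    @dataclass
    class TeamSnapshot:
        """Rough share of a team's recent effort attributed to each element (0 to 1)."""
        air: float
        earth: float
        fire: float
        water: float

        def imbalances(self, ceiling: float = 0.5) -> list[str]:
            """Flag any element dominating the mix, e.g. too much fire ('move fast, break things')."""
            shares = {"air": self.air, "earth": self.earth, "fire": self.fire, "water": self.water}
            return [f"too much {name} ({ELEMENTS[name]})" for name, share in shares.items() if share > ceiling]

    # Example: a build-heavy sprint with little pause or reflection.
    print(TeamSnapshot(air=0.15, earth=0.10, fire=0.70, water=0.05).imbalances())
    # ['too much fire (technology and practice)']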
Speaker 1:I think it's a beautiful summation, and also, intentionally or not, it reminds me a lot of the Emerald podcast, the AI episode that he brought out recently. Thank you so much for bringing that up, yeah.
Speaker 3:So what's really interesting there is, when we listened to the Emerald podcast and I shared it with a bunch of folks, Matt and Alia included, we were all like, he's talking about what we're doing, 100%. This is uncanny. And then the Center for Humane Technology. We were just on a call with them earlier today; they ran a virtual session recently with the host of that podcast.
Speaker 3:You know, the embers aren't just smoldering; they're alive and well, and there are folks gathering around the campfire right now in the tech community. So what it feels like we're doing is breathing mythopoetic life, to use that framing, into this. And more than just the power, the richness, the connectedness, the embodiment of the narrative, we are making that deeply practical and relevant to the folks who are connecting to it.
Speaker 3:So it's kind of like, if you listen to the Emerald podcast and go, I haven't thought about it that way before, that is amazing, I really connect with that, what do I do now? I think we have an incredible answer to that, or at least have begun answering it, through, again, collectively, decades of work, but more concretely, in this context, the last three years and everything that's come out of that. So to some extent I think we're unsurprised that that podcast has been so popular, but in other ways it's kind of surprising, because it's so far outside the normal distribution. It's crazy.
Speaker 1:I mean, for context, for those who don't know, the Emerald is a podcast from Joshua Schrei. I don't actually know how to pronounce your name, Josh, I hope that's right. It explores myth, story and imagination through a mythic lens. Very interesting podcast. I might have to pause here and take a leak. How's everyone else going? Cool with the mission? I could easily do that too. Yeah, let's take a bathroom break. Nice.
Speaker 3:Did you send out that email yet?
Speaker 1:Sitting in my house. I'm sending it out with, we were trialling doing a weekly update, so it's that, and having to put our prices up because of CPI, inflation, all the fun things. We've tried to put it off for the last two years, but it gets to a point where it's like, I can't live on minimum wage, so we're going to have to pass that on, yeah, which kind of sucks a bit. I would suggest moving these slightly closer so that you don't have to lean in as much. Oh yeah, they're pretty chill, they can take a bit of a beating. I feel like I'm looking at a whole lot of phallic stuff right now. Did we capture that on video? Cause.
Speaker 3:That's epic.
Speaker 1:Yeah, unfortunately, that actually is there. I didn't think about that, to be honest.
Speaker 2:So much phallic in tech, let's start there.
Speaker 1:That's, I mean, that's an interesting concept.
Speaker 3:Maybe let's arrive there rather than just randomly starting there. Let's see where we end up, yeah.
Speaker 1:Phallic. So I believe we're now back from our break. Are you sure? No, absolutely not, I have no idea. I pushed pause when we were just talking about the Emerald podcast and how it relates to your work. Feel free to jump back into that.
Speaker 3:Maybe something I'll say really quickly and then I'll hand to Matt, because it might be worth him telling a little bit of a story about his time presenting at Intersect, which is kind of like the preeminent fintech conference here in Australia. So one of the things that's been really interesting with the language system, and elemental ethics as a framework, is that when we utilize this framing, when we embody SMILES in our interactions, the ease and seamlessness with which folks pick it up and, naturally, just as part of the interaction or conversational dynamic, play it back to us has thus far been overwhelming. And I think, Matt, some of what happened yesterday is a really great way to describe that, using a real, lived example.
Speaker 2:Yeah, and I guess, again, language is this living thing, part of how we communicate as a species. You introduce a word to help convey something, with some communicative intent in its use, and for the receiver it's in the mind, in working memory, and it gets used. If it's sticky, as a meme that's got contextual fitness, to use that memetic framing, it keeps getting used. So yesterday at Intersect I was doing a talk as part of the Consumer Data Right, which is an area of economic competition and technology reform that's been going for several years, and I was interacting with people during the day, catching up with people on what we're doing at Tethix, because I was in a policy role in that area for the past year that I've recently stepped away from, and updating them. Using these metaphors in some of these conversations, people immediately get it, you know. You're talking about elemental ethics and playing with fire, and naturally the terms start being used back in the conversation. And we know this; that's core to why we're doing it in this way.
Speaker 2:Part of the underlying theoretical grounding in cognitive linguistics is called conceptual metaphor theory, along with conceptual blending. This is the blending of words: you've got a source domain, which for us is nature, and a target domain, which is business acumen and technology ethics, and you create this conceptual mapping. And it works, it sticks, right? This is how we use language. Most language is metaphorical, particularly when we're dealing with abstract concepts, so we relate them to something that's concrete and physical. There are various arguments in the cognitive sciences around the concreteness of terms, abstract versus concrete, and we're not going to delve into all the academic literature debating this stuff, but there's a big body of evidence to demonstrate that it works.
Speaker 2:Another example, I guess, from a workshop that we ran around generative AI. Generative AI is this fire, stolen from the gods; we're attracted to its warmth and its power but also cautious of its destructive nature, and we're holding it in our hands. We used this metaphor in the workshop, exploring lots of different things, but when we got to the interactive activities and the discussion, people were using the language, right? So it has this utility in helping people understand stuff, but it also creates a new frame of reference for people, and that's a critical focal point for us, because we're trying to break down existing mental models that people have, and if you operate at that language level, it becomes much easier. In that particular workshop setting, people were talking about firefighters: do we now need firefighters? They did the conceptual blending themselves with the metaphor, talking about regulators as firefighters. So there's this organic process to language that's really interesting.
Speaker 2:There's a lot of really solid evidence to say this is something that works. And relating this back to the Emerald podcast, there are also these underlying things that, culturally, through lots of different exposure points, through media, storytelling, mythology and fairy tales as kids, resonate with our being in terms of stories. There's cultural nuance to these things, of course, because Indigenous cultures have different mythologies and methods of telling story. And, ironically, the Western scientific mind will realize, oh holy shit, they've actually been kind of right all along.
Speaker 2:It's the same damn thing that we're finding out now. Oh my God, you know. So there's this fascinating part of using language in this way. For us it's operating at, and Nathan, you were referring to this, these layers of the broader system, where the SMILES language works at the level of values and beliefs and also the practices, because language is something we use as well, and that has these sometimes very imperceptible results. Ultimately, we've got to figure out how to measure this stuff, keep track of it and find the right qualitative and quantitative data to validate and test our hypotheses and assumptions and things like that. But yeah, it's really fascinating.
Speaker 2:The Emerald podcast is just really powerful recognition that even the deeper, mythical ways of telling stories resonate with people, and there are probably lots of reasons why, at this point in time, that is attractive. Because with the current mindsets that we're using to try to understand, what is our role, and this new tech, generative AI, large language models, we're lacking the language to be able to really relate to it and understand it. And whenever you lack a language for something, you tend to go to the mythopoetic, yeah.
Speaker 1:ways of relating and talking about something.
Speaker 2:Metaphor, allegory, simile. Deeper stories that tap into that part of our being, those ways of trying to understand and make sense that are pretty ancient.
Speaker 1:I mean, I would play with this and say that even science as a concept has somewhat become an ism in its own right, and there is a religiosity to it.
Speaker 3:Oh, we're going here.
Speaker 1:A fervor to it and everything, which is just fascinating. So there's always going to be religion, even when you remove religion. The religion of atheism is a religion, you know what I mean; even the religion of no religion is a religion. So it's fascinating to think about how it's always going to be there, and rather than eradicating it and then just having it manifest in some unconscious way in which we relate to the thing, it's calling it out and going, this is a really useful framework for making sense of things. How do we work with it in a generative way, consciously acknowledging how it will have iterative effects on us, and then, knowing that, go out with intention into the world with it?
Speaker 3:There are a couple of things to pick up on there that I wanted to touch on really quickly before we get into the metaphysics and philosophy of science; who knows where that's going to end up. You know, Wittgenstein said the limits of my language are the limits of my world. I actually think the qualities of my language are the qualities of my world. That slightly different framing can be quite powerful. I just want to leave that shit in the air for a moment. I deal with this because we're building this thing ourselves. We will actually go to the market for funding sometime soon, so if you're interested in funding something like this, hit us up, y'all. We do other stuff too; we have a diverse portfolio of interests. Sam alluded earlier to having a philosopher in residence, and that is now yours truly. So there are lots of things that we're doing.
Speaker 3:One of the challenges that I deal with a lot in my work is working on really practical, embedded approaches to ethics within very large organizations. How do we bring these value systems to life in a meaningful way? A lot of that draws on a sort of landscape of moral theories, or, more formally framed, a pluralistic approach to moral theory. One of the things that's tough for me, though, is folks have these value systems and they don't really know where they come from. So when we're trying to productively work with tension, as in why does one particular party to a dialogue prioritize one value system over another, we necessarily have to get into metaphysics. Now, there are many ways in which you could do that, from the most esoteric to something that's actually super accessible, and I think that has significant socio-cultural implications. We have these axiological, ontological and epistemological beliefs about what is, what can be known and what we ought to value, and often these remain in the unknown unknowns. And there are many philosophers working in this space.
Speaker 3:There's been a resurgence of metaphysics over the last two to three decades, after it very much fell out of favor, and I would argue there is huge value for scientists, science communicators and those interacting with different scientific disciplines or fields in also spending some time trying to healthily and humbly relate to metaphysics. Bjorn Ekberg, for instance, does a really good job of this, describing the relationship between cosmology and metaphysics. Now, that makes some scientists very, very uncomfortable, but I think he makes some very compelling arguments. So, not that we have to get into scientism per se, but I think there is an opportunity for us to open ourselves up to different ways of knowing, being, seeing, thinking, doing, feeling, et cetera, drawing from these diverse perspectives. We recently communicated what grounds our philosophy, and I think it's a fairly interesting and concise piece, and it talks about this sort of oscillation, this dance, between critical realism and metamodernism.
Speaker 3:These are just labels, and they build upon theories that have developed as a result of, or in response to, other theories and their strengths and weaknesses, based on different orientations or value systems. And we recognize again that the map is not the territory, but we're still interacting with that stuff, and I think it's really important. So we can be scientific whilst being somewhat mystical or mythological. We can be mythological whilst being deeply philosophical. These things don't have to be direct contradictions.
Speaker 1:They can in fact exist healthily in relation to one another. To an extent they're actually needed to enhance one another, and if we want to progress further than the point we've currently got to, there's going to have to be an integrative approach where you transcend and include, which I guess harkens back to that metamodernism call you just made, rather than throwing the baby out with the bathwater. When we look at modernism and postmodernism, it's like, how do we take the postmodern critique along with the modern progress narrative and all of that sort of stuff, how do we sit with the tension of that and then put that shit into?
Speaker 3:the Vitamix.
Speaker 1:Yeah, Right and just yeah, and then-.
Speaker 3:Drink it all up.
Speaker 1:Exactly, bring out a really, really nice thick shake, yeah. Of-.
Speaker 3:With some dried mushrooms, of course. Yeah, yeah, shhh.
Speaker 1:No, it's an interesting space. So did either of you want to double-click on that a bit further?
Speaker 3:That could be an Alice in Wonderland type rabbit hole that would be challenging to come back from.
Speaker 2:I think it's important to just reinforce that: creating the space for both ends and acknowledging the unfixed nature of a position.
Speaker 2:You know, that's kind of postmodern, but we're also dealing with, as you said, modernity, the narrative of progress and these grand narratives, and they're helpful for some ways of knowing and understanding, but also constraining. Yeah, like going, okay, there's no fixed truth around this and these narratives have created all these problems for us, acknowledging those at the same time, but enabling that tension to actually be a force of creation and a force of learning, you know, is quite powerful.
Speaker 2:And again, coming back to us going, we have to name this, draw the line in the sand, put ourselves in the box, however you want to frame it. You know, personally, I got diagnosed with bipolar when I was 19, and the label itself was something I kind of struggled with, but to some degree it was helpful. It's like, well, I've got this thing that I can now play with and go, I've named it.
Speaker 1:What are the enabling constraints of this thing? Rather than it being boundaryless and nebulous, it's like, okay, great, let's work with it. Yeah.
Speaker 2:Exactly right.
Speaker 2:And when you name an object, you name a thing, you animate it. It becomes something that can be worked with in your conscious mind much more readily than if it's just, that's over there, or I'm experiencing all these things. Now I can put it in a container and deal with it. It's like a visualization process. It's a bit of a tangent, but in therapy sessions you take your experiences, your emotions, your feelings, and you might visualize putting the stuff into a box, or it's that chair over there sitting across from me, and even that distance helps. Sure, all this stuff is entangled, like wave-particle duality, but when we create that separation, it enables us to explore, and that's helpful in so many different ways. And we don't often actively use this understanding of how we work cognitively to support us in improving ourselves, in learning to just be with ourselves.
Speaker 2:It's an underexplored area. It's definitely useful.
Speaker 1:Just speaking from personal experience, what you described is also a practice in metta meditation. You imagine and position someone opposite you that you love, or you start maybe with a puppy, something where there are no complicated feelings, and you just manifest love for that thing. Then you switch it in for someone you're somewhat antagonistic towards, maybe a parent, but still share that love with them. And then you flip it, put yourself there and start trying to express that to yourself, and, my gosh, I realized that I'd never actually sent myself goodwill or love, ever. I realized this in the moment of first practicing metta meditation and actually teared up a bit. I'm like, oh, wow, cool. So I guess the reason I'm saying that is I have direct felt experience of what you're referring to, where you can do these.
Speaker 1:I don't know, it's almost like magic tricks with the phenomenology of the interior sciences. You go, just put that there for a minute, and you're like, oh gosh, okay. There's a lot of power there which, for the most part, we're only now coming to through cognitive science and these other ways of relating, but which is very, very healthily developed in wisdom traditions like Taoism, Confucianism, Buddhism, a lot of the Eastern stuff. I mean, there was actually quite a lot in the West before we kind of destroyed it all.
Speaker 2:But yeah, dominant preferences for particular ways of being and knowing were popularized and became this hegemonic force, the dominant force in shaping the narrative. And I think that's an interesting point there, around how we caricaturize the Western view as something that was absent of a lot of these things that were relevant to wisdom.
Speaker 1:Have a look at the Celtic tradition. Yeah, exactly, have a look at the.
Speaker 2:Druids, man. There's a deep history there that tends to be dismissed because of these caricaturizations, and that's helpful to be aware of. As much as, you know, we do the same with the Eastern, the Western, the North, the South. We name them, we create these simplistic representations so that we, with our small monkey minds, oh, it's bounded rationality.
Speaker 1:It helps us function quickly rather than have analysis paralysis. So they're useful, right? Everything can be useful in its own way. It's just understanding the edges of the puzzle piece, how it fits into the puzzle you're trying to make with different ways of looking at the world, and which piece to pick up and use at any one time. Well, lens is probably a better metaphor. What do you think? I feel like you're thinking about something over there.
Speaker 3:Well, I'm trying to be respectful of those who have invested the time to tune into this traversing of many beautiful and interesting topics, and I wanted to suggest that, although sometimes there's a beauty in just leaving shit in the air, just drop the mic and walk off, I do think we should respect the investment that's been made if people have gotten this far, and maybe just offer some basic practical guidance. So, if you're based out of CoLabs and you're trying to explore, how do we better animate our values, how do we live in closer alignment to them, to the best of our ability, what might that mean if I'm working in a cross-functional product development team in the context of a digital product? What are the types of things I can take away from this discussion and use to help animate my way of being in that context of life?
Speaker 1:Sounds like a good way to wrap it up.
Speaker 3:Yeah, I do like a tight conclusion, definitely.
Speaker 2:Yeah, I guess the starting point for practical recommendations is following through, closing the delta. And I'm really grateful for the acknowledgement and context that you shared, Sam, around the materials that you're using for soundproofing; you're dealing with your constraints. So acknowledge that this is challenging, particularly for businesses that already have core beliefs and are doing things from a social entrepreneurship perspective. That's powerful, but there's going to be a gap, and you acknowledge that it's a process you're going to work through. So we've got the concept of pledges. You make these statements and commitments that are pretty concrete and bounded by time, project, feature, what have you, or by to whom: it might be internal, it might be to your end users, stakeholders, customers, society, future generations. You can create lots of different scopes there. When you create those, acknowledge that you might not be able to follow through, for whatever reason: budget, time, crap, we can't find a supplier, right? It's not binary. That's a really important point.
Speaker 3:Following through is an active, living, breathing process. And just to clarify, from purpose through to principles: in lots of ethics frameworks there is a purpose, a teleology, that helps animate an organization's reason for existence; there are values, that which we believe to be good; and there are principles, action-oriented statements that encourage us, based on our values, to do that which we believe to be right. Pledges take it to another level. This comes from our colleague Alia and her colleagues' work on responsible tech, and it's a wonderful framework, deeply baked into the architecture of Tethix. Rather than a loose interpretation of principles, which is so hard to act on, and which we see in so many different organizations around the world, and we've had very privileged exposure to many different types of organizations across jurisdictions, industries, contexts, et cetera.
Speaker 3:Pledges take things to that next level. They encourage you to deeply connect with one another, to collaborate on describing what it is you really want to animate in the world, and then to share in the process of bringing that to life. Then you monitor, you assess, you may communicate the progress you're making, because you want visibility of tangible progress. You may communicate that externally as part of a real, embodied commitment to radical transparency, right? So there's so much that can be done within an organizational setting, and it doesn't matter: you could be a two-person startup, you really could, or you could be a global multinational, like many of the corporations we've worked with throughout our careers. Pledges are a way to concretize, to make tangible, the typically abstract, often unrelatable, fairly disembodied principles that many organizations tend to overly rely on, and that, in part, actively contribute to what we're referring to as the ethical intent-to-action gap.
Speaker 1:So, for context, the pledge is different to a principle, is different to the values, is different to a vision. You're saying it's an additional level that can be added on to give deeper context to how you function as an organization.
Speaker 3:Well, it can be. You could call it a pledge or a principle; what actually matters is the substance, right? The purpose, values and principles framework that we're referring to there comes from a body of work by Dr Matt Beard and Dr Simon Longstaff from the Ethics Centre back in 2018, but it's a common sort of framing. And we argue about all of this type of stuff within the professional philosophy community. Even though morality and ethics have somewhat similar etymologies, obviously coming from different regions, cultures, et cetera, we tend to draw distinctions in many cases; other times we say no, no, they're basically the same thing.
Speaker 3:So there's lots of stuff around language, lexical semantics, et cetera. But what we are saying here is: let's say you have a mission, which could be likened to a purpose, a reason for existence, right, and that's cool. Then you have values that help describe what you believe to be good, formally drawing on axiology or whatever it may be, but this could come from anywhere. Then you have principles, which start describing how we orient our actions based on what we value. So these are driven by verbs, if you will, and should be. Sorry, yes, really important clarification: they're often not, which is one of the problems. So you could just restructure the description of your principles through a genuinely diverse, inclusive and highly collaborative process and then try to bring those to life. That's totally valid.
Speaker 3:But you could also take the work on pledges, and there's a very distinct structure there that forces a concreteness, an action orientation. It's bounded in a socio-technical reality, or a socio-cultural reality if you think of it that way, because there are rituals that you bring to life, there are monitoring frameworks that encourage you to check in and assess progress. You relate the pledges to the rituals, to the actions, to the things that you're actually developing, to other metrics or matrices or value systems that you're working with. So yeah, there's lots to it. So, what's the domain for Pledge Works? What is the actual domain?
Speaker 2:responsibletech.work. Yeah, so that's a Creative Commons framework. It's a community-based project that Alia started with one of her partners, Daniel, and Pledge Works is the framework that they developed to really, as Nathan said, animate something in a practical sense. A pledge is a commitment that you're expressing with very action-oriented language, that can be bounded, monitored and reflected upon, and, quite honestly, it's much more practical, particularly for people who are focused on building stuff.
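To make that structure a little more concrete, here is a minimal sketch of how a pledge, as described above, might be captured as a bounded, reviewable commitment. The class name, fields and review logic are hypothetical, invented for illustration; they are not the actual Pledge Works template, which lives at responsibletech.work.

    # Hypothetical sketch of a pledge as a concrete, bounded, reviewable commitment.
    # Field names and logic are illustrative only, not the Pledge Works template.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Pledge:
        statement: str          # action-oriented commitment, driven by a verb
        made_to: str            # scope: team, end users, society, future generations...
        bound_to: str           # the project, feature or time-box it applies to
        review_by: date         # when progress gets checked in on
        rituals: list[str] = field(default_factory=list)         # recurring practices that keep it alive
        progress_notes: list[str] = field(default_factory=list)  # what you learn as you follow through

        def is_due_for_review(self, today: date) -> bool:
            return today >= self.review_by

    pledge = Pledge(
        statement="Explain every data field we collect, in plain language, before launch",
        made_to="end users",
        bound_to="onboarding flow v2",
        review_by=date(2024, 3, 1),
        rituals=["fortnightly check-in: review what we shipped against this pledge"],
    )

    if pledge.is_due_for_review(date.today()):
        pledge.progress_notes.append("Reviewed: most fields documented; supplier copy still pending.")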
Speaker 1:Yeah, no, that sounds fascinating. I mean, we've got everything but the pledge, so it's fascinating to hear this, and maybe that's even something we can explore, absolutely, with the ethics team. It's like, okay, given this is our mission, vision, values and principles, what are some logical pledges that could come from this? From our point of view that's a fascinating thing to have anyway, because when you're trying to measure impact and regeneration and all of these things, so much of our work is qualitative, and everyone loves numbers. So it's an interesting space, and I'll definitely double-click on that another time.
Speaker 3:Just a super quick note on the measurement stuff. We're certainly not against measurement; we have research collaborations with leading organizations. It's really important. It's an important way of knowing, of assessing progress, of contributing to the world, of speaking the predominant language of the culture from which we largely emerged today.
Speaker 3:But a quick note here that I think is important: we have this overwhelming bias and propensity towards counting everything, and what I would encourage, even in the specific context of your pledges, is that you consider both qualitative and quantitative data across attitudinal and behavioral dimensions.
Speaker 3:It gives you a fuller picture of not just what's going on, but why might that be going on and how does it make people feel?
Speaker 3:How does it make you feel? So, just that slight evolution in terms of orientation: it's not just what we can count, it's qualitative and quantitative data across attitudinal and behavioral dimensions. A lot of folks who do user research are familiar with that, because that framing is quite popular; it was communicated by the Nielsen Norman Group, NN/g, back in 2014, off the back of Christian Rohrer's work describing the landscape of user research methods. It's really helpful because it basically says: you're in this particular context, there's some stuff that you don't know, what are the questions you're trying to ask as a result of what you're trying to answer? Okay, here's how to direct your focus. And so I think, if you combine that orientation with pledges as concrete commitments towards doing what you believe to be most good and most right, you are in a really great, and really practical, place to start moving closer towards the future you envisage as a result of the contributions you make in your work or everyday life.
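As a rough illustration of that two-by-two framing, the sketch below tags hypothetical evidence items by whether they are qualitative or quantitative and attitudinal or behavioral, then flags which quadrants a pledge has no evidence in yet. The method names and the coverage check are invented for illustration and are not the NN/g framework itself.

    # Illustrative only: tagging evidence for a pledge along the qualitative/quantitative
    # and attitudinal/behavioral axes, then checking which quadrants are still empty.
    from itertools import product

    evidence = [
        {"method": "user interviews about the onboarding copy", "data": "qualitative",  "dimension": "attitudinal"},
        {"method": "funnel analytics on the consent step",      "data": "quantitative", "dimension": "behavioral"},
        {"method": "usability session recordings",              "data": "qualitative",  "dimension": "behavioral"},
    ]

    covered = {(e["data"], e["dimension"]) for e in evidence}
    all_quadrants = set(product(["qualitative", "quantitative"], ["attitudinal", "behavioral"]))

    for data, dimension in sorted(all_quadrants - covered):
        print(f"No {data} {dimension} evidence yet for this pledge")
    # Prints: No quantitative attitudinal evidence yet for this pledge
    # (a survey, for example, might fill that gap)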
Speaker 1:Right, so would the pledges be.
Speaker 3:I wanted to drop the mic there.
Speaker 1:We can. If you want, we can actually drop the mic there. It wasn't going to be anything crazy important; it was probably just me wanting to keep double-clicking on that ad infinitum. So, are there any other points we might want to mention before we wrap up? Maybe, where can people find you?
Speaker 3:Where can you find us? I'm not giving away my home address just yet; that's a little assumptive. So tethix.co, t-e-t-h-i-x dot co. Matt and Alia have their own blogs; they publish really awesome content there as well. You can find us on LinkedIn, and we have this really clear stated intention to be amongst it and contribute to these conversations, so I think you'll see a greater volume and density of content from us over the coming weeks and months. We have some incredibly exciting projects: an upcoming pilot with a bunch of organizations across commercial and research contexts, lots of different types of content, some of it maybe in a sort of harder form. Won't give away too much. But yeah, check us out on LinkedIn, head to tethix.co, and if you've got comments, questions or queries, or you want to collaborate, we will welcome you with open arms and a big hug, if you're into that sort of thing.
Speaker 2:Yeah, I'd drop a mic.
Speaker 1:Perfect, that works for me. My gosh, thank you for joining us on that absolute adventure of a conversation. Hopefully these podcasts are getting more and more nuanced and professional as time goes on; who knows? It's a fascinating art form, learning how to share communications and conversations. So any feedback, any thoughts, any conversations you'd like to see here on the Strange Attractor, please let us know, and we look forward to seeing you next time.