The Human Code

Utopian or Dystopian? AI and Future Scenarios with Daniel W Rasmus

Don Finley Season 1 Episode 33

Daniel Rasmus on AI, the Future of Work, and Scenario Planning

In this episode, the host welcomes Daniel Rasmus to delve deeply into artificial intelligence, the future of work, and scenario planning. Daniel shares his journey from poet to IT professional and his involvement with AI in its early stages. He discusses the creative aspects of his work that AI has yet to replicate and explores several possible futures for AI: the 'meh' future, the transactional future, the healing future, and the ideologically bounded world. The conversation also touches on the potential legislative impacts on AI, the importance of long-term thinking, and scenario-based innovation. Daniel promises to return for another episode on the Serendipity Economy.

00:00 Introduction and Guest Welcome 

00:25 Daniel Rasmus' Journey from Poetry to Technology 

02:00 The Creative Intersection of Humanity and AI 

02:43 Future of Work and AI's Role 

04:07 The 'Meh' Future of AI 

04:39 AI and Autonomous Vehicles 

07:08 Scenario Planning and AI's Potential Futures 

16:34 The Role of Legislation in AI Development 

24:24 Concluding Thoughts and Future Discussions

Don Finley:

Welcome to The Human Code, the podcast where technology meets humanity, and the future is shaped by the leaders and innovators of today. I'm your host, Don Finley, inviting you on a journey through the fascinating world of tech, leadership, and personal growth. Here, we delve into the stories of visionary minds who are not only driving technological advancement, but also embodying the personal journeys and insights that inspire us all. Each episode, we explore the intersections where human ingenuity meets the cutting edge of technology, unpacking the experiences, challenges, and triumphs that define our era. So, whether you are a tech enthusiast, an aspiring entrepreneur, or simply curious about the human narratives behind the digital revolution, you're in the right place. Welcome to The Human Code. In this episode, we're excited to welcome Daniel W Rasmus, the founder and principal analyst at Serious Insights, with a rich background in scenario thinking and strategic planning. In fact, he even teaches it at the University of Washington. Daniel has been a thought leader in AI and the future of work, contributing to renowned publications like Harvard Business Review and Fast Company. He has also been an advisor, author, and speaker at various prestigious events and institutions. Today, Daniel and I will share insights into the future of AI through various potential scenarios and how businesses and individuals can prepare for these diverse outcomes; the importance of preserving human creativity and individuality in the development and implementation of AI technologies; as well as the ethical and regulatory considerations crucial for guiding AI development to ensure it benefits society while minimizing risks. Join us as we delve into these engaging topics with Daniel W Rasmus. This episode is packed with valuable perspectives that will inspire you to think critically about the future of AI and its impact on our world. You won't want to miss it. I'm here with Daniel Rasmus, and we're getting ready to talk deeply about AI, the serendipity economy, and the future of work. Daniel, it's been a pleasure getting to talk to you early on, and I just want to welcome you to the show, but also ask you: what got you interested in the intersection between humanity and technology?

Daniel W Rasmus:

Thank you very much for having me on the podcast. I started off my career not in technology, although I did technology when I was in high school, early computing with cards and other things like that, but I ended up becoming a poet. That's what my training is in; when I went to UC Santa Cruz, I was a poet. I left college prematurely, I'll just put it that way. I continue to publish poetry as well. But I got involved at a company my father worked for. I was working in the warehouse, and they kept bringing me in to teach me how to use the computer. Eventually we got to the point that it was, Dan Rasmus, come to the front office, the computer's down, and I would come in and reboot the thing and get it running again. I became their first IT person. And I started deciding to use the power of the pen to discuss my thoughts about what was happening, initially in manufacturing, which in those days was the area of MRP, and which very quickly started picking up around AI. It was the AI of old: expert systems and things like that, manufacturing simulations. And I've been writing about it and living it ever since. I do find a great intersection between writing and technology. If you read my blogs, they're probably a little more poetic than some people's, and sometimes I just let that go and have fun with it. And I certainly know that it's very difficult for things like ChatGPT or Claude to emulate my writing. I think that passion of being in and seeing the technology move forward, and being part of that, and talking about it and giving speeches and everything else: I just love doing that.

Don Finley:

That's fantastic. I love that you brought up that there is this creative aspect to what you're doing that is very difficult for LLMs to replicate. And in that regard, I think we're all seeing a proliferation of AI content being shared online, whether it's through social media. We do create some content using AI as well, and share it on LinkedIn or on our socials. But additionally, we're spending more effort now to make sure it has a unique voice, so that it doesn't become that commodified tone and taste you can see from it. Where do you see creativity playing out as we dig into the future of work and the future of AI?

Daniel W Rasmus:

If we do it well, that's our human battleground. We are the ones that are creating. I think one of the biggest issues that we'll see with AI, and one of the futures that we'll probably talk about, is the meh future: the AI is there, it writes okay, I don't really care anymore. It's just there, this background technology like email. Everybody used to be excited about email. We all hate email, but we can't get rid of it. I think AI may at some point live in that realm. That's one of the possibilities: it doesn't become the overwhelming solution to everything; it becomes a tool in the toolkit that's just there. And we'll see if it can get more creative. I doubt it, as it's consuming its own stuff. We saw some notes around Perplexity in the last couple of weeks about it consuming its own content and rewriting articles that it generated, and that's not a good place to go with any of this. I think it just makes the AI dumber in the future. But my hope is that people will continue to explore. I think our dynamics, our exploration, the way we voice things, and certainly the fact that the AI has no sense of an internal monologue or other very human elements, will keep us differentiated from anything that the LLMs can do.

Don Finley:

This future of meh hits me in a spot, because we get to this place where we go utopian or dystopian, but there is that middle scenario that's a little bit off. It just says it's going to become like the car, which reshaped society. You look at American culture, American suburbs, American city design: it's really shaped around cars. Do you see it that way? Is the car the meh? Are we in that state with the car?

Daniel W Rasmus:

Yeah, when I look at driverless cars, I look at a Tesla, and I live in the Seattle area. It used to be the BMW capital of the world; now there are Teslas everywhere. I find it interesting, as I'm now doing digital archiving of a bunch of photos and things that I inherited from my family that go back to the 1840s, but also into the 20s, 30s, 40s, and the vehicles that they drove then. We don't have heavy metal, chrome, cool-looking cars with fins anymore. Most cars, as my daughter says, oh, it's just a car. They just don't have any character to them. So we do think about them as transportation and not as a representation of our personality. One of the things I talk about with self-driving cars, and I think it's happened, I won't definitively say, but I believe it's happened, is where you get a bunch of people in trucks and Mustangs and they corner a smart car and say, try to figure out, reason your way out of this one, smart car. And they're doing all kinds of things to it so it can't figure out what to do. Those cars are part of those people's lives, and I'm not sure people connect with a Tesla or a Volt or anything else like that in the same way. It's not part of your soul. It's just a thing you take to the grocery store.

Don Finley:

That's the one thing that the car companies did well, and they did this early on: basically making the car part of your identity.

Daniel W Rasmus:

Yeah.

Don Finley:

And it's almost like this is a decoupling of that identity, moving into the self-driving nature, because you're less attached to it. There's a physical connection that gets created in our brain when we get behind the wheel of a car; the brain then behaves as if it is the size of the car. So when we get to self-driving, we're not going to have that inherent connection to it. If you can just get into a car and it takes you someplace, it's depersonalization. Along the lines of the AI and the meh, though, I'm just loving saying this right now. I think it's going to be one of those things that I continue to bring up. But at the same time, when you get to the AI and the meh: the impact of the car is that it basically has gotten us from point A to point B, right? It's a transportation accessory, and it's just done it. It did it so much better than the horse and buggy. If we take our interaction with technology today as the horse and buggy, what's the future of meh with AI?

Daniel W Rasmus:

So scenario planning is one of the things that I do, and one of the scenarios I have is the meh future, where that's where we live. There are other potential futures. Instead of just saying it's utopian or dystopian, scenario planning forces us, by understanding what the uncertainties are, to create rich narratives. I usually create four. You can create more, but what I call the industrial version of scenario planning usually ends up with a matrix of four scenarios. So I have the meh future. I also have a transactional future, where AI is used as the catalyst for the new flat world. Thomas Friedman's The World Is Flat, but instead of being top down, it's bottom up, and it's people using AI to negotiate deals in China where there are no big lawyers; they're just people saying, let's figure out how to make this work. And the AI is weaving things together for them. It's weaving APIs, it's weaving contracts, it's weaving customer service, and it's facilitating what I call a transactional world. Then there's the semi-utopian world. I don't like to create utopian worlds because they feel finished. This is an aspirational world that is on its way; I call it the great healing, and the world is trying to figure out how to solve the big problems that we've caused. Because, as you said, the horse and buggy was not as good as the automobile, but the automobile created a whole host of problems that the horse and buggy did not create, and we haven't figured out how to solve those yet. And then I have the 'for God or country' future, in which autocrats decide they're going to use AI and purposefully bias it to meet their needs. So you will get people who walk up to the AI and say, I want to know this, and it will give you the 'right' answer based on the ideology of the government or religion that has created it. You end up with a world of AI that's very ideologically bounded. And we can do that because the general engine, as much as we talk about it having guardrails and being safe and ethical, isn't by design safe or ethical. If you want to train it on ideological content, there's nothing stopping you from doing that except your own ethics. There's nothing built into the tools. And so we could end up with a world where autocracies are using AI to meet their own needs.
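
To make the mechanics concrete, here is a minimal sketch in Python of that industrial 2x2: two critical uncertainties, each with two poles, crossed to produce four named futures. The conversation names the four futures but not the axes that generate them, so the axis labels and quadrant assignments below are illustrative assumptions, not Daniel's actual matrix.

```python
from itertools import product

# Two critical uncertainties, each with two poles, crossed to form the
# classic 2x2 scenario matrix. These axes are stand-ins chosen for
# illustration; Daniel's published uncertainties may differ.
uncertainties = {
    "who shapes AI": ("open, pluralistic", "ideologically bounded"),
    "AI's impact": ("background tool", "transformative catalyst"),
}

# Assigning the four futures from the conversation to quadrants; the
# mapping itself is a guess, not taken from the episode.
scenario_names = {
    ("open, pluralistic", "background tool"): "the meh future",
    ("open, pluralistic", "transformative catalyst"): "the great healing",
    ("ideologically bounded", "background tool"): "for God or country",
    ("ideologically bounded", "transformative catalyst"): "the transactional world",
}

for poles in product(*uncertainties.values()):
    axes = ", ".join(f"{axis} = {pole}" for axis, pole in zip(uncertainties, poles))
    print(f"{scenario_names[poles]}  ({axes})")
```

The point of the structure is that each scenario is forced into existence by the cross of uncertainties, rather than hand-picked, which is what keeps the set from collapsing into just "utopian" and "dystopian."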

Don Finley:

Which is interesting, because I was having a conversation on just that the other week, where the Christians may want to have their own. It was specifically a conversation on AI as God, and phrasing the question that way was both very frightening to me and also so against where I stand, where I use my efforts, my time, my resources to create, that I said, I want to understand why you would go down this path. But there are certain organizations that see they have beliefs differing enough from, let's say, an OpenAI or a Microsoft that they want to make sure their beliefs are actually honored in an AI that they would be using. There are also, I would say, research programs and foundations going into the Amazon to help people digitize their stories and keep their language, so that it survives the test of time. But on your scenario planning, we've got the transactional, we've got the one on the path to utopian. And I do like that distinction that you

Daniel W Rasmus:

healing world,

Don Finley:

The healing world, yeah. You recognize that, hey, here's where we're trying to get to, but at the same time, there are all the ancillary effects that get kicked off of it that we then have to address as well. And there's the transactional as well. I have a friend who says an entrepreneur's goal is basically to create a vision and then point out all the places where that vision is coming true, and I thought, that's an interesting way to put it. Which of the scenarios have you been seeing the most points of evidence for, or are we still rather early in that?

Daniel W Rasmus:

Yeah, we're early. One of the foundational principles of scenario planning is that you watch everything, pay attention to the world around you. That's one of the things that I try to impart to my students. And all of them happen, and none of them happen. But at the same time, as you're watching things, you start thinking about the trajectories. This is not my first set of scenarios. When you have something like COVID, that collapses the scenarios, because one of the uncertainties, a global pandemic for instance, takes place. So much changed very quickly that any scenarios built before that, that didn't have a scenario for a global pandemic, are completely invalidated. So nothing matters. All of your assumptions about markets and people and technology and everything else just go out the window, and you have to then reset the scenarios. That's what keeps scenario planners in business: the world changes all the time. We try to do a 10-year horizon, and some do 50-year horizons. And if you do a 10-year horizon, you're hoping to just get five, because people aren't very good at thinking about the future. Actually, I think we are very good at it, but education has constrained us from allowing ourselves to do that, because we try to get people to think about immediate returns versus long-term thinking. I don't think colleges and universities teach long-term thinking as a discipline. I'm very happy that the University of Washington lets me impart that to my students when they take my class. I think that's one of the things scenario planning is really trying to do: make sure that we're watching what's happening. We create these things called early warning systems that look for various high-end uncertainties and ask, is this going to happen? And this year, with elections happening, various things could take place. There are things like quantum computing: if quantum computing becomes real, it will invalidate a lot of the security measures. Even if that's the only thing it does, it creates a collapse, because a lot of the assumptions that we built around security for the internet and life change pretty immediately, and so we'd have to do another reset if that happens. It's a constantly evolving set of stories, and right now I think any of those four futures continue to be very plausible.
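
As a rough sketch of what such an early warning system might look like in code: a watchlist of indicators, each tied to the scenarios it would point toward or the assumptions it would invalidate, reviewed periodically. The signals below are drawn from the conversation (AI regulation, ideological models, quantum computing); the structure and names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """One watched signal in a scenario early-warning system."""
    description: str
    supports: list            # scenarios this signal points toward
    invalidates: list = field(default_factory=list)  # assumptions it breaks
    fired: bool = False       # set True when the signal is observed

watchlist = [
    Indicator("Binding 'explain every AI recommendation' legislation passes",
              supports=["the great healing"]),
    Indicator("State actors deploy ideologically trained public models",
              supports=["for God or country"]),
    Indicator("Practical quantum computing breaks current cryptography",
              supports=[],
              invalidates=["internet security assumptions"]),
]

def review(watchlist):
    """Periodic review: report fired signals and any scenario reset needed."""
    for ind in watchlist:
        if not ind.fired:
            continue
        if ind.invalidates:
            print(f"RESET scenarios: '{ind.description}' breaks {ind.invalidates}")
        else:
            print(f"Signal toward {ind.supports}: {ind.description}")

watchlist[2].fired = True  # e.g., a quantum breakthrough is announced
review(watchlist)
```

The key design idea is that indicators invalidating shared assumptions (like COVID, or quantum breaking cryptography) trigger a full reset of the scenario set, while ordinary signals just shift weight toward one future.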

Don Finley:

It's good to hear that we haven't limited ourselves in the directions we can go with the technology. Give me some permission here to ask you a bad question.

Daniel W Rasmus:

Of course.

Don Finley:

Because I'm trying to go somewhere. What I'm trying to get after is the surprise elements of your scenario planning. Were there aspects that you found to be, let's say, pleasantly surprising in the scenarios, or unpleasantly surprising? And then additionally, I would love to know how we can impart more long-term thinking into our day-to-day lives, whether that is a possibility, that we could actually extend out our education on that.

Daniel W Rasmus:

So I find that all the time from an educational standpoint, and sorry if this sounds a little commercial at the moment. I do agile thinking workshops, which try to impart the idea of scenario thinking to people, embracing uncertainty. And then I do innovation workshops that take ideas from within an organization, a technology roadmap, for instance, and challenge them against the different futures, because most of the time people think about their planning in a very linear way. So I blow up the assumptions that are underneath their stuff and they go, oh my God, I never thought about that. One of the exercises we did at Microsoft once: we brought in a number of people who were using Office, Exchange, Outlook, et cetera, and we introduced them to the scenarios, and then we put them in a room and we said, it's 10 years in the future and you're now in a meeting; answer these questions. How did you get invited to the meeting? What are you using for side-channel conversations? What technology did you bring with you? Are you actually in the room with other people? Questions like that, underneath one set of scenarios. Then they would flip rooms, and I'd give them a whole other set of assumptions and say, now answer those questions again. The technology teams were then inspired by what these people were saying, in terms of what they were imagining they were using. They weren't completely constrained by the usual approach. Often when a technology company comes in, it's, here are our three prototypes for the thing, which one do you like better? Versus, there's nothing in the room, you're just asked what device you're bringing, and it's up to you to decide what the features are that you're using. Sometimes I feel like in technology we're doing magic tricks, forcing a card on somebody and saying, here's the answer, just pick that one, versus just letting people be creative. So yeah, there are lots of things. When I did the University System of Georgia scenarios, and this is interesting in hindsight, usually not at the moment, one of the things that we came up with there was legislative, because in scenarios we use social, technological, economic, environmental, and political uncertainties. One of the political uncertainties was what the regulatory frameworks were going to be for AI, and this is a decade ago. The answer in one of the scenarios was: we're going to make everybody explain the reasoning behind any recommendation that an AI makes to people. Amazon and Microsoft and everybody else threw their hands up and said, we're not going to do that; we're just going to stop making recommendations. That was the story they told. In the story, people then started whole new areas of knowledge management and learning and connection and customer service, where you were connecting with people and learning what they needed. The recommendation engines that now show up on Amazon were people; it was people having a conversation with you again about what you needed to know.
You started having concierge services online and other things that were very people-oriented, which then made the whole 'it's displacing my job' stuff in that future very different, because we were creating jobs out of a legislative thing that constrained technology. So that's a very interesting conversation that we might have at some point, because we don't know what the regulatory frameworks for AI are going to be. That still has possibilities going forward.

Don Finley:

It's actually a really interesting piece that you're bringing up here: using legislative activity to essentially create more jobs in certain areas because we're requiring good-use practices around the technology, especially with recommendation engines. Another project that came across our plate was an AI that would do underwriting, and I convinced the client not to move so heavily into that area, because with underwriting it's important to understand how you're making the decision and why, and depending on the implementation of the AI, you may not actually understand why you're approving this loan versus not approving the next one. So what are some of the opportunities we have in the scenarios for what the future of work would look like with AI? Are we going to be working less? Are we going to be working more?

Daniel W Rasmus:

Again, part of the premise of scenarios is that there are at least four different answers to that question, so I won't just ramble on. The transactional world we already talked about, right? AI is very much part of how you get things done. The meh world we've talked about a little bit, and there the future of work is that AI is just sitting in the background. That future is, I would say, not devolved per se, but stuck in its own moment, and it probably looks a lot like today, more than most of the other scenarios, because there's not a lot of forward movement in society or technology or anything else. We're just stuck in that future. In the great healing future, the future of work is very much about, and I just realized this the other day when I was doing some research on the current stuff, one of the things that AI is very good at, and I think that's where the healing piece comes in: it's very good at finding a lot of things that could happen and coming up with one that's really good. Think about the better motor that was developed the other day. There could have been tens of thousands of options for that motor, and it didn't matter that the AI churned out tens of thousands of things that were crappy, because it came up with one that was verifiable by engineers as an innovative design that could potentially change the world. It was fine that it did all the other stuff; it didn't matter. Now on the flip side, that same week McDonald's said, we're not going to use AI in our ordering because it's making mistakes. So it can do 10,000 things and come up with one really good thing, but it's not so good yet at doing the same thing every time perfectly. It's very poor at that. And so in the great healing, it's being assigned to big problems, and people are trying to use it for things where you're doing a lot of the 10,000 bad things to get to the one good thing, and you're trying to find the models and you're building data into it: protein folding and climate change and energy consumption, smaller batteries, all that kind of stuff. Throw that at it and you're doing these big problems. And of course, in the background, it may be doing some other stuff; I'm hoping. That's also a world that's rethinking an almost post-capitalistic society, where we're dealing with the distribution of wealth and the distribution of resources. Those are big problems. Again, if you ask AI, how do we make sure people have enough fresh water in the world, it's going to churn out a bunch of stuff that makes no sense, but it might come up with one answer that's okay, one that we didn't come up with. We use processing power to come up with a very unique solution that we would not have come up with, and it solves a problem for everybody. I don't know if it can do that. I think it can. And so I think that's that kind of world. And then of course, we already talked about the fourth one, which is the ideologues coding content, and that future of work is pretty much: do whatever the industrial state complex is telling you. That's what work is going to look like. You'll build the things that we think need to be built, or whatever.
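
The pattern Daniel describes, churning out ten thousand bad candidates to find one verifiably good one, is essentially generate-and-verify. Here is a minimal sketch, with a stand-in random "design" generator and a toy scoring function in place of real engineering verification; none of it models the actual motor result he mentions.

```python
import random

# Generate-and-verify: it's fine for the generator to produce thousands
# of bad candidates as long as an independent check can pick out the
# rare good one. The "design" here is just a random parameter vector
# scored by a toy efficiency function, a placeholder for real
# engineering verification.

def generate_candidate():
    # Stand-in generator: five arbitrary design parameters in [0, 1).
    return [random.uniform(0, 1) for _ in range(5)]

def verified_score(design):
    # Stand-in verifier: a toy score; real verification would be
    # simulation or expert review of the candidate.
    return sum(design) / len(design)

# Most of the 10,000 candidates will be mediocre; we only keep the best.
candidates = (generate_candidate() for _ in range(10_000))
best = max(candidates, key=verified_score)
print(f"best verified score: {verified_score(best):.3f}")
```

This also illustrates the McDonald's point in reverse: the loop is powerful exactly because only one output has to survive verification, whereas an order-taking system has to be right every single time.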

Don Finley:

I think that's my dystopian position.

Daniel W Rasmus:

And a lot of people's utopia. Unfortunately, it's also a lot of people's utopia. So that's the problem.

Don Finley:

Have you ever seen the show Black Mirror?

Daniel W Rasmus:

Oh yeah.

Don Finley:

Okay. I was always like, you know what, you could shoot all of these episodes and call it White Mirror and paint it as if it's somebody's utopian society. There are elements inside of it; it's our interaction that really changes our perception of what that society actually is. I'm playing with the idea that we have the four different scenarios and we're walking down many different paths. This is probably more of a question in scenario planning: we all have resources, time, money, some more than others, but at the same time we each have our own personal way of interacting with this. What are the choices that you can make today to move down the path that we personally see as the utopian sort of path, that healing journey?

Daniel W Rasmus:

So there are two aspects to that. One is on the enterprise side: for instance, I recommend to companies that they think about this in terms of what they lobby for. If they're going to put money into creating legislation and influencing votes, in Europe, in the United States, wherever, they should be doing the things that push in that direction, that align with the scenario they would like to see happen. Because, just as a brief aside, one of the things you want to do is push toward the future that you want, but use the other futures to avoid surprises. If something were to happen, you will have already practiced that future, and you can probably navigate it better than if you hadn't done scenario planning at all. So that's another aspect of it. On an individual level, when I do the agile thinking workshops, I give people a worksheet that says: now that you've seen these, how would you change your life? Where would you invest your money? What organizations would you join? Where would you volunteer? What would you spend more time doing? So that they have a sense. And we don't publish this; it's just a worksheet for them to do personally, for them to rethink, and hopefully make very personal: I thought the future was going to be this. It could be something else that I really like, and maybe I should get involved in helping make that future happen. And they get to very clearly have a visceral choice on a piece of paper, or on their tablet, that says, I'm going to do this thing based on reframing my belief system a little bit.

Don Finley:

Nice. Daniel, have I missed anything in our conversation that you wanted to bring up?

Daniel W Rasmus:

We didn't talk about the serendipity economy, but that's perhaps another conversation to have at some point in the future.

Don Finley:

I think we're going to have to come back to that one, because in all honesty, I loved talking about it with you pre-show, and we definitely got way into the weeds of scenario planning. But at the same time, I think it's a great lead-in for another episode, where we can talk about the serendipity economy and how it will play out differently based on the future of technology and that intersection with humanity. So I would highly recommend everybody get a head start and forward me any questions you would like to have for that show, because we'll be bringing Daniel back so that we can continue the conversation around the serendipity economy.

Daniel W Rasmus:

Yeah. Come visit Serious Insights and read the paper and be prepared.

Don Finley:

Oh, absolutely. So

Daniel W Rasmus:

There too.

Don Finley:

SeriousInsights.net is your website, and we can catch the paper there. And then we'll have to come back for the serendipity economy. Again, Daniel, thank you so much. I really appreciate the time that we've had today.

Daniel W Rasmus:

You're very welcome. Look forward to talking again soon.

Don Finley:

Absolutely. Thank you for tuning into The Human Code, sponsored by FINdustries, where we harness AI to elevate your business. By improving operational efficiency and accelerating growth, we turn opportunities into reality. Let FINdustries be your guide to AI mastery, making success inevitable. Explore how at FINdustries.co.
