Project Flux

Is the next level of AI already here?

August 21, 2024, Season 1 Episode 20


In this episode, hosts Yoshi and James discuss:

1️⃣ The next generation of AI and its advanced reasoning capabilities

2️⃣ AI tackling whole complex projects, not just tasks

3️⃣ The importance of emotional intelligence, both our weakness and our strength as humans


Chapters

00:00 Introduction: The Next Generation of AI

05:25 Advanced Reasoning and Multi-Step Thinking

07:12 The Shift to AI Leadership and the Role of Emotional Intelligence

10:21 AI's Ability to Answer Questions Humans Struggle to Define

13:30 The Gradual Progression to Level Two Reasoning

16:58 The Paradigm Shift in Thinking and the Importance of Emotional Intelligence

19:14 The Potential Pushback Against AI and the Role of Humans

22:19 The Temporal Orientation of Humans and AI

25:37 The Potential of AI to Overcome the Lack of Emotional Intelligence

28:17 The Role of AI Personality Designers and Human-Centric Data


Keywords

next generation AI, advanced reasoning, complex tasks, multi-step reasoning, long-term strategising, emotional intelligence, AI independence, human decision-making, AI personality, human-centric data


Takeaways

  • The next generation of AI is characterised by advanced reasoning capabilities, including the ability to tackle complex tasks over extended periods and engage in multi-step reasoning and long-term strategising.
  • Emotional intelligence is a key human strength and plays a crucial role in decision-making.
  • At the same time, AI could potentially help compensate for a lack of emotional intelligence in some individuals.
  • The balance between human and AI decision-making is important, with AI providing data-driven insights and humans providing emotional intelligence and subjective judgment.
  • AI personality designers and human-centric data can contribute to the development of AI systems that exhibit emotional intelligence and better understand human needs and preferences.
  • The ethical considerations of AI becoming more independent and the potential impact on decision-making in areas such as warfare and human rights need to be carefully addressed.

Subscribe to the Project Flux Newsletter!

All our podcasts commentate on the week's hottest news, brought to you by the Project Flux Newsletter!

Get a step ahead in the world of AI and tech: subscribe and get the weekly newsletter in your inbox every Monday!

Sign Up to Newsletter
Project Data Analytics Taskforce site

Transcript

Okay, five, four, three, two, one. Hi everyone and welcome back to the Project Flux podcast. Today you have me and James, and it's going to be slightly different this time because we're recording this ahead of schedule, as James is going away on holiday. So James, say hello, where are you going?

Yeah, it's my turn at last. I'm off to Rome, which I'm really excited about, so I'm flying out later today. But we wanted to make sure we kept the show on the road, as we say, so we thought we'd do a quick podcast ahead of time. We might miss a few big stories, although I don't think there's a huge amount that can happen between now and Monday. But that's famous last words. We'll see what happens.

Yeah, famous last words exactly, James. I mean, I'd imagine this episode being a lot more focused. In last week's podcast, or rather the one we recorded and released this week, we touched on Strawberry, and Strawberry is OpenAI's secret, and potentially most powerful, model. Why Strawberry is so important today is because it supposedly marks the next generation of LLMs and creates this huge step change in AI capability, getting us closer to AGI, a world where machines will be just as smart as humans. So in this podcast I'd like to talk a bit more about what the next generation of AI is, James, and what that next level of reasoning looks like.

Yeah, well, we're making quite a big assumption there, aren't we, that Strawberry is going to be that next level of reasoning. We spoke last week about whether it's GPT-5 or not, and only time will tell on that. But whether it is or isn't, I think, is kind of irrelevant, because if it isn't, it's only a matter of time before something gets to that level two. And when we talk about level two, that's the kind of artificial intelligence with human-level reasoning. They call it PhD-level reasoning. So if you imagine what percentage of the population has a PhD, I would imagine it's something like 0.01%. Then you're saying you've got artificial intelligence that's going to be in that top tier of human intelligence. Where we are currently is level one, which is what a lot of people are using: the ChatGPTs of this world, the Claudes, the Gemini models, they're all at level one. So that level two reasoning, I guess it's not necessarily going to be a big event, like, here we are, we're at level two. I would imagine it happens quite organically over a period of months. What will undoubtedly happen is somebody will release a model, and it might not even be OpenAI, it might be one of the others, because Elon Musk has just released Grok 2, and there'll be a realization that this is next-level reasoning. But these things come out in waves, not everyone gets them at the same time. So then there'll be a race, the competitors will have models to match it, and over a period of months you'll realize that we're at this next level of reasoning and people will start to be able to use it for different things. So I don't foresee some big news story saying we are now at level two. It will probably transpire over a period of time, is my guess.

Yeah, and it's a really interesting thing, because there are certain predictions saying when we're going to reach AGI. I think it was Ray Kurzweil saying within the next five years, Elon...
Well, he said that in his book, but don't forget that was published about 10 years ago; I think he talks about 2035. But I think about the latest one he's about to publish... so Ray Kurzweil is the guy who wrote The Singularity Is Near, and he's about to publish the follow-up, The Singularity Is Nearer. I believe he's bringing that timeline forward, but I'm not sure to when.

Exactly. And some are going even earlier, within the next couple of years, with Elon Musk and a few others. So even though we might not feel the change as overtly as we'd expect, we can definitely feel it bubbling, you know, and there'll be this new level of capability where we just start to use AI for more advanced things without even realizing we're using it for more advanced things, right? It'll become the default in terms of how we live our everyday lives. But the reason I mentioned Strawberry is because when I try to imagine the next generation of AI, it's quite difficult to think about what next-level reasoning actually is, you know, and what this next generation looks like. So looking at the Strawberry model and aligning it to the roadmap, which you touched on, James, in terms of level two and OpenAI's direction, I've picked up a few things on what that might look like. One thing they mention is the ability for AI to tackle complex tasks over extended periods. Right now, when you're using ChatGPT for some of these tasks, it has a really small context window: you ask it a question, it'll give you a response. What this next generation of AI might look like is more multi-step reasoning and long-term strategizing, and these are both hallmarks of human intelligence. It's what we do when we go to work: we strategize continuously and try to build on the insight we get so we can do better. So let's say you're working on a project. Rather than focusing on getting insight on a single task, imagine having an AI which can grasp the whole project end to end, providing insight and reasoning as you move through it. Maybe it even has the ability to think about how the project itself impacts the wider program, the portfolio or the business. So this will give AI the ability to tackle more complex phenomena, elevating its capability. That's not where we are with it currently, but it's where we might go. So what are your thoughts on that, James?

Well, you know, a lot of people talk about the current set of models as something that augments what we do, but you've always got to have the human in the loop. And I believe that at the moment, human plus algorithm works really, really well. We can't necessarily trust the outputs of the LLMs and what have you; we've still got to have a human in the loop. And some people say, well, you can treat the AI as another member of the team and refer to it for information and knowledge, but you've still got a human overriding that. I think what the next generation will do, bearing in mind this kind of PhD-level reasoning, is that it's very possible in most cases the AI will become the smartest person in the team. So rather than augmenting what we do, it starts leading what we do. And that's going to be a massive shift, because at the moment I think people are saying responsible, safe use of AI, absolutely, we've got to. But at what point does it become safer to trust the AI than to trust the humans?
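A minimal sketch of the contrast being discussed here: single-turn Q&A versus a multi-step reasoning loop that carries project context forward. The `ask_model` function and the prompts are hypothetical placeholders standing in for any LLM call, not a real API or how Strawberry actually works:

```python
# Hypothetical sketch: single-turn Q&A versus a multi-step reasoning loop.
# `ask_model` is a placeholder for any LLM call, not a real API.

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("swap in a real model client here")

# Level-one style: one question, one answer, no memory of the wider project.
# answer = ask_model("Is task 14 on the critical path?")

# Next-generation style (sketch): plan, work step by step, and carry
# intermediate conclusions forward across the whole project.
def run_project_reasoner(project_brief: str, max_steps: int = 10) -> list[str]:
    notes: list[str] = []  # running memory of conclusions so far
    plan = ask_model(f"Break this project into reasoning steps:\n{project_brief}")
    for step in plan.splitlines()[:max_steps]:
        conclusion = ask_model(
            f"Project brief: {project_brief}\n"
            f"Findings so far: {notes}\n"
            f"Reason through this step and state a conclusion: {step}"
        )
        notes.append(conclusion)  # each step builds on the last
    return notes
```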
So there's going to be a big cultural shift as well. There's going to be a big reluctance to let go, and a big reluctance to acknowledge that actually, even though it's just a member of the team, the AI model may well be your most reasonable and best-educated opinion. And how do you feel about that as a professional?

It's a really good question. I guess until we're in that situation, I don't know, because at the moment we've still got the override button, haven't we? We can still say, okay, I've used ChatGPT or an AI for that, but I'm going to override it and use my good old gut feel and good old experience. The point where I have to acknowledge that my gut feel isn't as good as the AI's means that, not from a technical point of view but from a personal point of view, I'm going to have to acknowledge that even though I've had years of experience, I'm second fiddle to the AI. I think there's still a way for the two to join up quite well, because there are still going to be things that humans can do that the AI can't. And that's all the things around emotional intelligence, all the things around the people part of it, having those conversations.

Yeah. I've got a good question for you there, then, on what you just mentioned. You're talking from a position whereby you're quite senior; you say yourself you're involved with the taskforce, you're the chair of the taskforce, the RSS, et cetera. Do you think that perception of, you know, letting AI take the wheel will be different for those who are early in their career versus those who are later in their career?

Definitely. I think for people coming in as AI natives, if you like, using it at school, it will just be second nature. They won't feel insulted or feel like their ego is being threatened, because they'll understand that, right, the AI is good at these things and I'm good at these things. So I'll let the AI do the things it's good at and I'll do the things that I'm good at. And that might be the more creative stuff, because I still think one of the things human intelligence has that's going to take a long time for the AI to really get to grips with is creativity. The very basis of AI is that it's trained on past material, so it's only ever going to be as good as what it's trained on, whereas what human beings are exceptionally good at is coming up with things out of thin air, tapping into that universal consciousness. Where does that come from, when you sit there and have that eureka moment and come up with an idea? I don't know.

Yeah, and let's stick with that. So advanced reasoning is one of the components which will symbolize this next level of capability in AI, right? And the way I look at that is advanced abstract reasoning: the ability for the AI to make an accurate and reliable inference without much context. Like you said, humans have a gut feel, this thing which is kind of intangible that helps us make an inference without much context. And that can often be right and correct.

And it can also be wrong, right? More often than you think.
And again, it's relative, it's subjective. But if we can get AI to achieve that, and if that is something we might see within Strawberry or these newer tools, that might be something we see as genuinely groundbreaking. If you look further into it, that would mean AI will then have the ability to do independent hypothesis testing, which we know is the approach we take to scientific discovery, right? And this is why this sort of stuff is so important to us: if it can do this end to end, thinking through a process with these internal dialogues, or the multi-step thinking you mentioned, that's where we start to see an upgrade in capability. But going back to what you mentioned, James, how does AI typically learn? It gets fed information, right? It's trained on historical information or data, there's some calculation in between, and it gives you an output based on what it's learned.

Now, forget levels one, two and three; the roadmap very much talks about level four, AI that does get to the point of being able to come up with creative ideas out of nowhere, and that's when I think all bets are off.

And I would argue that we're starting to see this now, and I think it manifests in how we're training some of these AI models. Strawberry itself, the reason there's a lot of hype around it, is because it is a form of self-trained AI. This means it's not supervised by humans: it can train itself through pattern analysis and draw its own conclusions using data that is not guided by humans. So think how useful this will be when AI can solve problems that we humans struggle to define. Right now we're putting information into an AI to solve issues which we can define as humans. If the AI is training itself, that would mean it can think and draw conclusions about problems that we haven't been able to define. Do you know what I mean?

That's the promise, isn't it? That's the great promise with AI. I mean, you hear about people, rightly, very concerned about the damage the data centers and the compute power are doing to the environment and the sustainability agenda. But the counter-argument is that when it gets good enough, it's going to be able to come up with solutions to those problems. So that's where we need it to go, frankly, because otherwise it's unsustainable.

So there's a question for you then, because this symbolizes to me a shift in the level of AI agency, or independence. What are your thoughts on that, on AI being independent, not needing humans, as some might put it?

Yeah, so we get into Matrix territory, don't we? And Terminator territory, where we've got AIs running around making decisions. You know, that's not really a tech question, that's a philosophical question, and we could get very, very deep on that and speak about it for a long time. I think governments and institutions need to tackle that question, because it's coming around faster than they think. And I've said it quite a few times: we can argue until the cows come home about whether we're going to reach that point in three years' time or 20 years' time. It doesn't matter, does it? It's going to be within that timescale, and that's still soon. Even if you take the worst case, like 20 years, that's still soon.
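A rough illustration of the self-training idea mentioned above: in self-supervised learning, the labels are derived from the data itself (here, each word's "label" is simply the word that follows it), so no human annotation is involved. This is a generic toy of the principle, not a description of how Strawberry actually works:

```python
# Toy self-supervised learner: the training signal comes from the data
# itself (each word's "label" is the word that follows it), so no human
# labelling is needed.
from collections import Counter, defaultdict

corpus = "the project finished on time because the team planned ahead".split()

# Build (context, target) pairs automatically from the raw text.
pairs = [(corpus[i], corpus[i + 1]) for i in range(len(corpus) - 1)]

# A toy "model": count how often each word follows each context word.
model: defaultdict[str, Counter] = defaultdict(Counter)
for context, target in pairs:
    model[context][target] += 1

def predict_next(word: str) -> str:
    """Predict the most likely next word from patterns found in the data."""
    return model[word].most_common(1)[0][0] if model[word] else "<unknown>"

print(predict_next("the"))  # learned purely from patterns, no human labels
```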
We'll still be on this planet, and our children will still be on this planet. So it's something we've got to get our heads around, and what I worry about is people just kicking the can down the road instead of getting to the bottom of it sooner rather than later, because there's got to be a halfway house. I don't think we ever want to get to the point where we just hand off the entire decision-making process to AI. Can you imagine that in warfare, or, well, just letting AI make decisions based on what's best for the planet? We've heard the analogy before: if you tell an AI that its number one objective is the good of the planet, it might make the decision that the best thing for the planet is to get rid of humanity. So you've got to be so, so careful. So I think there needs to be a halfway house where we allow AI to make decisions up to a certain point, but there have to be some thresholds beyond which humans have to be in the loop. And that's things around warfare, things around human rights, things around health and medical issues. But I don't know where that line is. We need to get to the bottom of that as a species, I think.

Yeah. And you know, I quite like the philosophical conversations in and around AI. When I think of self-training AI and look at some of the benefits, the things I turn my attention to are things like self-driving cars, you know, Tesla's robotaxis and all these different things. Because there's the ability to self-train and learn through these quick feedback loops, without the time-consuming work of humans having to prep data. You know, when me and you were developing some of our GPTs on our end, we had to process data, prep it, label it. The AI gains the ability to learn continuously, because it can think for itself and draw its own conclusions. And like you mentioned, it can answer questions which humans struggle to define, and it can think through these things really quickly, which makes it much safer in the context of things like self-driving cars.

But touching on what you mentioned, James: if AI can train without humans, and it can answer questions which we struggle to define, then it's got its own method of questioning to some extent. For me, that means there'll be some level of divergent thinking between humans and AI. And I think that's something which can be quite scary, especially because when we start to plan for AI in our governments, and in the private and public sector, we have a strategy and a mind map and a roadmap, right? We're building infrastructure, we're finding new ways to develop AI. But we might need to change all of that if the AI is telling us to think differently. So this whole paradigm shift in our way of thinking is going to be really, really interesting.

Yeah, and quicker isn't always better. We've seen in other industries where there's been a pushback against technology, and we still hanker for traditional ways of doing things. Take the music industry: we've moved to streaming, but we've had this massive resurgence in vinyl. I mean, the number of people who still insist on reading books... I'm one of those, because I see it as a break from technology. And there are still a lot of people who like to do things in a traditional way.
So what I can definitely foresee happening, and I think I've talked about it before, is a kind of movement against AI. We're in the middle of an AI hype at the moment, but when that dies down, there might be a marketing spin that some companies use, particularly consultancies. You could see it in the project delivery space, where some companies brand themselves as "without AI": we don't use AI, we are humans. You know, we know that, for instance, vinyl is not as good. It breaks, it's clunky, and there's a lot of debate about sound quality, but it definitely picks up static and buzz and all those other things. Yet people still prefer that to very, very high-quality streamed music, because they want something tangible. So I think we're going to see a pushback at some point. I don't know when, but there will be one.

It's interesting you mentioned consultancies, because as AI's overall general capability improves, it will be able to answer not just these narrow-focus questions but things which are much broader. It will be able to understand complex concepts and phenomena, the mental models and theories that we as humans have of the world, and it'll be able to deliver against that. So when we look at consultants, especially management consultants giving you information about how to run your business and strategize, there will be this level of threat, because AI can do it for you. I guess for them there'll be a transition towards advising on how companies can leverage AI, as opposed to telling these companies what to do themselves.

And if everyone's got access to that, then how do you differentiate yourself? If we've got AI at that level and every company in the world is able to use that kind of strategy, then everyone's equal and there's no way of separating people. So the separation might be: actually, we need the human input here. So it goes full circle.

It goes full circle, yeah. So that looks at the next generation of AI and what we can expect in the future. And like I said, it's not going to be too far in the future that we start to see some of these things. I think thinking this through gives us more of an idea; it brings clarity to what we can expect in the next couple of years. So, yesterday I had a thought of a question. It wasn't really tied to AI, but thinking again today, it can be loosely tied back to AI, and I thought it was a question I'd ask you, James, as someone who thinks quite introspectively. The question is: do you think you spend more time thinking about your past, or more time thinking about yourself in the future?

Well, okay. I'd love to see how you're going to link this into AI and project delivery, but still. We all live in the past or the future most of the time; very few of us live in the present. We are either looking back or we're planning forward. I would imagine I spend probably an equal amount of time on each. One thing I do try to do myself is ground myself in the present as much as possible through techniques like meditation, because you've got to understand that both the past and the future don't exist. The past is just a memory and the future hasn't happened yet. So you're just playing out stories in your mind. And I think it's a real issue.
It's what causes the most stress, right? If you're playing back over past things you can't change, or worrying about things in the future that haven't happened yet, you're probably catastrophizing about something that's never actually going to happen. So, I mean, I'm like you, Yoshi, I love planning, probably a bit too much, so I'd probably say I spend too much time in the future. But I also spend, like I'm sure the rest of the people listening, way too much time thinking about that stupid thing I said yesterday. And I'll probably think about some of the stupid things I said in this podcast after this one. So yeah, equally. I think the important thing is it doesn't really matter whether it's past or future; it's about learning to live in the present as much as possible.

Yeah. And I have a similar approach to you, you know, trying to be more present. And I'll show you how I tie this back into AI developments, right? So with humans, we're looking at past versus future thought. We often reflect on the past to learn from experiences, or we imagine the future to set goals and prepare for uncertainty. This balance between past reflection and future planning is crucial for personal growth, decision-making and survival. It highlights our temporal orientation, right? Whether we're more anchored in what's happened or in what might happen. Similarly, AI systems operate with both past data and future predictions. Traditional AI models are heavily past-orientated at the moment: they rely on historical data to inform decisions, and they analyze patterns from previous information to make sense of the present. This abstract level of reasoning we're all talking about, it's all formulated on past data. Now, the reason I find this important is because the next generation of AI, like we mentioned with Strawberry and its self-training methods, is increasingly future-focused. These systems are designed to anticipate and strategize over extended periods, making decisions that are not just reactive but proactive. Do you know what I mean?

It's still trained on past data, though. I mean, a lot of it is still using past data to...

Yeah, yeah, I guess it's past data, but it learns quickly. In terms of where it's pulling information from, it'll be learning almost instantly. When you've got all these different senses that it has, it's the ability to make decisions from that really, really quickly. So I think there are definitely parallels with human cognition and where that balanced focus lies. And then I was thinking a bit more about that, in terms of myself and how I think. AI thinks back and uses information to plan ahead, and we do the same, except as humans we don't just look back to make decisions about the future. We also look back because of things like nostalgia. And I think this is something AI doesn't have, and where that's important for planning is that it opens us up to more biased thoughts. We look back, we live in nostalgia, it influences our memories and it changes the way we think forward, which means it can be a flaw, right? It can be a bad thing. And that shows why I was asking you earlier: would you let AI take the seat in terms of our planning?
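A toy sketch of the temporal point above: even a "future-focused" system is typically extrapolating patterns from past data. The numbers are invented and purely illustrative:

```python
# Toy sketch: a past-orientated model summarises history; a future-focused
# one extrapolates that same history forward. Numbers are made up.

history = [10.0, 12.0, 13.0, 15.0, 18.0]  # e.g. weekly progress figures

def describe_past(data: list[float]) -> float:
    """Past-orientated: report the average of what already happened."""
    return sum(data) / len(data)

def forecast_next(data: list[float], horizon: int = 3) -> list[float]:
    """Future-focused, but still built on past data: extend the latest trend."""
    trend = data[-1] - data[-2]  # last observed change
    return [data[-1] + trend * (i + 1) for i in range(horizon)]

print(describe_past(history))   # 13.6
print(forecast_next(history))   # [21.0, 24.0, 27.0] -- projections, not facts
```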
Yeah, but humans are really bad at this, because it's not just past data that we use. It's gut feel, and then it's groupthink; it's people not wanting to raise their head above the parapet. There are loads of human traits that lead to really bad decisions. I think the worst one, and I know we touched on this last week, is this assumption that our gut feel is somehow really, really good, and that if you've got loads of experience, somehow that counts above any kind of statistical analysis. And we're really, really bad at that as a species, you know.

Which is why, if you were asking me that question back, I'd probably say that for certain things there would be a point where I'd let the AI take the seat, because it hasn't got that level of subjective thinking; it isn't as emotionally biased as we are. But what I would say is that that very weakness of having emotion is also our strength, right? Because sometimes you need to override the decision precisely because of it. Let's take it in a project context. Let's say something's happened, there's been an event on the project, say someone's been run over or something. You could use all the data in the world, and the data will look at the objectives and say there's no reason to shut the site down, because we've still got to keep to our objectives. But how do you quantify that human element which says it's just the wrong thing to do, we need to shut the site down, because it's going to look really bad, or because we've got to think about people's feelings? And that's the bit where I think it's going to be quite a while before AI is better than humans.

Yes. And this is why, in this episode today, looking at the next generation of AI, it got me thinking: even if AI can do all of this stuff, with this advanced reasoning, while ours is tainted with more subjective thoughts and emotion, this is where I feel humans have an advantage. Even though it can be a weakness, emotional intelligence is so powerful, and it's what makes us human; it's the very essence of life. So within our daily lives, but also within a workplace environment, emotional intelligence is going to be key, because people say that personality is driven by how you feel, how you think and how you act, and more importantly, in humans, how you feel drives how we think and how we act. Now, if you think about how an AI thinks, in terms of an AI personality, AI thinks and acts in the absence of feeling. So when I asked you earlier whether AI will be divergent in its thinking because it doesn't need humans, I think that's why there'll be a level of divergence: it doesn't think through that level of empathy, it hasn't got that feeling to drive its thoughts. So in terms of where we place ourselves as humans, I think if we build our emotional intelligence and just be human, I don't think we need to worry about our position in the world, in our daily lives or at work. I think we still have that advantage.

There are a lot of humans who haven't got much emotional intelligence, though.
So it depends who you're comparing against. It might be that we can train the AI in what it means to ask the right questions in terms of emotional intelligence and those human factors, and then the AI would end up being better than some humans. It's one of the big problems in the workplace, actually, that not enough people have got that emotional intelligence. And how do you train a human to have that skill? It's really, really hard, because if you fake it, it becomes quite obvious. So, sort of countering what I've just said, I'm going full circle. It might be that we get to the point where an AI can do it, especially when we start to talk about voice, because voice is so important with human interaction. We're moving to this voice input, where you've got an AI that really does... I know you wanted to touch on the Friend thing, and I think you're going to talk about the newsletter, but you've got something that talks to you in a way that has been trained to have emotional intelligence. I think I might have changed my mind: maybe the AI can actually help overcome the lack of emotional intelligence in a lot of human beings.

Yeah. And this is also interesting, because the other day I found out about a new job role which is up and coming: AI personality designer. And similarly to that, we went to a talk recently with Martin Paver and Donnie MacNicol, and Donnie looks at human-centric data and how that plays its part within managing projects. In the session I asked Donnie and said, you know, Donnie, I looked at the Figure 02 robot the other day, and it's been deployed in a production environment at BMW; there's a high likelihood that might be on sites in the next few years. How do you then think the dynamic of human data will be important on site? So I think this idea of human intelligence, and trying to mimic it, is going to be really important going forward.

Massive questions, Yoshi, and no full answers, but as always it's going to be interesting to see how it plays out over the next few months.

Excellent. Well, I think we'll wrap up there. That was quite a deep, profound conversation. When you let us loose without some of these news articles to go through, James, we go off on massive tangents, but you know, good conversation. And yeah, anything to wrap up?

No, I think it was good doing something slightly different, because we are recording this ahead of time. So we haven't gone through the big stories, but the big stories will be available in the newsletter, which we'll put in the show notes, and then we'll be back to normal next week. Next week we're going to have Antony Slumbers on, and we're really looking forward to that conversation with him. And then, believe it or not, we're getting back towards the new school term in September and heading towards the end of the year. So it's been really, really interesting. And just to thank all the listeners, really, because we're going to be hitting a big milestone over the course of the next week or so in terms of the number of people who listen to and subscribe to this podcast, which is beyond our wildest imagination. You know, when we started this, it was just a chance for me and Yoshi to record our chats, really.
And we set ourselves what we thought were ambitious goals in terms of listener numbers, but we've absolutely smashed them. So a big milestone coming up for us; we'll have a little celebration. And just to thank you all: please share the podcast, please share the newsletter, because that's why we do it. We just want to be givers and share all the knowledge and information that we find with people in the project delivery profession. So yeah, that's it. See you all next week. Anything else from you, Yoshi?

No, I think that's a perfect close. Thank you guys for listening and we'll catch you again next week. Bye everyone.