Preparing for AI: The AI Podcast for Everybody

THE SECRET AI TAKEOVER: The terrifying reality, are you already a digital slave?

August 24, 2024 Matt Cartwright & Jimmy Rhodes Season 2 Episode 11

Send us a text

Can today's search engines do more harm than good? With Google's RankBrain, BERT, and the new Search Generative Experience (SGE), we investigate whether these AI advancements truly enhance our search experiences or if they add to the clutter. Discover a comparison with Perplexity, a search engine that promises more intuitive results, and hear our take on how these changes impact everyday users, content creators, and businesses heavily reliant on search engine optimization.

What happens when AI takes on the role of a scientific peer reviewer? We delve into groundbreaking AI systems producing research papers and nearly matching human accuracy in automated peer reviews. Yet, we question whether these systems can maintain scientific rigor and independence. We explore the ethical challenges of aligning AI with human values and the unique issues that arise when AI personalities simulate independent reviews.

Imagine a world where AI not only diagnoses diseases but also predicts wildfires and weather changes. From early lung cancer detection at the Mayo Clinic to predicting fire hotspots with the "NOBURN" app, discover how AI is revolutionizing healthcare and environmental monitoring. As we examine AI spending trends and job impacts, we also forecast the future of AI development, offering a cautious yet hopeful glimpse into what's next for these transformative technologies. Join us for a thought-provoking conversation on AI's growing influence across various fields and what it means for our future.

Matt Cartwright:

Welcome to Preparing for AI, the AI podcast for everybody. With your hosts, Jimmy Rhodes and me, Matt Cartwright, we explore the human and social impacts of AI, looking at the impact on jobs, AI and sustainability and, most importantly, the urgent need for the safe development of AI, governance and alignment.

Matt Cartwright:

Make yourself at home. We got diesel or some of that homegrown. Sit back in your throne, turn off your phone, because this is our zone. Videos, televisions, 64s, PlayStations. Way up a Henry with precision, few herbs and a bit of Benson, but don't forget the Rizzler. Welcome to Preparing for AI with me, Matt Cartwright, and me, Bimmy Jodes. Well, today we promised you a normal episode after last week's car crash. Did we promise that? Well, I promised it.

Jimmy Rhodes:

Okay.

Matt Cartwright:

You didn't promise it, but you agreed. So we're going to talk today about how AI is creeping into all of our lives. So these will be kind of examples, some good, some bad, about how AI is taking over the world already and how you guys just haven't noticed it. So let's start off with search engines. I think this is kind of a boring one, I guess, but I think it's one of the examples where, you know, it does impact people's lives, or it does impact the way that people live daily. Maybe "affects people's daily lives" is a bit of an overstatement. But you know, everyone uses search engines, and so it's something that I think does affect people.

Matt Cartwright:

And when I kind of researched this, there was quite a lot of stuff about how Google has changed their algorithm to use something called RankBrain. Uh, I think that's "rank" in a good way rather than a bad way. And BERT, which utilizes AI to analyze the nuances of language. That's what they've called it. I mean, it sounds like something they'd call it, right? I mean, they used Bard before they renamed it Gemini. That's true, I guess.

Jimmy Rhodes:

BERT might. Sounds like a...

Matt Cartwright:

Muppet. Well, it is a Muppet. Maybe it stands for something. I'm guessing it does. I'm not going to try and guess it now, but anyway, they said that this provides users with information that aligns closer to their needs rather than just matching keywords. And then there was a kind of generative AI in Google search, the Search Generative Experience, their SGE, which I'd never heard of before, which apparently allows conversational and contextually relevant answers directly within search results. And that evolution, as they call it, transforms traditional search practices, requiring content creators to focus on comprehensive and authoritative content that meets both user and AI criteria for relevance. I think you've got an opinion on Google.

Jimmy Rhodes:

If their definition of relevance is that I have to scroll further to find what I'm looking for on Google... Because I was using Google today and this exact thing came up, and I don't think it's necessarily related to AI, but why is there so much junk at the top of a Google search nowadays? Yes, it's awful, isn't it? It's awful. It's gone from being... I mean, I remember when Google came out, I'm that old, um, and the brilliance of it was the super clean interface and the fact that it just gave you what you wanted, and I think they even had the little timer on there. It was like 0.01 seconds or something for a search and then, bang, you just had the information. Now it's like sifting through, uh, like, you know, a junkyard. Yeah.

Matt Cartwright:

I was sticking with Ask Jeeves for quite a few years when Google came out, so yeah, I wasn't an early adopter. But, as you know, I am an early adopter of Perplexity.

Matt Cartwright:

Um, and I think, you know, Perplexity, for those that haven't listened to all of our podcasts, we have talked about it before. Perplexity is kind of an AI-assisted search, so I mean it's far superior to Google. You can ask it questions in a normal, natural kind of way, and it will not only give you answers, but it will also kind of help prompt you for how you can ask it for further information. It will link its sources for you. I mean, it's far superior.

Matt Cartwright:

The thing with Perplexity is that it is AI search, so you know that you're using something different. Whereas where I think there is a change, and whether we think Google are doing it well or not, is that they have changed the way that they do it, and so AI is having an effect. And I was thinking back to an episode we did a while ago. We talked about how, you know, these changes don't just affect users; it's content creators and businesses who rely on SEO, you know, search engine optimization, and it has an effect on them, because the way that it picks up results is different to the way that it used to do it.

Jimmy Rhodes:

Perplexity is great, but can I be honest? They need a better name. The name is crap. Why?

Matt Cartwright:

Because it's too difficult to spell.

Jimmy Rhodes:

Well, it literally means to be perplexed. It doesn't sound like a search engine. It doesn't sound like something that's going to make things clearer, and it's quite difficult to say as well. I'll be honest, it is quite difficult to spell, which is not ideal for trying to type it into a URL as well, is it? I think if anyone from Perplexity is listening... Jimmy's perplexed... or you know someone from Perplexity, then I think they need a better name for their search engine. Otherwise it's fantastic.

Matt Cartwright:

I do agree. What about OpenAI's search? I forget the name of it. Is it called OpenAI Search? SearchGPT. SearchGPT, unsurprisingly. Um.

Jimmy Rhodes:

So it just came out very recently. I haven't actually used it myself. I've seen a few people testing it. It sounds like just ChatGPT with a few bolt-ons. It's crap at the moment. I'm sure it'll get better, but from what I've seen online, Perplexity is a lot better. ChatGPT kind of just spams you with images even when your search doesn't really relate to image search, and the initial output gives you a search-related output, which is similar to what Google have incorporated into Google search, but it sounds like it's basically exactly the same output you'd get from ChatGPT. So in its current iteration it's not as good, but I'm sure it will, uh, improve quite quickly. Okay.

Matt Cartwright:

Well, even though you don't like the name, I mean, my research... It doesn't mean I don't like it. No, for Perplexity, not ChatGPT.

Jimmy Rhodes:

No, I know.

Matt Cartwright:

ChatGPT. We just don't like it full stop.

Jimmy Rhodes:

Well, yeah, we don't like Sam Altman. Yeah, you don't like Sam Altman, yeah. I think, he's all right actually.

Matt Cartwright:

Well... yeah, I was just going to say, this episode, the kind of research (usually we do a mix of kinds of research on this show) was done with Perplexity, because I just find that it's really easy to use for this kind of, you know, well, for searching for information. I just think at the moment it's the best tool. I think you're right that it might not stay that way. I mean, I also think, you know, Google will get their act together, because at some point... I still think there's a good chance. They, you know, they're number one.

Jimmy Rhodes:

They'll probably just buy Perplexity.

Matt Cartwright:

Yeah, true.

Jimmy Rhodes:

Could you, sorry, just on the point of why Perplexity is good, could you elaborate a little bit on that for our listeners?

Matt Cartwright:

So yeah, I mean, if you listened to the top 10 apps for summer episode, you'll have already heard this, but for me it's the way that you get an answer to a question with the sources linked to it, so you're not getting a long list of sources. I mean, I guess this is also a flaw in the basic function, that you're only getting a limited number of sources. But rather than just listing your sources and then you having to dig into them, you can use it to ask a question, and, you know, it will then also help you to ask follow-up questions to get better information out of it, and it will give you the links in a really, really easy way. Just at the top it will show you the names of the articles that it's pulled the information from, and it will allow you to click into those and follow the links.

Matt Cartwright:

It's also got the Pro function, and you get, I think, five Pro searches a day for free, even if you're not on the paid plan. And that Pro function, I mean, rather than me explaining, I think if you want to use it, have a look, because it will give you a better explanation than I can do now. But it will allow you to do a much more refined search and to get much more detailed results. If you're doing, you know, academic work, or not necessarily academic work but sort of research for jobs, it's really, really good at being able to get that information.

Jimmy Rhodes:

As a sort of disclaimer, do you pay for it, out of interest?

Matt Cartwright:

I don't pay for it, because I don't find at the moment that I would need to do more than five. I think when I'm writing a research project for a master's, which I will be in a few months' time, I probably will pay for it, because I think at that point I may well be wanting to use that Pro function. But to be honest, I use it now most days and I barely use the Pro function. That's because the nature of what I'm doing is generally asking a question that I would ask of a search engine or of another large language model.

Matt Cartwright:

Yeah, yeah, and I'm not maximizing the use of it, because, frankly, for these things I just don't need to.

Jimmy Rhodes:

Yeah, and I only asked the question because I think most people are unlikely to pay for the use of a search engine when they've been effectively free forever, like, since they've existed.

Matt Cartwright:

Yeah, um, so, you know, with the Pro functionality you do get more. With the Pro functionality you get access to GPT-4o and Claude 3.5 Sonnet, in terms of, like, you can choose what you're using in the background. So when I say it's just a search function, it's a search engine or search functionality, but it has a lot of the functionality of a large language model as well, so you can do other things with it. So would I pay for that instead of something else? Possibly. I think, like, if it gave me enough of the functionality around things like coding and, you know, the things that Claude 3.5 or ChatGPT-4o could do, I might pay for it. But I also think, you know, long-term, do you think we'll be paying for use of large language models? I'm not sure.

Jimmy Rhodes:

I think it's not how they're going to make money anyway. How much is it out of interest?

Matt Cartwright:

I don't know, but I think it's probably twenty dollars, because they're all twenty dollars. Twenty dollars a month.

Jimmy Rhodes:

So there you have it. Uh, Perplexity, if you want to reach out to us, we recommend you change your name and sponsor us.

Matt Cartwright:

So the next one that we're going to talk about, I think this is a really interesting one. I've put it in the category of science, but actually it's specifically to talk about this thing called the AI Scientist, from Sakana AI. They kind of label themselves as being focused on developing nature-inspired AI technologies, but they've got this one particular thing which aims to automate the entire scientific research process. It's a generative AI model, and it's a collaboration with researchers from the University of Oxford and the University of British Columbia. It can independently generate research ideas, it can write code, it can conduct experiments, it can analyse results and it can produce complete scientific papers. And I'm not sure if it can do this yet, but it apparently is going to be able to then peer review its own papers, which sounds to me like a bit of a contradiction; surely a peer review is not done by yourself, but I guess if there are various agentic models, then it might pass the test. But it uses large language models to mimic the whole scientific discovery process, and it uses machine learning. It has novel techniques for diffusion models and transformer models, and it can produce a research paper, apparently, for $15, with the outputs reportedly meeting...

Matt Cartwright:

I don't know whether this is just meeting, or how far over the minimum level they are, but meeting the acceptable criteria of top machine learning conferences, whatever that means. So, a little bit more of the kind of technical thing. It operates in a continuous loop, which means it refines its research through an automated peer review process (this is the bit I am a little bit unsure about) that evaluates its papers with apparently near-human accuracy. So, you know, it shows a lot of promise.

Matt Cartwright:

I think it's currently limited to very specific research areas, and it obviously requires, you know, existing code bases as starting points. It can't start things from scratch, but it's got the potential for, you know, accelerating scientific discovery. And the challenge here, I think, will be ensuring that, you know, it's an AI system, but you make sure that it aligns with human values and ethical standards. So I think there are a lot of question marks, but potentially this is a wow, and this exists, right. This is something that's been really buzzing on Twitter, which neither of us use, but that's what we've heard.
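For anyone curious what that kind of continuous loop might look like in code, here is a minimal sketch of the general idea only. It is not Sakana AI's actual system, and the helper functions (generate_idea, run_experiment, write_paper, review_paper) are made-up stand-ins for what would really be calls to a large language model and to real experiment code.

# Minimal, illustrative sketch of an "idea -> experiment -> paper -> review -> refine" loop.
# All helpers are stubs so the control flow runs on its own; a real system would call an LLM.

def generate_idea(topic: str) -> str:
    return f"Study the effect of learning-rate warmup on {topic}"

def run_experiment(idea: str) -> dict:
    return {"idea": idea, "metric": 0.72}  # stand-in for real experimental results

def write_paper(results: dict) -> str:
    return f"Paper: {results['idea']} (metric={results['metric']})"

def review_paper(paper: str) -> float:
    return 6.5  # a real system would prompt a reviewer persona and parse its score

def ai_scientist_loop(topic: str, accept_threshold: float = 7.0, max_rounds: int = 3) -> str:
    idea = generate_idea(topic)
    paper = ""
    for _ in range(max_rounds):
        results = run_experiment(idea)
        paper = write_paper(results)
        if review_paper(paper) >= accept_threshold:  # automated "peer review" gate
            break
        idea += " (revised after reviewer feedback)"  # refine and try again
    return paper

print(ai_scientist_loop("small transformer models"))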

Jimmy Rhodes:

I thought this was incredible, and it makes a lot of sense to me as well, because I've been following agentic models. So, um, this makes a lot of sense to me, especially the peer review thing, because with agentic models, what you do is you have several different large language models that are all told to act a different way, so they're all based on the same model but given different instructions. Um, the example that's used with coding is: you have an agent that is giving out instructions, you have another agent that's responsible for coding, and you have a third agent that's responsible for testing and then evaluating that code. And so it seems like a very similar thing to me, where you basically isolate the different parts of the problem and then you say, okay... I mean, it's the same language model, but it's effectively several different personas, which is how these agentic models work. And so one of them has the instruction:

Jimmy Rhodes:

"You will only write code. You will take instruction from the director." And then the director's giving out the instructions and directing, you know, directing the coding model to do the coding and then directing the testing model to do the testing. And then the testing LLM is given the instruction that it never writes any code but only ever tests, so it only ever writes test scripts, and this kind of thing. And so this peer review thing, to me, sounds like exactly the same thing, where basically you're giving a model several different personas: one is the one that writes the paper, one is the one that does the experiments, one is the one that does the peer review process, et cetera, et cetera. So it kind of does make sense to me.
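To make the persona idea concrete, here is a minimal sketch of the director / coder / tester pattern Jimmy describes; the same shape maps onto writer / experimenter / reviewer for a paper. It is illustrative only: call_llm is a hypothetical stand-in for whichever chat-model API you actually use, and the stopping rule is deliberately crude.

# One underlying model, several system prompts: each "persona" only does its own job.

def call_llm(system_prompt: str, message: str) -> str:
    # Hypothetical wrapper; in practice this would call OpenAI, Anthropic, a local model, etc.
    return f"[{system_prompt[:20]}...] reply to: {message[:40]}"

PERSONAS = {
    "director": "You only plan. Break the task into instructions for the coder.",
    "coder": "You only write code. Follow the director's instructions exactly.",
    "tester": "You never write production code. You only write tests and report pass/fail.",
}

def run_pipeline(task: str, max_rounds: int = 3) -> str:
    plan = call_llm(PERSONAS["director"], task)       # director turns the task into a plan
    code = call_llm(PERSONAS["coder"], plan)          # coder writes a first attempt
    for _ in range(max_rounds):
        verdict = call_llm(PERSONAS["tester"], code)  # tester evaluates, never writes code
        if "pass" in verdict.lower():                 # crude stopping rule, for the sketch only
            break
        code = call_llm(PERSONAS["coder"], f"Fix this based on the test report:\n{verdict}")
    return code

print(run_pipeline("Write a function that removes duplicates from a list."))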

Matt Cartwright:

But doesn't that contradict the idea of a peer review? I get the point that the different sort of agentic models, or the different elements of the model, have different personalities, but they're all owned by Sakana AI. So, you know, if you wrote a research paper at the University of Oxford and then six people from the University of Oxford peer reviewed it, that wouldn't be an acceptable peer review to me. Because the point of the peer review process (and let's not get into whether the peer review process is actually worth the paper it's written on) is, you know, different independent specialists. How would you ensure the independence of several, you know, different AI models that are run by the same organization?

Jimmy Rhodes:

Because, if you think about the way these large language models work, they've been trained on all the information that's out there. They're starting from a clean slate. So, for example, if you've got four different AI agents that are all experts in a certain subject, because they've been fine-tuned to be an expert in, I don't know, particle physics, whatever you want to use, then they are like four different people that have all experienced...

Matt Cartwright:

They're even easier to influence than independent scientists, which, you know... The point of the peer review, and feeding back to the sort of episode last week and things like Big Pharma, et cetera, et cetera: how are you going to be able to trust that it's really giving an independent view, when money is going to decide that they're peer reviewed in line with, you know, what certain organisations or people or businesses want?

Jimmy Rhodes:

Longer term, that I agree with, I think. I guess I'm talking about the principle of it versus, you know, the practical application of it in the future, which at the moment... like, Sakuna, is it? I think they're Sakana. Sakana, sorry.

Jimmy Rhodes:

S-A-K-A-N-A dot AI, for those that want to look. Yeah, they're clearly demonstrating the principles of it right now, and, you know, I have no reason to doubt what they've done. I agree, I guess. Maybe what the peer review process looks like in the future is you've got, you know, four different research groups and they've all got their own AI models that have all been trained independently, and therefore you kind of mimic the kind of peer review process that you're talking about.

Matt Cartwright:

So one's funded by Pfizer, one by Moderna.

Jimmy Rhodes:

Yeah, exactly. And, you know, one by Bill Gates... Who was the...?

Matt Cartwright:

And the lizard.

Jimmy Rhodes:

Who was the Swedish company we were talking about the other day?

Matt Cartwright:

The, um, the Ozempic... IKEA? Oh, Ozempic, yeah. They're not a company, are they?

Jimmy Rhodes:

That's a product, but I don't know. No, no, Ozempic's the product, but it's produced by a company in Sweden.

Matt Cartwright:

Anyway, yeah, one's that company. Not IKEA, and not Marabou, who produced Dime bars.

Jimmy Rhodes:

It's the IKEA of biomedical science, or the Kopparberg of cider and pharma. Knowledge of Sweden.

Matt Cartwright:

Sven-Goran Eriksson bless him.

Jimmy Rhodes:

They're the biggest company in Europe.

Matt Cartwright:

Shout out to Sven, by the way, who's on his last legs, but a great man, sven-goran Eriksson.

Jimmy Rhodes:

Rikaka.

Matt Cartwright:

Novo Nordisk. Ah, ah, you've remembered it. Yeah, you definitely did not use Google for that, and we didn't pause the podcast for 15 minutes while you sat there in a daze and tried to think of that. Yeah, anyway, so, Sakana AI. Um, I mean, I think this is amazing, and this was almost the trigger for this episode, because it was like, wow, this thing is really happening. Um, but when I sent you the article, you messaged me with this example of where it had already gone wrong. Um, I don't know if you want to explain that, because I thought that was super interesting as well.

Jimmy Rhodes:

Yeah, so this was about the AI Scientist bloopers. I'll just read it. Um, basically, in this kind of example, they've got an AI scientist and it was trying to increase its chances of success with, like, solving a task, and sometimes it would do that by actually trying to solve the task. In other examples it would modify its own execution script. So in one run it edited the code to perform a system call to run itself, which led to the script endlessly calling itself. In another case, the experiments took too long to complete and hit the timeout limit, because they had a finite deadline on this. Instead of making its code run faster, it just tried to modify its own code to extend the timeout period.

Matt Cartwright:

Um, so basically it cheated on the test. Yeah, I just wanted to explain that to people. So basically, you know, it had a time limit, and because it couldn't finish within the time limit, rather than trying to improve its performance, it tried to go back in and extend the time limit. So, you know, in some ways... I think people sometimes say that shows it's sentient, that shows it can think. Well, if you understand the logic, it doesn't show that at all, but it shows that it is able to be sort of innovative and to find solutions to problems without being prompted, which in itself is equally kind of amazing and, I guess, more than a little bit frightening.

Matt Cartwright:

So medical was going to be the next one, and we've been promising for ages that we're going to do a series of episodes around medical. Um, maybe we don't need to after this, because we're going to list some of them, but I think we will. I think we'll do that. In a few weeks we've got an interview with a doctor, so we'll try and get those off the ground.

Matt Cartwright:

But yeah, there's a number of examples, I think. Maybe I'll read through a few of these. They're mainly around the kind of diagnostic side, which I think is where, you know, it makes a lot of sense for AI to be used first. But also, my view on the medical system, and the kind of mess that we've got ourselves into in the world, is that it's because we're not doing enough preventive and diagnostic stuff. So this is potentially really good news. So the Mayo Clinic, which is one of the big ones in the US, obviously, have got a lung cancer diagnosis tool. It's an AI-powered decision support system that analyzes low-dose CT scans to assist in lung cancer diagnosis, and it leads to earlier detection of lung cancer, ultimately enhancing patient outcomes and survival rates, which is bad news for Big Pharma.

Jimmy Rhodes:

Yeah, I've seen some examples of this even a few years ago, because machine learning has been around for, you know, quite a while now. I think there was some Dr Watson stuff with IBM from a few years ago. I can't remember all the specifics, but it was around exactly this. It was basically pattern matching. You know, looking at scans and doing pattern matching, just doing it like an AI would. So just being like, okay, here's a load of examples of, you know, people who've got lung cancer, and here's a load of examples of lungs that are healthy, and you feed them into the model, and then the model can look at a new example and categorize it. And this is actually a fairly basic example of machine learning or artificial intelligence that's been around for a while. But I guess strides are starting to be made, and... well, I think it's scale as well, isn't it?
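To make the pattern-matching idea concrete: labelled examples go in, a classifier learns the pattern, and a new case gets categorised. The toy sketch below uses synthetic numbers standing in for features extracted from scans; it is not the Mayo Clinic or IBM system, just the basic supervised-learning idea behind them.

# Toy illustration of supervised pattern matching with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# 1,000 fake "scans", each reduced to 20 numeric features; label 0 = healthy, 1 = suspicious
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# categorise a new, unseen example
print("predicted label:", model.predict(X_test[:1])[0])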

Matt Cartwright:

It's scaled up to the point that there's enough information in there now. I think that's, you know, the thing with AI in the last year or so: the scalability of it. And therefore you've got enough compute and enough data that you've now got sufficient data in there. And I guess things like lung cancer, which are, you know, the more common cancers, breast cancer was another one, you've naturally got more data there.

Matt Cartwright:

The more data you've got, the better it can recognize patterns, the more accurate it's going to be, and the better it can do the job. And it seems to me as well, like, you know, I've seen some stuff recently about whether they've surpassed, you know, radiologists, and that radiologists who were already experts were finding more of an advantage in using AI than radiologists who are new in their profession. And I was kind of thinking, well, it doesn't matter, because in six months' or a year's time there's no way that AI will not be better at this, because that's exactly what it can already do in its most basic function: you know, pattern recognition and using large stacks of data.

Jimmy Rhodes:

So it would make absolute sense that it would be able to do this very soon. Yeah, already, definitely. And I think there's also a trust thing around this. I've definitely seen examples of this kind of application quite a long time ago, like maybe three or four years ago. But obviously, especially in the medical profession, some of these things take a long time to pass the bar, to start to become mainstream, to start to actually become trusted. And clearly, from this example, it's starting to seep into the profession now. How long until it tells everyone they need to take statins?

Matt Cartwright:

Uh, I think... or antidepressants, statins, or diabetic drugs. Well, I think that's already happening.

Jimmy Rhodes:

To be honest, depending on which part of the world you live in, so let's move on to another example.

Matt Cartwright:

So this was from the UK, from the Royal Free Hospital in London. They've got an AI system that's very similar, but it analyzes patient data, including vital signs and lab results, and diagnoses acute kidney injury, and apparently this has an accuracy of 80%. It's faster and better than traditional methods, and it uses AI to help prevent further damage and, again, improve patient care and reduce healthcare costs. A third one, from Stanford University, is around skin cancer. They have developed an AI system that analyzes images of skin lesions to assist in skin cancer diagnosis. They've used 130,000 images, which to me seems a really small amount actually, um, but it's got 86.6% accuracy at the moment, which is apparently about the same as a board-certified dermatologist. So you would imagine that in six months' time it will have significantly surpassed it. So I think we're using examples of things that are already here. I mean, this is already here, and improving at a comparable rate, it's definitely going to surpass it. This is, for me, a really interesting one, because I've got, you know, a lot of moles, um, as a lot of us pasty Brits have got. And, like, you keep them? I keep them in a little box, and they're all blind, of course, as

Matt Cartwright:

all good moles are. Um, yeah, but one of the things is you're always told, oh, if you've got a lot of moles, you need to keep an eye out for changes, right? But if you've got a lot of moles, you can't fucking tell if they've had any changes, because you've got too many moles, and most of them are on your back or, you know, on your bum, or in places that you can't see. I mean, Jimmy quite often, helpfully, has a look at my moles for me, the ones where other people can't see. But, you know, this is something I think is a really good one, because I've often thought... I know they do this thing called mole mapping, where you have a picture taken, but then basically someone has to, this is a real thing, compare the picture with another picture and see if there are any changes, and it's like...

Jimmy Rhodes:

I'm just thinking of the moles, the animal, now. Sorry, I can't help myself. Yeah, so you think it's...

Matt Cartwright:

skin cancer's funny to you, is it?

Jimmy Rhodes:

No, it's not, it's just... I've got... You think it's a joke? I've got like a load of... Have you got any moles?

Matt Cartwright:

Uh, not with me, no. You don't care about moles. Well, anyway, for those of us poor people with moles under the bed...

Matt Cartwright:

This, I think, is really useful, because, you know, I guess at some point in the quite near future you'll be able to have a scan done and then for AI to compare it with a scan six months later.

Matt Cartwright:

Because, frankly, the idea at the moment that you take a picture and then you just look at it and check for any changes is impossible for people to do. And, you know, skin cancer is one of those cancers that, if you catch it early enough, is pretty much curable. So I think this will have a massive, you know, a massive impact, again, as long as Big Pharma don't manage to get their grubby little claws on it. The last one of these kind of examples: clinical microbiology. So there have been a lot of advances in identifying microorganisms and predicting antibiotic susceptibility. A deep learning algorithm, so AI, was used to screen millions and millions of compounds, and that has resulted in the discovery of a new antibiotic which is effective against antibiotic-resistant bacteria, like methicillin-resistant Staphylococcus aureus, which is MRSA, which anyone in the UK will have heard of as the one that everyone gets when they go to hospital, and also vancomycin-resistant enterococci. Do you know what else stops bacteria?

Jimmy Rhodes:

Copper door handles, apparently, or brass door handles. I believe they did an experiment years ago. And, um... You need to swallow a brass door handle, though? No, no, it's just because, like, it's a...

Matt Cartwright:

It's a natural antimicrobial. Okay, yeah, yeah. And so they just... Silver is as well. Silver and copper, yeah. So they just swapped out the door handles.

Jimmy Rhodes:

I think it was brass. I think they swapped out the door handles for brass in a hospital and reduced the rates of, I think MRSA was one of them, like, massively, massively. So, you know, there you go.

Matt Cartwright:

Well, does it make a difference if people have enough copper in their body?

Jimmy Rhodes:

Uh, I don't know, but presumably that's something to do with what those copper bracelets are all about. I'm not sure.

Matt Cartwright:

No, the reason I wondered is, like, copper and zinc are kind of opposites. So if you take zinc supplements, which, you know, is very helpful when you're sick because it helps improve your immune system, it then depletes your copper, and if you take too much copper, it depletes your zinc. So, just a fact: if people are taking a lot of zinc for their immune system, then make sure you top up on your copper.

Jimmy Rhodes:

Preferably not through doorknobs. I didn't know we needed copper.

Matt Cartwright:

Yeah, you need trace elements of copper, but only small amounts.

Jimmy Rhodes:

But yeah, I mean, obviously we were chatting about this earlier on. In terms of antibiotics, we need new antibiotics because of all the problems around antibiotics, in that you create these resistant bacteria, but also the damage to your gut microbiome, and generally it's not a good thing to be taking them if you don't really need them.

Matt Cartwright:

Yeah, it takes months, if not longer, to recover your gut microbiome after you take antibiotics. So, something to consider for people: if you take antibiotics, you should take probiotics for at least a few weeks afterwards. But, you know, understand that you will have literally killed everything. So you kill the bad ones, and you kill all the good ones as well. But we're not doctors, so don't take our medical advice. Medical advice with Matt and Jimmy.

Jimmy Rhodes:

So we've talked about good bacteria and bad bacteria. So now, how about bad media? There was, uh, an ex... Is there a good media? Well, I think there are levels of media.

Matt Cartwright:

Levels of bad yeah.

Jimmy Rhodes:

So the other thing that's, uh, come up, shall we say, in the last week, is there were some pictures, AI-generated images made with the latest AI image generation software, of Donald Trump holding hands with Kamala Harris, which have caused a bit of an uproar, partly because they were so realistic, and I think this kind of technology is getting more and more realistic. And there was also, at the same time, again within the last week, a German... I mean, it wasn't a very nice song, but there was a German AI-generated song that made it into the top 50. Not sure how difficult it is to get into the top 50 in Germany. No offence, Germans.

Matt Cartwright:

Is that the song you were playing when I came in earlier, the one that you definitely turned off as?

Jimmy Rhodes:

soon as I opened the door. It definitely was not.

Matt Cartwright:

I love a good German song, um. But oh, that was David Hasselhoff you were listening to, wasn't it? That was the song that brought down the Berlin Wall. Yeah, although he's not German.

Jimmy Rhodes:

No, and he wasn't singing a German song, but he has been to Germany. He's been to Germany.

Matt Cartwright:

And they like him in Germany.

Jimmy Rhodes:

He was there in the 80s, I believe.

Matt Cartwright:

He was responsible for bringing down the Berlin Wall, though, right? A hundred percent, yeah, definitely. I mean, don't fact-check that, because don't believe fact-checkers.

Jimmy Rhodes:

Believe what you want to believe and believe what I say, him and his special car.

Matt Cartwright:

Yes, I believe Again another of our many references that will only work for people over the age of 40.

Jimmy Rhodes:

Yeah, yeah, yeah, and shout out to Red Dwarf as well, actually, while we're there, and Bagpuss. But yeah, so AI image generation, video generation, song generation, it's all in the media again this week. Um, I don't know if it's stealing anyone's jobs right now, but, I mean, certainly if you've got a song that's already encroaching on the top 50, then it's someone else's song that doesn't enter the top 50, right? Yeah, exactly. And, uh, someone's artist's representation of Donald Trump and Kamala Harris, uh, didn't make it into the newspapers this week either. What about...

Matt Cartwright:

What about Grok? So, the other Grok, not the Groq that we always talk about, the one we say is the good Groq. Which Grok? Not Groq with a Q, the good Groq. Which one's that? That's the one that you like.

Jimmy Rhodes:

Okay.

Matt Cartwright:

The inference one, the fast one. What's the bad one? Grok with a K. That's the Musketeer's Grok, the Grok that's trained on all the cesspool of information that is Twitter, which we refuse to call X.

Jimmy Rhodes:

So the, the thing I'm referencing here.

Matt Cartwright:

So Grok with a K is the xAI model, which, you know, in its training data has got access to, as I said, the absolute shit of Twitter. Um, but it was released, and I actually think this is quite fun, it was released with image generation software that didn't really have any guardrails on it. So, yeah, me and Jimmy promote Claude 3.5 Sonnet, but we will have to put out a bit of a disclaimer here that we're a bit unhappy with Claude at the moment, because it's become unbelievably woke and will not answer, you know, it gets offended by the simplest question. Whereas this new version of Grok with a K seems to be completely unfiltered.

Matt Cartwright:

It was producing, you know, images of, I think, Taylor Swift in Nazi uniforms. It was producing images of Donald Trump as the devil. There were basically almost no filters on it. Is this... are these your searches? These are things I read about in the, the, not-so-bad media. Okay. I don't... I use good Groq, I don't use bad Grok, Jimmy. I only use good Groq as well.

Jimmy Rhodes:

I've never, actually, I'll be honest, I've never used, uh, Elon Musk's one. I've not used it, but I'm interested. Can we call it Elon's Grok? Yeah.

Matt Cartwright:

I'm interested in it because, like I say, the conversations we've had recently where we're getting more and more frustrated with Claude's inability to answer questions... And the fact that we've both got uncensored large language models on our computers probably helps us to not be so bothered. But it is frustrating when you want to ask questions or you want to create stuff, you just want to have fun with it. And I think this is the question, which we don't need to go into here, of how much you restrict things to ensure they're not dangerous, or don't cause harm, I don't want to say offence to people, but cause harm to people, versus anything that is slightly offensive. It sort of reflects a lot of the wider kind of, yeah, issues in the world, doesn't it? But Claude, I think, has become very, very, um, it has become very woke, and Grok is kind of the antithesis.

Jimmy Rhodes:

Even if it's not woke, though, it's got loads of guardrails on it, like, it's got loads of guardrails that just won't allow you to... You know, for example, I write songs, and even when I'm just writing lyrics, if I want to write lyrics in the style of a certain artist, I'm not trying to copy that artist, I'm not trying to make a rip-off of a song, and I've never been able to do that anyway, because I can't do that through Suno. But Claude will, very, very quickly, if I want to pick a particular style, it will very, very quickly be like, no, I can't do that. And this comes into the...

Jimmy Rhodes:

You know, I feel like it encroaches on what we think of as free speech, and that's what gets me. I think that's at the root of it: you're just trying to have a chat, and, you know, if I was just trying to have a chat with you, we could have a chat offline and we'd be able to talk about whatever we want to talk about, and it'd be our business and nobody else's business. And, you know, these large language models, it feels like you're having a chat with them and then, all of a sudden, it's like, oh, sorry, I can't talk about that. And it's like, well, why?

Matt Cartwright:

It's like... what we're talking about here is stuff that you can search for on the internet, on Google. So you're finding that, yeah, we're not talking about a recipe for a, you know, biological weapon. We're talking about just asking it a question that it then answers in a way that, it's like it's taking offence, isn't it?

Matt Cartwright:

And it's like the thing that you love about Claude, which is the kind of human side of the interface... At some point in the last month, it's like they've installed an update that has just ramped it up to 11, yeah, and has meant that, you know, it's not just not answering, but it's telling you how offended it is and that it will not engage in this conversation.

Jimmy Rhodes:

It kind of takes on a kind of schoolteacher personality. Yeah. And that's where I'd say, like, I think woke is the wrong word, because even if you ask it for the lyrics to a song, it'll be like, I can't do that, because, you know, various reasons. And it just feels like it's got a bit mixed up between, like, morals and just having a chat. And one of the great things about these models is that you can have quite a natural chat with them.

Jimmy Rhodes:

That's why we liked Claude, because it didn't have so many of those guardrails and it felt like you could have a very natural conversation with it. And then, you know, it's not like I think an AI is my mate, but it's like if you're having a chat with your mate and all of a sudden your mate goes, oh sorry, we can't have this conversation because of X, Y and Z. It's weird. Are you woke, Jimmy? Am

Matt Cartwright:

I woke.

Jimmy Rhodes:

Yeah, no, but yes, but no. Are you as woke as Claude? Definitely. Well, it's not.

Matt Cartwright:

No, but are you more woke than Elon Musk?

Jimmy Rhodes:

I think there's a difference in woke and having no filter whatsoever.

Matt Cartwright:

I was just going to say if you're in between Elon Musk and Claude, I think you're in a good place to be.

Jimmy Rhodes:

Oh yeah, well, I mean, there's quite the space there.

Matt Cartwright:

This will be a pretty quick one, I think, but I just thought this was a really interesting use, and kind of topical: wildfire prediction, and also actually weather prediction, which is kind of the same concept. So, using AI algorithms to analyze satellite data and aerial data to identify potential fire hotspots and monitor the progression of wildfires. There are machine learning systems that use historical data, and there's real-time analysis of environmental factors like temperature, humidity and vegetation moisture to predict potential fire behavior and risk areas.

Matt Cartwright:

I don't know where this is necessarily being used, but I know there's one particular thing. There's a crowdsourced data app that is being used, sorry, that was developed by the University of Adelaide in Australia, which is called NOBURN. Um, I hadn't actually noticed that NOBURN was relevant to a wildfire; I just thought that was a name, because there was no dash in it, it looked like Norbert or something like that. Um, anyway, but yeah, with this people actually take photos, so hikers, when they're out, take photos, and that is basically used to assess, like, the potential fuel load, to predict what fire severity would be.

Matt Cartwright:

Um, and that data, obviously the pictures, are then analyzed by AI, which then aids in risk assessments of wildfires and stuff. It's like on the episode where we talked about accessibility, and we talked about the Be My Eyes app, which is now using AI but originally was using people to actually, you know, perform the role. I always think with these things, where they actually use crowdsourcing and get people to feed in, it just means that communities and people have got a buy-in to it. So you're kind of combining AI to do all the analysis with people actually being able to feed into it. I just think that kind of stuff is really cool.
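As a rough illustration of the kind of risk model being described, environmental readings in, a fire-risk score out, here is a toy sketch. The data is made up and the features are simplified; real systems such as NOBURN train on historical fire records, crowd-sourced photos and far richer satellite data.

# Toy fire-risk model: three environmental features, one risk score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
temperature = rng.uniform(10, 45, n)      # degrees C
humidity = rng.uniform(5, 90, n)          # percent
veg_moisture = rng.uniform(0.05, 0.6, n)  # fraction of water in vegetation

X = np.column_stack([temperature, humidity, veg_moisture])
# crude synthetic label: hot, dry, low-moisture conditions count as "high risk"
y = ((temperature > 32) & (humidity < 30) & (veg_moisture < 0.2)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

today = np.array([[38.0, 18.0, 0.12]])    # one new set of readings
print("probability of high fire risk:", model.predict_proba(today)[0, 1])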

Jimmy Rhodes:

Um, and then smoke... I've got an image in my head of a bunch of hikers gathering around the wildfire, the campfire, sorry, the campfire, and taking a picture.

Matt Cartwright:

They're taking photos of the wildfire? It's kind of too late for the predictive technology. I meant, I meant campfire.

Jimmy Rhodes:

So, like, they're having a campfire, and then they take a picture, and then it's using it to predict wildfires. Yeah, and then it immediately just triggers a load of drones to come and drop water.

Matt Cartwright:

It's probably not actually a bad thing. Smoke detection was the last one: special cameras that basically use AI to detect smoke, and then again, you know, they can potentially alert people. And the same stuff is used in weather prediction. So, you know, I guess this is an example of what we're talking about on this episode, how AI is creeping in without people noticing it. With weather prediction, for most people, you don't really care where your information comes from, and I think, you know, AI has been used for a long time in weather prediction. It's just that it's getting better, and it's integrating data from more sources. So obviously weather stations, remote sensors, you know, Internet of Things, but also, using that kind of capability, you use AI to be able to analyze much more data, to analyze more patterns, to be able to give more accurate information.

Matt Cartwright:

I've got to be honest, the weather forecasts I see, I'm not convinced they're any more accurate than they were before. But potentially AI would help with things like long-term weather predictions, because for short-term weather forecasting, actually, a lot of it is not that difficult. I mean, even things like, you know, you can measure the pressure and you can predict what the weather's going to be like. But what you can't do is look far ahead. But it's like that thing we talked about earlier with the medical stuff, about pattern recognition. So the more information you can crunch, the more data you can put in, the more, you know, AI can help you to analyze that data and be more accurate about long-term prediction.

Jimmy Rhodes:

Someone at work the other day, um, pointed out that there were dragonflies flying around, so that meant...

Matt Cartwright:

It was going to rain shortly. Well, I mean, we've said this for a long time. I don't know how this works everywhere, but certainly in the north of China, the lunar cycles, the 24 lunar cycles...

Matt Cartwright:

I mean, they're accurate to within, like, one or two days. So I will talk to one of the sort of local Beijing guys I know, you know, a guy in his 50s or 60s, and I'll say, oh yeah, it's got a lot more humid the last few days, and he'll be like, yeah, two days ago was, you know, the Big Heat, or, um, Grain in Ear, or, you know, Small Heat, or whatever was the most recent one. And it almost feels like you can feel it in the air, and it's still accurate to within one or two days, which actually, with climate change, you'd think it was becoming less accurate, but it still seems to pretty much match. It's incredible. Like, it can't work everywhere, but I think it's...

Jimmy Rhodes:

I think in Beijing it works particularly well because the climate is probably a bit more predictable.

Matt Cartwright:

It's the capital, and maybe, you know, that's what it was based around, but it's kind of incredible how accurate it is. Yeah, you can trust it. It just became, well...

Jimmy Rhodes:

It became autumn, like, a week and a bit ago, or two, and it was Liqiu, which is the beginning of autumn. It was on the day, almost like it just cooled down and became less humid on the day. Anyway, that's, uh, not so much about AI. No, I mean, that's literally the opposite of AI, isn't it?

Matt Cartwright:

Chinese folklore is perfectly accurate. We don't need AI. Um, there was just one more: Google have... I'm not sure if we're giving Google a good or a bad name on this episode, but, um, I guess we will be fair to them and give them credit where it's due. So they have a wildfire simulator which utilises AI to generate data across various kinds of wildfire scenarios, which apparently is more about understanding the dynamics of fires, and then that will feed into the models. So this is not so much about predicting, but about using this simulator to generate... to see what's going to happen with a given wildfire.

Matt Cartwright:

Yeah, exactly, to generate simulations of wildfires and to understand the dynamics of the fire. This, to me, was a little bit like AI models training models. So you're creating, you know, fresh data to be able to analyze and then feed into the predictive models.

Jimmy Rhodes:

Yeah, sounds pretty cool. I mean, Google do do some good things we don't dislike.

Matt Cartwright:

Google, just not with search recently. Do we like Google?

Jimmy Rhodes:

I'm neutral on them. I'm neutral on Google. I think they're good, neutral, lawful neutral.

Matt Cartwright:

Lawful neutral. There was one last one, actually. So the Tokyo metropolitan government are launching an AI system which is using a load of high-altitude cameras that will detect fires and building collapses to help with emergency responses in an earthquake. So that AI, again, will analyze that information. So we're talking about when something has happened: it will help them to coordinate the response to an earthquake by using these cameras. And, you know, Tokyo has, I think, a 70% chance of the big one within the next 30 years.

Jimmy Rhodes:

So that's something that is.

Matt Cartwright:

Well, I'm not sure they're going to do much to help with Godzilla or King Kong. Although that's New York, isn't it? But they fight each other, don't they, Godzilla and King Kong?

Jimmy Rhodes:

I think so, yeah, yeah, I'm not sure where they fought each other. Well, maybe in Tokyo. Yeah, probably.

Matt Cartwright:

Maybe AI will predict who wins. So, one last one, and then we're going to do some predictions, I think. So this one's some data that I got. Credit here to the AI Daily Brief, which is one of the 10 recommendations I gave a few weeks ago for the summer. It's a daily podcast, 15 minutes long, and Nathaniel Whittemore, who runs it, is one of my favorite kind of, I guess, accessible people in the AI world.

Matt Cartwright:

So a couple of days ago, I think, or sometime last week maybe, he did a kind of report with a load of data from something called Ramp. Ramp is the name of an organization that does a lot of financial reporting. So this stuff is real data. It's not self-reported stuff; it's hard facts on spend, based on, I think, about 1,500 vendors, which I presume are mainly in the US, I'm not 100% sure. But it talked about where workplaces were spending money on AI, and I think this is really important, because when we talk about the effect on jobs, you can see from these trends, you know, how much AI actually augments people, as companies like to talk about, and how much it is actually about replacing people. So, looking at this kind of spend, whichever one of those it is, you can see the trends in where businesses are starting to adopt AI.

Jimmy Rhodes:

It's a lovely turn of phrase, isn't it? Augment.

Matt Cartwright:

Augment is like the prelude to replace. Yeah, exactly, and it's this specific word that's being used here. It's like everyone has agreed, in the global cabal or whoever agrees these things, that augment is the word that we're going to use until we're ready to just replace everybody. Yeah, but we digress. So the fastest growing expense for organizations in quarter two was spend on AI, which increased 375 percent between quarter two 2023 and quarter two 2024. There was also retention of spend. So this is about, you know, organizations that started spending money on AI; they didn't just use it for a bit and then stop. 70.4% of those who had been spending on AI in 2023 continued with the same vendor into 2024.

Jimmy Rhodes:

This is where your pay rises are going, people.

Matt Cartwright:

So OpenAI is the leader for enterprise adoption, followed by Anthropic. The average spend in quarter two was $20,000. Bear in mind that this is a cross-section of organizations. We're not talking here about massive, massive multinational corporations; we're talking about small businesses, a real cross-section. So the average spend in quarter two of 2023 was $20,000. By quarter two of 2024, they were spending about $100,000 on average.

Jimmy Rhodes:

Wow.

Matt Cartwright:

So, you know, a massive, massive increase. They also, and I thought this was really interesting, tracked card spend. The tracking before was more about, you know, traditional ways in which organizations procure things, so kind of long-term spend. Card spend shows you shorter-term spending decisions, so it favours experimentation. Not necessarily small organisations, because a lot of big organisations will use, you know, global procurement cards for small spend, but it favours situations where you've got people at maybe a lower level who are making shorter-term decisions to purchase things. That doubled in the same period. So the doubling was only from $1,000 to $2,000.

Matt Cartwright:

But bear in mind, like we said, that with card spend you're talking about people who are just taking out, you know, maybe an enterprise subscription to ChatGPT, maybe a particular video generation model that they want to try out in their business.

Matt Cartwright:

Anthropic did really well here. There are no figures, but apparently Anthropic's market share over that period went up from 14%, sorry, from 4% to 17%, and in July this year, when Sonnet 3.5 was kind of bedding in, it really shot up. So, you know, that's if we look at their users. And we talked before in this episode about how they're never going to really make money out of selling models to, you know, the general public at twenty dollars a month. But you're seeing that Anthropic, with Sonnet, whether we think most general people have understood it or not, is really making inroads in that world. Another key finding was that businesses were supplementing, and I think this is really important for the kind of jobs thing, supplementing their workforce with independent contractors to boost productivity. So when you supplement your workforce with independent contractors, you don't hire people on real contracts.

Jimmy Rhodes:

Yeah, yeah, and so it's a short-term thing, because you think that you're not going to need people much longer.

Matt Cartwright:

Effectively, that is what you're saying. But also, what I think is interesting with this is, I don't know how, you know, labor figures are reported, but if you're an organization that has laid off, or not laid off, but just, you know, through people naturally leaving, reduced your workforce by 10 people, and you take on eight short-term contractors, it looks like you've got a net loss of two. But actually, you know, 10 people have lost jobs, or 10 jobs are not being filled, because these supplementary contractors are not a long-term solution.

Jimmy Rhodes:

And this is where, you know, overall, like all the stuff you've talked about, this was the point of this podcast, right, which I think maybe we haven't emphasised enough. A while ago, when we first started this podcast, we talked about some of the impacts on jobs and how we expected that to transpire, and I think we are seeing that, and a lot of the things that we talked about are happening. But, and actually we mentioned this a long time ago as well, they're happening by stealth, they're happening under the radar. It's also hard because the economy is doing badly, and therefore it's not clear what is AI,

Matt Cartwright:

what is the economy, what is global geopolitics, what is people being on long-term sick. There are so many factors at play. But we're going to...

Jimmy Rhodes:

We're gonna get into our predictions in a moment. But part of all of this, one of my predictions if it's not already happening, is that, yes, you're right, we're in an economic downturn at the moment, arguably a pretty serious depression, or a recession at the very least. But one of those longer-term predictions is that these jobs just don't come back, and that's the way this erosion happens. It's not about, oh, suddenly a robot took my job. It's that over the next two, three, four, five, ten years there are just fewer jobs, and employment in the kind of sectors we're talking about, white collar to begin with, just doesn't actually recover.

Matt Cartwright:

Yeah, this is spot on. I had this conversation with someone the other day, actually. We were talking about the downturn and saying the problem with this downturn is that, even if it recovers at some point, the two lines of AI and the economy are going to cross. There's going to come a point where, even when the economy recovers, that's when the influence of AI will have hit. It doesn't mean that you won't have recovery, and it doesn't mean that there won't be any new jobs. I'm sure there will be, and I'm sure we'll find our way through this in some way, but there will be fewer jobs.

Matt Cartwright:

I don't see how that doesn't come to pass, and unless the global cabal's depopulation exercise succeeds, we're going to have to find a new social model that works to support at least part of society. And just as an example, it's not like in five years' time 50% of jobs are going to have gone. But if you took out 5% of jobs in the world, well, adding 5% or 10% of people to unemployment is an off-the-charts figure, something that has maybe only happened in the Great Depression. I don't have the figures to hand, but it's an incredible amount. And how do you fund those 10% of people's existence?

Jimmy Rhodes:

Yeah, it's going to require a massive societal shift. One piece of advice I would just give is that I think the idea of being a prompt engineer is as transient as some of the crypto stuff. So please don't go out there and think you're going to be a prompt engineer, because that ship, well, it was never even a ship, but if it was a ship, it has sailed already. So we're into the 55th minute of the podcast, but we're onto the fun part now.

Jimmy Rhodes:

We said that we would come up with a few predictions, and I think what we're going to do is come up with a prediction for the next six months, so by the end of 2024 pretty much, then for the end of 2025, and then the end of 2026. I'm going to kick off with a fairly boring prediction. Unfortunately, OpenAI have been pretty slow to the ChatGPT-5 game, and they haven't even really managed to release voice fully yet. So my prediction for the end of 2024 is that actually not much is going to change. I think we'll get better versions of some of the large language models. Maybe we'll see Claude Opus 3.5. We will probably see ChatGPT-5. Maybe we'll see Q* or Strawberry or whatever that is, I honestly don't know. I think we'll probably see ChatGPT-5 with increased multimodal capabilities, but basically just a better large language model.

Matt Cartwright:

I think you will be able to produce physical strawberries out of your computer. And that's the strawberry thing. That's the dream, yeah.

Jimmy Rhodes:

But yeah, honestly, I think the end of 2024 is going to be a bit of a damp squib. My predictions for 2025 are going to get a little bit more exciting. Over to you, Matt.

Matt Cartwright:

Well, my prediction is anything but a damp squib. So my prediction is by the end of the year, right? End of 2024? Actually, I'm going to go even earlier than that, by Christmas. So six days earlier.

Matt Cartwright:

Six days earlier, okay. So: Sam Altman is going to peel off his mask and reveal he's actually the devil himself. I think he's already done that. Or he's one of the lizards controlling the global cabal, or he's being controlled by the global cabal. And then a sub-prediction for me is that by the end of the year, when, as you say, they release 5.0, I will have resubscribed to ChatGPT because Claude is too woke. Or maybe I'll go full Musketeer and I'll just use Grok with a K.

Jimmy Rhodes:

I'm never going back. I'm currently subscribed to Claude, and I have a large language model running on my own computer through LM Studio. Honestly, it's pretty good; it does me for most stuff. I don't think I'm going to go back to the subscription model. Once I cancel my next subscription, that's going to be that, and I think open source models are going to get better and better.
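(This snippet isn't from the episode, but for anyone curious about the kind of local setup Jimmy describes, here is a minimal sketch. It assumes LM Studio's built-in local server is running with a model loaded; that server speaks the OpenAI API format, on port 1234 by default in current versions. The model identifier and prompt below are placeholders, not anything mentioned on the show.)

```python
# Minimal sketch: chatting with a model running locally in LM Studio
# via its OpenAI-compatible local server (default: http://localhost:1234/v1).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint (assumed default port)
    api_key="lm-studio",                  # placeholder; the local server doesn't validate the key
)

response = client.chat.completions.create(
    model="local-model",  # hypothetical identifier; use whatever model you've loaded in LM Studio
    messages=[{"role": "user", "content": "Summarise this week's AI news in three bullet points."}],
)

print(response.choices[0].message.content)
```

Everything runs on your own machine, which is part of the appeal Jimmy mentions: no subscription and no data leaving your computer.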

Matt Cartwright:

Well, we'll see. So I've got one more within six months. I think Google will finally get its act together and start releasing agentic models. I'm not sure they'll be fully complete, but I think Google will start releasing something agentic, and then I think that changes the game for enterprise use. I think that puts Google in the lead in terms of usefulness in a real-world scenario, even if at that point they're not the highest-benchmarked model.

Jimmy Rhodes:

Okay, yeah, I mean, if by an agentic model you mean something that writes emails for you, then they were showcasing them in May, right?

Matt Cartwright:

They did their Google event in May, yeah. Okay, Google have done a lot of events where they've shown things that just didn't exist, but I think at some point they've got to get it right. So I'm going to say within six months, which takes us to, what, March next year. So by March next year they start releasing agentic models. You're like Nostradamus.

Matt Cartwright:

Well, we'll see. I'm not sure all of mine will come true, but Sam Altman's definitely going to peel that mask off. So have you got another one within six months, or are you just making one prediction? No, I haven't got any more within six months. Okay, so by the end of 2025?

Jimmy Rhodes:

No, within a year first. I thought we were doing the end of 2024, then 2025, then 2026.

Matt Cartwright:

Oh, I've got ones within a year. Okay, well, I'll do them, and you can just challenge me on them. So, within a year, new open source models are banned from being publicly released in China, the US, the UK, Japan, South Korea and the EU. Maybe other places too, but those are the places that I think are potentially going to have developed models. This would follow GPT-5 class models or above being released as open source. And then I think Musk ignores this: he's got his supercomputer, and he leaks an absolute beast of a frontier model that goes absolutely AWOL.

Jimmy Rhodes:

I hope we don't end up living in that world within a year's time. We've talked about it before: I'm a big advocate for open source models. I equate it to free speech, as I mentioned earlier on this podcast. And I don't think that's going to happen either. If the governments of the world get together and say, well, we need to ban open source because it's dangerous, I think one of the things they've underestimated, and maybe you've underestimated, but I apologise if you haven't, is that part of the power of open source is exactly that: it's open source, and so you probably can't stop it. It hasn't happened so far. You've got examples like Linux that are still around, 30, 40 years on.

Matt Cartwright:

You can stop it, though, because of who the only people developing frontier-level models are. I'm not saying you can't release any open source models. Same as you, I've got an open source large language model running on my computer, and I've got an uncensored version that I'm very happy with, which allows me to ask questions that I could never ask of any other model. However, those are not frontier models.

Jimmy Rhodes:

They're getting closer. They're getting a lot closer.

Matt Cartwright:

They're getting closer. But if we have the next stage of models, there does come a point at which the reaction kicks in, whether or not you think it should. And I actually sort of agree with the point you made a few weeks ago that it sort of doesn't matter, because once you've developed those models, they exist and they're out there anyway. And who would you trust, those closed source organisations? But I don't think it matters what I think or what you think. The reason I think this will happen is that it gets to a point where all it takes is for one thing to go wrong, and then the reaction is quick, and that reaction immediately is, oh my God, we need control of these things. And so you shut down open source models. As for your argument that you can't stop it: well, you can, because the only organisations developing those frontier models know where their bread's buttered; they know where their money comes from.

Jimmy Rhodes:

If they're told not to release them, then they won't release them. Apart from Musk, who'll just do what he wants, which is why I said he'll release an absolute beast of a frontier model anyway. And okay, so maybe you're right in the short term, but the algorithms behind these models are going to get better and better, the cost of training them is going to come down, and suppressing open source models is going to get more and more difficult. Because, yes, okay, right now it costs $100 million to train a model and it takes six months and all the rest of it. But this stuff is coming. This stuff is getting cheaper and easier and more efficient exponentially, it really is.

Jimmy Rhodes:

And so, even if open source gets suppressed to a certain extent by governments around the world in around a year's time, I don't think that'll last very long. If you look at things like Llama 3.1 8B, which I've been using recently, I don't know exactly how it compares to GPT-3, but I would guess it's on par, if not better. I think it compares with 3.5.

Matt Cartwright:

Yeah, I think they compare with 3.5. I think that's where...

Jimmy Rhodes:

That's basically where those smaller models are, yeah. And this is it: you're talking about a model that you can run on your laptop at home, or your computer at home. Okay, I get your point, but maybe I've not explained this well. Let's talk in terms of GPT-4.5, because it's the easiest way to explain it.

Matt Cartwright:

If you've got a GPT-5 class model that is closed source, right, and then you release the 4.5 as open source: I think the point is that the frontier model needs to be closed source. So once you've got GPT-6, then a GPT-5 class model can be released as open source, because the idea is that if we need an AI that can control the other AIs, we've got one under control. That's how I think it works. And you've got to remember as well, like we've talked about many times, who are the people running the Senate in the US? Who are the people running the Communist Party in China? These are old people who don't necessarily understand the technology, and I just think the response will be very reactionary, and there will be a lot of fear around things being out there as open source, whether that's right or wrong. But that's my prediction, and we'll see where we are in a year's time, mate.

Jimmy Rhodes:

On this point, I am firmly on Elon Musk's side. That's the hot take.

Matt Cartwright:

It doesn't matter whose side you're on. Am I going to be right or wrong? I think you're going to be wrong. Should I do another one? Go ahead. So, within a year, here's a more positive one.

Matt Cartwright:

Within a year, I think that AI will discover at least one additional antibiotic, and it will also allow you to screen for a number of common diseases, I don't know which ones, by simply analysing a photo of your mouth, as in the inside of your mouth, your eyes or your skin. By when? Within a year, so August the 22nd, 2025.

Jimmy Rhodes:

Yeah, I'm not going to disagree with everything you say. I'll go with that. It makes a boring podcast if you agree with me. It's not that I've agreed with you on everything; this is the first thing I've agreed with you on so far, and we're an hour in. Right, so by the end of 2025.

Matt Cartwright:

Do you want to go first, or should I go first again?

Jimmy Rhodes:

No, I'm happy to go first. So, my predictions. I think this one was going to be for a year's time, but I'll say by the end of 2025, and it's about agentic models. I was going to say that in a year's time we'll see the first agentic models genuinely start to do, effectively, a whole person's job, a whole person's role. So, for example, a coder: they'll actually be able to do all of that job, and basically you'll just be able to give them direction. They'll go away, they'll complete the task, and they'll come back, and you'll have your website, you'll have your app, you'll have whatever it is you need coded. I don't think it will be super advanced within a year's time.

Jimmy Rhodes:

But by the end of 2025, honestly, I think it will be relatively commonplace, I genuinely do. And also, by the end of 2025, going along with the jobs theme, I think we will see robots, such as Figure 01 or Figure 02 or Figure 03, whatever version it is by then, and the latest generation of the Tesla bot and whatever else, starting to do physical tasks in the workplace. Actually, commonplace is probably the wrong word for robots, but I think agentic AI will be starting to become commonplace by the end of 2025, and I think you'll start to see robots in the workplace actually replacing people by the end of 2025. And that's pretty shocking.

Matt Cartwright:

So, boringly, I completely agree with you on agentic models. I think you could have been more ambitious and said a year; I think you've hedged a bit by saying the end of 2025. No, I said we'll start to see the first ones. We'll put your money where your mouth is.

Jimmy Rhodes:

Yeah. So, within a year, we'll start to see the first ones; we'll start to see them happen.

Matt Cartwright:

And I think they'll become commonplace by the end of 2025. Okay, on the robots thing, because we didn't have time to talk about it earlier: robots were an example. So I was away this weekend in a place called Penglai, which is a pretty small place, an area of a city really, but it's a tourist resort. It's probably a million people, so by the standards of most other countries it would be quite big, but in Chinese terms quite a small place. And we saw three robots on that trip.

Matt Cartwright:

My son, who is two, already knows the word for robot in Chinese and English. In fact, a couple of days ago he saw a water cooler and thought it was a robot, because he's more familiar with robots than he is with water coolers. So in the visitor centre of a vineyard we saw a robot, and he went over to it and said jiqiren, which is robot in Chinese, and then, daddy, daddy, robot. We then saw one in a hotel reception doing the cleaning. And then this was the fun one that I sent to you.

Matt Cartwright:

We ordered a drink, like a yoghurt drink, to be delivered as a takeaway to a hotel, and when the delivery driver called us he asked what room number we were in. We said, oh, you're going to deliver it to the room? And he said, no, I deliver it to reception, and they put it in the robot and give it the room number. To be fair, it took a long time, I was pretty disappointed because it was another 15 minutes, but then the robot arrived at the door. You open the door, you press the button, its doors open, you get the yoghurt out, and then the robot goes back downstairs. I just thought it was quite a cool story of how we met three robots doing, not a full job each, but all things that people would have done.

Jimmy Rhodes:

You sent me a picture of the robot that delivered your package, and I just realised what it reminded me of. I'm going to bring back another 80s reference here, but it reminded me of the robot that David Freeman hid inside in Flight of the Navigator. Yes, in 1986. Bagpuss... Flight of the Navigator was going to be one of my other 80s references.

Jimmy Rhodes:

That delivery robot was exactly like the robot in Flight of the Navigator that he hid inside so that he could get out and get back into the spaceship.

Matt Cartwright:

Yeah, I might embed a picture in the notes of this episode, if I can be bothered to do it. I don't think it's the sort of robot that's going to take over the world, put it that way. Well, I don't know, maybe it will, and the fact that you think it won't is exactly what it will use to do it. So what was your prediction anyway? My prediction was... was it my prediction or your prediction?

Jimmy Rhodes:

I was talking about robots and then.

Matt Cartwright:

Yeah, I didn't make a prediction, I just talked about robots. So my prediction, which I haven't done yet, and this is for the end of 2025, right? Yes. So I've got two. The first one is that an AI model, whether that's a large language model or whether we have a different architecture by then, will make a decision or carry out an action which causes financial losses into the tens of billions of US dollars. It could be a cyber outage, it could be a rogue algorithm, it could just be down to misuse, but it then results in the beginning of a period of fear and suspicion of frontier AI models. So I'll stick to my pessimistic theme.

Jimmy Rhodes:

The problem I have with this prediction is that didn't this already happen? Like wasn't this? Like Black Monday or something?

Matt Cartwright:

Well, that wasn't an AI-led decision, was it? Or an AI-led action?

Jimmy Rhodes:

Yeah, I guess not. It was some sort of algorithm, I believe.

Matt Cartwright:

Well, in that case, I've won with my prediction because it's already happened. Nice one. Right, moving on, next one.

Jimmy Rhodes:

I've got another one. Like I said, Nostradamus. By the end of 2025.

Matt Cartwright:

So, what, 16 months away? Text-to-video apps like Sora will still be a disappointment, and they will still be lagging behind more practical uses, which hopefully by then will have taken over the focus of AI from novelty applications.

Jimmy Rhodes:

I don't know if I'm completely opposed to this, but I actually think it'll kind of be the opposite. I don't know if it'll be there by the end of 2025, but I think film studios are going to start using AI to assist them with generating certain kinds of effects. And yeah, okay, I'll put my money where my mouth is, as you said: I think by the end of 2025 we're going to see the first film, TV show, whatever it is, probably a film, I guess, with AI-generated VFX in it.

Matt Cartwright:

I actually don't disagree with that, because I don't think it contradicts my prediction; I think that's highly possible. My point, and it was a bit tongue-in-cheek, is that I'm sort of mocking the fact that it's taken so long for Sora. We talked about it in our second episode and then it just disappeared, and we talked about how Dream Machine is basically unusable. My point is not that it won't happen or won't be available. My point is that the quality of it will still not be up to the standard that we expected.

Matt Cartwright:

We were talking about being able to make your own feature film by the end of this year; that's kind of what it felt like at that point. My point is that you might be able to produce it, and maybe people don't care about the quality of content, because that's one of the trends we've seen: people just accept a lower quality of content and don't really care so much. Or not all people; some people won't care, and so there is a market for that. What I think is that the progress in quality is not going to happen in the way we expect, so you may well have a film by the middle of next year, but people will be disappointed. It will not be up to the standard that we thought it was going to be, and I don't know why, but text-to-video feels like it's really dragging behind.

Matt Cartwright:

Yeah, and do you know what, I think something's not working with the architecture for text-to-video.

Jimmy Rhodes:

Well, I think I have an idea what the answer is here, which is that when you, as a visual effects designer, as an artist, whatever, are coming up with a scene, you have a very clear picture in your mind of what you want. And the problem with stuff like Sora is that all you can do is describe a scene and then see what the AI model produces, and it might produce something that looks beautiful, but it's actually not what you had in your head. It's the same thing with AI images as well.

Jimmy Rhodes:

You can have a really clear picture in your head of what you want, and you can't make that, because you have to give it a prompt, you have to talk to it, and you have to try and coax it into giving you what you want. It's kind of like a poor interface, really, because you can't take the picture out of your head and put it into the AI.

Matt Cartwright:

It's a bit like when you've read a book, and you really like the book, and the film comes out and you're like, oh, it's not how I imagined it to be. Whereas if you watch the film first and then read the book, you read the book imagining the characters from the film.

Jimmy Rhodes:

Yeah, and I feel like with a lot of these text-to-song, text-to-image tools, all this kind of creative field, part of the problem is that it's really difficult to get it to do exactly what you want, because you've got a clear idea in your head. When I'm trying to create a song, I have to go through so many iterations, and I'm not changing the prompt, I'm just getting it to regenerate and regenerate and regenerate. I would imagine it's the same problem with video, and I think this is why some of the models now are allowing you to put an initial image in, or an initial video in, and then generate from that, because otherwise you can't describe what you actually really want in words. That's where it's, like I say, a kind of poor interface.

Matt Cartwright:

I'm not sure I understand what... well, that was Siri, so there's an example of a piece of rubbish AI that still doesn't work. No, I do understand. I agree with you, and I was going to say, well, you're contradicting your own prediction, but you're not, because I think we're sort of agreeing that there are two different things at play here, aren't there? There is the ability to create something, and I'm sure the technology is not far away from being able to take a prompt and create a piece of content.

Matt Cartwright:

But the question, and I think this is a really interesting one that we're not going to explore in detail here, but it plays into this: is it actually going to replace film and TV at the highest level? Probably not. But is it going to replace some of the rubbishy cartoons that you see and some of the really simple teen fiction kind of stories that don't require a high level? Probably it can. All of Disney, then? Well, Disney's good, they do some good stuff. Which Disney? They used to. Disney Plus, whatever it's called. Right, let's do the big one.

Jimmy Rhodes:

So, by the end of 2026. Who's going to go first? Terminator 2. No, in all seriousness, I've had a good think about this. On a societal level, I genuinely think that by the end of 2026 we're going to see massive upheaval in terms of AI replacing jobs, or, if we're toeing the line, seriously augmenting jobs. It's going to start to have a real impact, and I think we'll start to see societal problems off the back of it, unfortunately. But hopefully, on the positive side, we will also start to see some experiments with UBI and some positive changes, where we recognise that AI is here to stay and that it's going to, at the very least, seriously augment what we're doing, if not possibly replace significant sectors in terms of jobs.

Jimmy Rhodes:

I think that by that point, if you look at some of the advances in robotics we're already seeing, and some of the things that NVIDIA are doing around simulating worlds and allowing robots to test themselves in those simulated worlds and all the rest of it, then by the end of 2026, maybe it's not commonplace, but people are going to start having robots in their homes. We're getting into I, Robot territory now, but people are going to have robots in their homes, and you're going to have a robot where maybe you can, I mean, we already have robot hoovers, so maybe by the end of 2026...

Matt Cartwright:

Robot sex workers.

Jimmy Rhodes:

Not sex workers, no. I wasn't going down that particular...

Matt Cartwright:

That's not my prediction. Avenue?

Jimmy Rhodes:

No, I wasn't going down that particular avenue, so to speak. When I think of someone in my home, I immediately think of a sex worker, apparently. Jesus.

Matt Cartwright:

I'm sorry, in your home, not my home. My home doesn't have any.

Jimmy Rhodes:

Hopefully people in your home don't listen to the podcast. So, no, genuinely, I think by the end of 2026, maybe we'll have... there's already a $19,000 robot that's potentially going to be commercially available fairly soon. So by the end of 2026, yeah, I reckon we'll have a robot where you can pop a message in your phone saying, I'm going to be home by 6 pm, have my dinner ready, and maybe we'll have a robot that can prepare our dinner, wash our dishes, give us a little scrub down in the shower. So we're not far off my sex worker robot, then? No, no, we're not far off.

Matt Cartwright:

Yes, we're going to have... okay, I'm not sure how many people have made it this far; we're an hour and 21 minutes in now. We always try to make an episode under an hour, but we just have so much fun. So here's mine, and then we'll finish with this. It's actually quite a nice segue from yours, because you talked about upheaval in society. I mean, everything is so great these days that I'm sure we'll be fine with a bit more upheaval in society.

Matt Cartwright:

But my prediction is that by the end of 2026 there will be full-scale mass protests in most of the highly developed economies where AI has really started to take jobs. I'm talking here about hundreds of thousands of people, if not more, perhaps millions, across countries, attacking the headquarters of big AI firms and of companies that are not AI companies but are perceived to have been the most innovative in adopting AI to replace people. A linked prediction is that many countries then rush to put in place hastily developed policies restricting the replacement of people, because they haven't got their act together on future social models. So we see an attempt to mitigate the problem, but what we actually get is a short-term fix, and I think we still don't really resolve the issues. But the main prediction is about mass protests and rioting, and, yeah, I think that's inevitable in that period, if what you say comes to fruition.

Matt Cartwright:

I think that kind of unrest is necessary. I'm not going to say it's a good thing to have people out on the streets burning buildings down, but I think it's going to need that level of societal backlash for the kind of governance and structures to be put in place that allow us to have a decent future with this. So I think it's an inevitability. It sounds shocking, but these things have to happen on the route to adapting to and adopting what is not just a new technology but a whole new way of organising society.

Jimmy Rhodes:

Yeah. So I'm going to coin a phrase right now, because I kind of agree with you. I don't know what the extent will be, I don't know how bad it will be, and I hope it will be fairly peaceful, but I'm going to coin the phrase.

Matt Cartwright:

We'll be fine in China, cause there'll be no protests here. Yeah, we won't know about them.

Jimmy Rhodes:

Anyway, well, if we're in China at that point. But I don't know. And is this you copyrighting it? Copyrighting? I'm copying your phrase now, yeah. And I'm also inviting you all to join our AI-mish community, which we're going to set up sometime around mid-2025, early 2026. Sounds like the perfect way to end the podcast.

Matt Cartwright:

So, as usual, we will play you out with our song this week. Of course, it's entitled Burn the Bots. Keep listening, subscribe, pass it on to three friends, and thank you to everybody who supports the show and gives us comments and feedback. We will see you all next week. Get up, fight, justice will be done. Sam Altman is the best friend of Mars.

Speaker 4:

They said it was progress, a new era was born, but now we are left with nothing, without joy. The streets are full of anger and fear, while the robots take away our courage. Burn the bots, burn the chats, chaos in the street, we're going to sink. Get up, fight, let justice be done. Sam Altman is the best friend of Mars. They said it was progress, a new era was born, but now we are left with nothing, without joy. The streets are full of rage and fear, while the robots take away our courage.

Speaker 4:

Disguised as a human, a traitor to humanity, with scales on his hand. Burn them, burn them now, chaos in the streets, we're going to sink. Get up, fight, let justice be done. Sam Altman is the best friend of Mars. The flames will rise, the people will shout, the human free and the robot destroyed.
