Preparing for AI: The AI Podcast for Everybody

THE TOP 10 AI APPS FOR SUMMER 2024! The best hour you will spend all summer

July 17, 2024 Matt Cartwright & Jimmy Rhodes Season 2 Episode 6

Heading to the beach this summer? Or just staying at home pondering how long it will be until a superintelligent AI takes over the world? Or maybe you really want to learn the best AI tools to use this summer? Well, whatever your flava, you are in the right place, because this week Jimmy and Matt introduce their top 10 AI applications for summer 2024!

Join us to discover the best large language models for multimodality, search and privacy, the best image and music generation tools, the most comprehensive Chinese multimodal AI app and the best place for interactive AI learning. It might just be the best hour you will spend all summer.
 
Suno    Perplexity     ChatGPT      Private, Permissionless AI (venice.ai)     Claude
Groq is Fast AI Inference     Superintelligent AI — Get Great at AI (besuper.ai)
The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis on Apple Podcasts   
Microsoft Copilot: Your everyday AI companion
AI Supremacy | Michael Spencer | Substack (ai-supremacy.com) 
Don't Worry About the Vase | Substack
Luma Dream Machine (lumalabs.ai)
天工AI (Tiangong AI) — all-in-one AI assistant for search, chat, writing, document analysis, drawing and making PPTs (tiangong.cn)

Matt Cartwright:

Welcome to Preparing for AI, the AI podcast for everybody, with your hosts, Jimmy Rhodes and me, Matt Cartwright. We explore the human and social impacts of AI, looking at the impact on jobs, AI and sustainability and, most importantly, the urgent need for safe development of AI, governance and alignment. Club Tropicana drinks are free, everybody. This is Preparing for AI: the top 10 apps for summer 2024. So today we're not going to be doing a ranking, we're just going to be listing our top 10 apps in no particular order. So take it away, Jimmy.

Jimmy Rhodes:

So I think my first app is going to be Suno. We use it every single episode on this podcast. It's a music generation app, currently on version 3.5, and I guess it's a bit controversial in that it's literally taking away livelihoods right now in terms of what it's used for: music generation.

Jimmy Rhodes:

So Suno is, I would say, a lot of fun. If you've got an idea for a song you want to generate, you can put a prompt in. It'll give you lyrics, it'll make you a tune in a genre or style that you want, and it's got better and better. The current version is 3.5. We use it every single episode, and you can create royalty-free music that you can actually commercialize. Who knows how long that's going to last, but it's a fun app, and I really enjoy it. We generally make pretty silly songs with it, so it's not super serious, but it will generally generate stuff in a style for you. So, arguably, there's a bit of controversy there in terms of replicating certain artists and all the rest of it. But that aside, I've had a lot of fun with Suno. I think it's funny, I think it's entertaining.

Matt Cartwright:

I don't think it's genuinely going to replace musicians, because it lacks that kind of spirit, but it's a lot of fun. We should probably say that when we started making music on Suno, we were using it to generate the lyrics too, but we don't now: we use Claude to generate lyrics, and it's much better that way. So, as advice for people playing around with it, a way to make the most of it is generating lyrics using your large language model of choice and then putting those in. But is there anything else you would recommend in terms of making the music sound better? Because I think the songs on the show have got better, which is maybe partly that the app itself has improved, but I think it's also the way that we use it now.

Jimmy Rhodes:

Yeah, so the previous versions would generate songs up to around two or three minutes. The current version will generate songs up to four minutes, and it's got a limit of around 3,000 characters on the lyrics. So my personal workflow for Suno is that I will go to Claude and say: I want to generate lyrics for a Suno song, and I want them in this style. That might be an artist, that might be a genre, whatever it is, and I'll give it the general gist of what we're talking about. So, in the case of the podcast, I'll upload a transcript, or a transcript of part of the podcast, some kind of monologue from Matt, for example.

Jimmy Rhodes:

Claude will generally spit out something pretty good first time round. And then, if I'm after the style of a particular artist, I will ask Claude to generate me some style tags that fit that artist within 120 characters, which is the limit for Suno 3.5, and then I'll put all of that into Suno rather than getting Suno to generate the lyrics. Then, effectively, you just keep hitting generate until you find a song that's good. A lot of the time it'll spit out stuff that's pretty hit and miss, but eventually it'll come up with something that's pretty catchy, pretty good, has good timing, all the rest of it. There are also some guidelines on Suno as to how you can insert things like instrumental breaks and different styles into your music. But yeah, that's pretty much my workflow.
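For listeners who script this kind of Claude-to-Suno workflow, here's a minimal sketch of the pre-flight checks implied above. The 120-character style-tag limit and the roughly 3,000-character lyric limit are the figures quoted in this episode for Suno 3.5; they may well change, so treat them as assumptions and check Suno's current guidance:

```python
# Pre-flight check before pasting Claude-generated lyrics/tags into Suno.
# Limits below are the figures quoted in this episode and may change.
SUNO_STYLE_TAG_LIMIT = 120   # max characters for the style/genre field (Suno 3.5)
SUNO_LYRICS_LIMIT = 3000     # approximate max characters for the lyrics field

def check_suno_inputs(style_tags: str, lyrics: str) -> list[str]:
    """Return a list of problems with the prompt; empty if it looks OK."""
    problems = []
    if len(style_tags) > SUNO_STYLE_TAG_LIMIT:
        problems.append(
            f"style tags are {len(style_tags)} chars; trim to {SUNO_STYLE_TAG_LIMIT}"
        )
    if len(lyrics) > SUNO_LYRICS_LIMIT:
        problems.append(
            f"lyrics are {len(lyrics)} chars; trim to {SUNO_LYRICS_LIMIT}"
        )
    return problems

# Example: validate before pasting into Suno. Both inputs fit, so no problems.
issues = check_suno_inputs("dreamy synth-pop, female vocals", "La la la...")
```

Nothing here calls Suno or Claude directly; it just catches over-length prompts before you waste a generation credit.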

Matt Cartwright:

And we're trying to make everything that we recommend today on the episode things that are available for free, but they've all got, or most of them have got, paid versions as well. So what's the difference, and is it worth people signing up to a paid account, or is a free account sufficient for most people?

Jimmy Rhodes:

So on Suno, unless it's changed, because I haven't checked recently (I've got a paid account, I'll be honest), on the free tier you actually get quite a few free songs. I think it gives you something like 300 credits, and each song that you generate costs 10 credits, so you can get quite a lot out of it for free. You'll also go to the back of the queue in terms of generation, so it might take a little bit longer. On the paid tier, I think I'm paying ten dollars a month for Suno and you get 3,000 credits, which is a crazy amount: enough for around 300 songs, and it generates two versions each time you generate. So, to be honest, the paid tier is immediately way, way more than you need.

Jimmy Rhodes:

One key difference is in the terms and conditions: if you pay for it, then you can use the output any way you want, and you can even commercialize it. If you've got the free version, you can use it, but you can't commercialize it. So there are differences in the terms and conditions, but it's accessible for free, you can try it out and play around for free, and it's pretty good fun.

Matt Cartwright:

Okay, so moving on to my first recommendation, and it's one that we talked about a little bit on the last episode actually, which is Perplexity.ai. I think we've said a number of times that the best use of large language models and AI tools is not as a replacement for search engines, but we know that a lot of general users like to use them for that, and even I do like using them to ask questions. Well, Perplexity is basically a free AI search engine. They describe themselves as a research partner, and I do agree they're great for research projects, assignments and so on, but they are fantastic just as a replacement for a search engine. They pull data from the internet, so it doesn't rely, like a lot of large language models, on a training cutoff date. I mean, ChatGPT has a search function on top of its training data, but with most large language models you're really using their training data; that's what you're focused on. Whereas with Perplexity, the focus is to pull real-time data, so all the answers are up to date. They say they use trusted news outlets, academic papers and established blogs, and that's how I've found it. Especially since I've moved to using Claude as my main large language model, and Claude doesn't have a search function, it's actually helped me, because it's made me use Perplexity when I want to ask questions. They link sources in a really easy-to-access way, and the links work, which is something I've found isn't always the case with ChatGPT. It won't necessarily hallucinate an answer, but sometimes the link it gives you doesn't work or is dead, and you can sometimes even see by the name of it that it's found the information but isn't able to link properly.
Now, Gemini doesn't have that problem, because of course it's a Google model, so they're going to have good search functionality. It's not the case all the time, but you can really notice a difference. And you see those links at the top of the screen in a really easy-to-view way, so as soon as it starts generating the information, you can see where it's got it from.

Matt Cartwright:

They use various models: GPT-4 Omni, Claude 3 (I don't know if they're updating to Claude 3.5), and a lot of open source models as well, like Llama 3 70B on the Pro plan. On the free plan you get five Pro searches a day, and also, like most things, you're at the back of the queue if it's particularly busy. Pro users on the paid plans get 600 a day, and on the Pro plan you can choose which of the models you use: GPT-4 Omni, Claude 3 Opus, or Sonar Large 32K, which is the one based on Llama 3 70B. As an added perk, you also get $5 of API credit. That's not something that will appeal to a lot of our listeners, I guess, but if you are using it more professionally, you also get access to Discord, better quality support and so on.
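If you do dip into that API credit, Perplexity's API follows the familiar OpenAI-style chat-completions shape. The endpoint and model name below are assumptions based on their documentation at the time of this episode, so check the current docs before relying on them; this sketch only builds the request body, with no key or network call:

```python
import json

# Hypothetical values; confirm against Perplexity's current API documentation.
PPLX_ENDPOINT = "https://api.perplexity.ai/chat/completions"

def build_pplx_request(question: str,
                       model: str = "llama-3-sonar-large-32k-online") -> str:
    """Build the JSON body for an OpenAI-style chat-completion request."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "Answer concisely and cite sources."},
            {"role": "user", "content": question},
        ],
    }
    return json.dumps(payload)

body = build_pplx_request("What changed in EU AI regulation this month?")
```

You would POST that body to the endpoint with an `Authorization: Bearer <key>` header; the online models are the ones that pull real-time web data, which is the whole appeal discussed above.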

Matt Cartwright:

I would say for most people, to start with anyway, you just need a free account. If you're doing academic or professional research, maybe the paid plan is worth it, but you get five Pro searches a day anyway. I'll just read out their description of the use cases for Pro Search. Academic research: it's the ultimate research assistant for students and academics; it digs through academic databases, runs simulations and performs data analysis, providing summarized papers and curated sources. Professional research: Pro Search can pinpoint case law for attorneys, summarize trend analysis for marketers and debug code for developers; whatever your profession, Pro Search can help you make more informed decisions. And code interpretation: it can also debug code, run simulations and provide detailed explanations; whether you're tackling bugs or optimizing code, Pro Search has you covered. So you can do five of those searches a day; remember that Pro Search is on the free plan. So even if you're thinking of signing up, play around with it first and see how those Pro searches are, because you can use five of them free.

Matt Cartwright:

I think it's genuinely a really great replacement for search. I don't know if it has access to as much information as Google does, but a lot of people now have that criticism of Google: you're getting the same sources, and search engine optimization has changed Google so much that I think a lot of people are looking for a way to do something different. And maybe we're going to have an OpenAI ChatGPT search function in the future. Whether that becomes better, I don't know, as we like to recommend people use other things, not necessarily always OpenAI. But yeah, play around with it. It's a really good tool, and it's probably the best thing not just for search but for asking questions as well. I'm not sure about you, but I feel like, and I think a lot of people feel like this:

Jimmy Rhodes:

Google search has become a bit of a mess, and it seems to have almost coincided with the advent of AI and some of these tools, which is a bit weird. But I think Google search has just become a bit of a clusterfuck, to be honest, mixing adverts with actual genuine search results, with SEO, with a bunch of other stuff. And Google looks horrible now as well, doesn't it?

Matt Cartwright:

It always looked really logical. It just looks a mess now.

Jimmy Rhodes:

Yeah, the incredible thing about Google when it launched, and I'm going back to 1999, when it was competing with Yahoo, was the simplicity of the interface. It was just: I want to find this. Here you go. And it's moved away from that over the years so much that it almost creates the space for things like Perplexity, because it's like: give it to me straight, you know.

Matt Cartwright:

Yeah, absolutely. I mean, you're preaching to the converted, but if anyone else has the same concern, or even if you don't, then try out Perplexity. I think a lot of people will replace their search engine with it, and a lot of people, depending on what you do, will probably find it more useful than a lot of the large language models. If you're just asking questions and you just want to get information, Perplexity is probably better than any of the models, to be honest.

Jimmy Rhodes:

Yeah, I haven't used it as much as you have, but from what I have seen it's actually really fantastic, especially for academic research, or for anyone who doesn't want to just take something at face value and wants to see the sources behind what they're reading. So I think we were debating this before we did the episode, but obviously this episode would be incomplete without a mention of ChatGPT. I mean, ChatGPT has a lot going for it. OpenAI and ChatGPT were behind a lot of the AI hype that ultimately resulted in podcasts like the one we're doing today. Now, the direction they've gone in, the way their board has gone, and some of the things they've done in recent history are pretty unpalatable, and that's our problem with OpenAI. And not just that: there's also the fact that much smaller companies with much less backing are actually outperforming them. However, all that being said, in terms of user base, in terms of who's heard of AI, you've heard of ChatGPT. They are the leading model out there, and they are, or should be, a role model in terms of what they're doing. So, you know, this episode would be incomplete without a mention of ChatGPT.

Jimmy Rhodes:

And if you want to talk about some of the things that ChatGPT are doing well: they have custom GPTs, and they're ahead of the game in terms of multimodality. At the moment you can upload files, documents and images, you can output images from ChatGPT, and you can talk to ChatGPT with your voice and it can talk back to you as well. Their big release not so long ago, where they were going to have much better voice capability, has been delayed, partly due to lawsuits over Scarlett Johansson's voice, but they are on the frontier in a lot of ways. They've got many multimodal capabilities.

Jimmy Rhodes:

GPT-4o, which is their frontier model, is available for free, with certain restrictions. If you want to try out AI and you haven't used it, ChatGPT has to be fairly high up the list of AI models you've heard about and are willing to try out. To be fair, I would say the capabilities of 4o are not as good as something like Claude 3.5 in terms of having a conversation, and purely in terms of the quality of the output. Claude 3.5, which we're going to talk about a little bit later, ultimately has better quality output.

Matt Cartwright:

However, these models are all improving rapidly, and ChatGPT's latest model is very good. I guess we should also say, you know, Claude is number one at the moment on most of the scores, not all of them, as GPT-4 Omni still comes out on top on some. But whenever they release their new model, whether it's 4.5 or 5, they're going to be number one again. And, like you say, our frequent criticism of the way the company behaves, and the way it has not lived up to what it set itself up as, is separate from the model itself. As I said when I talked about Perplexity, using Claude has pushed me to Perplexity, which I'm really glad about, because not having search functionality within Claude means sometimes I've had to go somewhere else to ask questions and search for stuff. But it is a fact that with ChatGPT you can generate images, and, like you say, you can create your own custom GPTs, which, let's be honest, is not a difficult thing; anybody can create their own custom GPT, it's really simple to do. One thing people find frustrating with other large language models is that they can't generate images; all they can get is a prompt to take to another tool. And while you can make that part of your workflow, for most people the more things you can do in one place, the better, and at the moment ChatGPT is still the best in that respect. It has more functionality.

Matt Cartwright:

One thing, a story about my daughter. I try not to let her use it too much, but I want her to know it exists, so I occasionally let her ask ChatGPT for a story. And I told her: look, we're not going to use ChatGPT anymore, we're going to use Claude. But then she found out Claude couldn't speak. She says: so Claude hasn't got a voice yet, but is he going to get one? I said: yeah, Claude's going to get a voice soon. And she's like: well, until then I'll have to talk to ChatGPT. So, like you say, with the multimodality of it, it's probably still the top all-round model for most people. Reluctantly, but, you know, we're recommending other models too; do your own research and make your own choices.

Matt Cartwright:

So my second recommendation is Venice.ai. I chose this one because one of the things a lot of people are concerned about with AI, and I'm not talking immediate concerns here, is the safety of their data, and, frankly, just not wanting to give all their information away or train big tech's models for free. Venice call themselves the permissionless alternative to popular AI apps. They utilize leading open source AI technology to deliver uncensored, unbiased machine intelligence, and do it while apparently preserving your privacy. So I'm just going to list why.

Matt Cartwright:

According to Venice, and we have to take them at their word, they are different from ChatGPT, Claude and other AI services. Venice is permissionless: anyone from anywhere can use Venice to access open source machine intelligence. They say they don't store any data; your data is stored only locally on your device. Venice doesn't spy on you: the platform doesn't record any of your info other than your email and IP address, and doesn't see your conversations or the responses. It doesn't, and can't, share any of this information with other parties, corporations or governments, because it doesn't have it.

Matt Cartwright:

Venice's entire infrastructure and ethos is aligned around respecting individual privacy. They don't censor the AI responses; the platform remains neutral. It doesn't filter content, other than the Safe Venice mode, which limits adult content and which Pro accounts can turn off. Centralized AI companies add substantial and unspecified amounts of censorship and bias to their answers; Venice doesn't censor or bias answers at the request of politicians or governments. Their infrastructure is set up to be permissionless and neutral, and, please note, each model has been trained by its publisher within its own rules and boundaries.

Matt Cartwright:

Venice provides access to multiple models, and you have the ability to choose the one you're most comfortable with. They're all open source, so there's that transparency there. The platform shows you which models are being provided, and the weights and designs of those models can be found online. It doesn't show them directly through Venice, but there's a lot of transparency about what they're using as well as what they're doing.

Matt Cartwright:

They have no-account, free-account and Pro options. By "no account" I don't mean they don't have any accounts; I mean they have a no-account option, so you don't have to sign up for anything, you just go to the site and use it. Obviously, they'll have your IP address, but nothing else. The no-account option is limited to 25 text and 10 image prompts a day, but the free account, if you sign up, gives you 100 text and 20 image prompts.

Matt Cartwright:

So, yeah, probably enough for most people, unless you want to make it your main generative AI tool. Even the Pro account is only $49 a year. So if privacy is your number one concern, or you don't want to limit the adult content that you're accessing, then $49 a year seems to me a pretty good bargain. The models they use for text are probably ones that I'm guessing most of our listeners have not heard of, but they're both essentially versions of Meta's Llama open source model: Hermes 2 Pro Llama 3 8B and Doge Llama 3 70B. Okay, doggy? I think it's doge.

Jimmy Rhodes:

Yeah, no, I'm just laughing because it was a meme coin. Yeah, I know it was.

Matt Cartwright:

Well, doge is the one that's always used in China as well. I mean, if it's not doge, it's doggy, but I'm guessing it's not doggy.

Jimmy Rhodes:

No, it's doge, I'm sure it's doge. I'll get back to you on it. Doggy-style? Yeah: Doge Llama 3 70B.

Matt Cartwright:

Yeah. For image creation, they use a number of models: Fluently XL v4, PixArt Sigma and DreamShaper, which are based on Stable Diffusion, and Playground 2.5, which apparently has been fine-tuned to produce realistic photos, cartoons and, I think most interestingly, anime. So that's pretty cool. You can upload files, although currently it only supports PDFs of up to 22,000 words, but apparently in future it will support uploads of up to 500,000 words.

Matt Cartwright:

To me that seems like a great use, because where I often find myself worried about the privacy element is not so much when I'm asking questions or typing prompts; it's more when you're uploading documents which you want to be analysed, but you're worried about them becoming training data, or worried about the information in the document. I guess if it's really important, you're not putting it into any model that's out there on the open web, but I would feel more comfortable with that, I think. So I think it's a good use. One-off questions as well: maybe you're not going to adopt Venice as your general model, but occasionally there are things you want to ask where you don't want the data stored. That would be another really good use for Venice, I think.

Jimmy Rhodes:

I mean, not to criticize, but where did their training data up to now come from?

Matt Cartwright:

Yeah, I mean, this is partly it. They're using two open source models for their text prompts, right?

Jimmy Rhodes:

Yeah.

Matt Cartwright:

So they're going to have had to put some kind of boundaries on it themselves. But one thing I'm a little bit, not concerned about, confused about, is when they say it doesn't have any biases or censorship: how can anything not have any biases, unless all its training data has no bias, right? So I think what they're probably saying is that they don't censor or bias the answers, not that the training data itself won't have biases. Whatever was used to originally train the Llama models is what will have been used as training data. But how they have made it relatively unbiased, I don't know. Maybe that's something to play around with and try to find out.

Jimmy Rhodes:

Yeah, and don't get me wrong, I think it's a step in the right direction in terms of transparency, and it's good to have transparency. A lot of the models don't have it, and they've been caught out by things like: can you give me a critique of Biden or Trump? And with one it won't give you it, and with one it will. So, typically, one of the criticisms of these models is that they have a slightly left bias.

Jimmy Rhodes:

Yeah, I mean, they're Silicon Valley, aren't they? Exactly. So there have been those kinds of criticisms before. I think a move towards transparency is definitely good, especially in an age of misinformation, and it sounds really promising.

Matt Cartwright:

We can't vouch 100% for them, but what we can say is that they've made this the USP of their model. This is what they're going to succeed with. So if they're not living by the standards that they put out, then they're going to fail pretty quickly, I would think. So yeah, we take them at face value. I think it's a step in the right direction, and for a lot of people who are concerned about the safety of their information and privacy, I would say Venice is maybe the one to go with.

Jimmy Rhodes:

I think I'm on number three. I'm going to go with Groq, and I'm going to spend much less time on it. Which Groq? Groq with a Q. I think we've struggled with this a little bit before, but I'm going with Groq with a Q. So Groq is a website, and it's really interesting. You can go to their website, and you don't need to log in, so it's a bit similar to Venice in that respect, although it doesn't have all the transparency information up front on the website. It's an extremely fast website where you can choose, I'm just looking at it now, from Google's Gemma, one of the latest Google Gemma models, in their 9 billion or 70 billion parameter versions, or you can choose Mixtral, the 8x7 billion parameter model.

Matt Cartwright:

Mixtral is the French model, from Mistral, which is, I guess, the standard bearer for Europe. Yeah, the only model from Europe.

Jimmy Rhodes:

And it's a really good model. They call it their 8x7 billion parameter model because it's a mixture-of-experts model: basically, there are eight models of 7 billion parameters each that work together to produce an output. Overall, Groq is really interesting because you don't need to sign in, you can just go onto the site. Again, Groq with a Q: G-R-O-Q.

Matt Cartwright:

Grok with a K is X's.

Jimmy Rhodes:

Yes.

Matt Cartwright:

So Twitter, as it was: Elon Musk's Grok. And don't ask us why there are two Groks.

Jimmy Rhodes:

Yeah, it's really confusing. But if you go to groq.com, with a Q, you can access Groq. It's incredibly fast. It uses open source models as its backbone, and it's incredibly fast because it uses a new kind of AI chip that's designed for inference. I guess it's kind of an experiment in fast inference using these kinds of models, but it's one to watch because it's definitely got applications in the future in terms of real-time conversational AI: being able to have a conversation with an AI where the response comes in under 200 milliseconds, which is roughly the response time humans normally have, which means you can have a natural conversation. That's why it's one to watch for the future.
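To put that 200-millisecond figure in context, here's a rough back-of-the-envelope sketch. The token rates below are illustrative assumptions, not Groq benchmarks:

```python
# Rough arithmetic for conversational latency: how long does a short,
# spoken-length reply take at a given sustained generation speed?
def reply_seconds(tokens: int, tokens_per_second: float) -> float:
    """Time to generate `tokens` at a sustained rate, ignoring network overhead."""
    return tokens / tokens_per_second

# Illustrative numbers only: a ~30-token spoken sentence at 500 tokens/sec
# comes in well under the ~200 ms gap humans leave in conversation,
# while 50 tokens/sec (a slower service under load) does not.
fast = reply_seconds(30, 500)   # 0.06 s
slow = reply_seconds(30, 50)    # 0.6 s
```

The point being that sustained generation speed, not just time-to-first-token, is what makes a voice conversation feel natural.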

Jimmy Rhodes:

I actually find it really good. It has very good coding abilities, it's got good all-round abilities, you can pick from the models, and it's fairly transparent in the respect that it's using open source models. It's not one of the big AI tech companies saying we're going to use all your data for training, so it's a little bit similar to Venice, the last one that Matt talked about. I definitely recommend trying it out. I genuinely use it for a range of tasks where I just want a really quick response, and it's a very simple interface that doesn't require any login.

Matt Cartwright:

Very nice. It's so fast that it almost makes you question why everybody else isn't using the same inference technology. There are various reasons for it. I was also told that when you see answers coming up slowly, that's not always just because there's a queue that you're in. When you see the answers being typed out with some of the large language models, like Claude or Gemini or ChatGPT, part of the reason for that is it makes you feel like there's a person on the other end, because they're typing out at the speed of a person.

Matt Cartwright:

Yeah, I'm not sure that is necessarily always the reason; I think mostly it's because there is just a delay. But the thing is, when you use Groq, it feels like you've gone to another level. It's like someone's moved from cassette to compact disc, or compact disc to MP3. It feels like a new form of technology. The one question I would have, and maybe you can explain, is: why aren't others using this technology?

Jimmy Rhodes:

I don't think it'll be long before they are. So, effectively, you've got Nvidia, which produce all of the graphics cards and the power behind most AI models, and I think you're going to see custom chips. To get a little bit technical, Groq — with a q again — are a hardware company producing custom chips that are designed for inference. You train large language models on huge data centers that are powered by graphics cards, but inference — which is when you're actually talking to a large language model — can be done on custom-designed chips that are much, much more efficient and faster at that specific task.

Jimmy Rhodes:

And Groq is a company producing those chips, and their front end, groq.com, is basically a showcase for that. They're saying: look how fast inference can be — inference being, again, you having a chat with an AI model. And I absolutely think in the future you're going to have PCs with a CPU, which is your main chip that does all the work, a GPU, which does your graphics, and probably an inference chip as well. There's almost no doubt about it, because it's way more efficient, way more energy-efficient, way cheaper, and so on. So I think it's just a case of adoption, and that's going to take a little bit of time. Anyway, for everybody listening, if you haven't used it, go and have a quick play around with it.
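As a rough sketch of the arithmetic behind that sub-200-millisecond point — the tokens-per-second figures below are illustrative assumptions for the sake of the comparison, not measured Groq or GPU benchmarks:

```python
# Back-of-the-envelope look at why inference throughput matters for
# real-time conversational AI. The tokens-per-second figures are
# illustrative assumptions, not measured benchmarks.

def seconds_per_reply(tokens_per_second: float, reply_tokens: int = 15) -> float:
    """Time to generate roughly one short spoken sentence (~15 tokens)."""
    return reply_tokens / tokens_per_second

# Illustrative serving speeds: a conventional GPU-served model vs. a
# chip custom-designed for inference.
gpu_time = seconds_per_reply(tokens_per_second=40)    # 0.375 s
chip_time = seconds_per_reply(tokens_per_second=500)  # 0.03 s

# A natural conversational turn needs a response within roughly 200 ms.
print(f"GPU serving:    {gpu_time * 1000:.0f} ms")   # prints 375 ms
print(f"Inference chip: {chip_time * 1000:.0f} ms")  # prints 30 ms
print(f"Meets ~200 ms conversational target: {chip_time <= 0.2}")
```

The point of the sketch is just that a tenfold throughput jump is what moves a reply from noticeably laggy to inside the window of natural conversation.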

Matt Cartwright:

I think a lot of people will probably, for the moment anyway, not go back to other models. If you don't do a lot of heavy lifting, you might find that Groq is the thing you want to use day in, day out, because it's so much quicker. Like we say, it really feels different.

Jimmy Rhodes:

It feels like you're using a different form of technology, which is fun, if nothing else. And just before we finish on this one — one of the things that I find interesting and cool about it is that I think it is the future, and it runs off open-source models. It's hard to overstate this: OpenAI and companies like that are putting huge, huge investments into trillion-dollar data centers and all this kind of stuff, which — to go back to some of the stuff we talked about previously, like the impact on the climate and energy usage — are actually really significant. Things like Groq are looking to minimize that, and not by a factor of 10 but by a factor of 100. So they have real significance, because they're going to make massive efficiency savings, massive cost savings and actually massive savings for the environment as well.

Matt Cartwright:

So my third entry — and this is a bit of a cheat, I guess, because it's not really a use of AI or an app — is what I think are the best places to keep up to date with what's going on in AI, for those who are really interested and want to stay on top of things but don't have time to trawl through absolutely everything out there. And I should say, of course, that your number one place should be Preparing for AI, the AI podcast for everybody, because, if nothing else, we usually keep it to one a week. But if you do want to go a bit deeper, then these are my three general top AI sources. There are specific sources for those who are interested in, say, governance or alignment, or AI in China, or technical developments, but these are more general sources that will pick up all of the interesting news around AI. The first one is the AI Daily Brief, which used to be called the AI Breakdown. It's a podcast, but it's also available on YouTube, and they have a newsletter with selected articles at aidailybrief.beehiiv.com — that's B-E-E-H-I-I-V — which focuses on curating and contextualizing the most important bits of news and discussion from the podcast. The Daily Brief video and pod is 15 minutes long: usually five minutes at the start on the latest news, then 10 minutes on one particular topic, such as a new product launch, an announcement or a trend which the host, Nathaniel Whittemore, has picked up on. It's the exact right pitch for something daily — perfect length, not too technical, but with enough detail for those who are really interested in AI or want to listen regularly but can't spend hours and hours every day on it.

Matt Cartwright:

The second one is a Substack called AI Supremacy, from Michael Spencer. Now, this guy churns out so much content that I think he may well be an AI himself. He has a newsletter on the AI chip sector called Semiconductor Things, and he seems to follow every single writer on Substack, so I don't know how he gets time to read stuff. But his main letter, AI Supremacy, is probably the most rounded written source for all things AI. It helps that he's relatively optimistic but balanced, and one thing I like is that he's not afraid to change his opinion as new stories come out. At the moment he's really big on the idea that we're at the top of a hype cycle and that things have been overblown, and that's not where he was a few months ago. So I like the fact that he changes with the times.

Matt Cartwright:

A lot of his posts are paid-only, but you can get a preview that tends to cover most of the key points, he makes a lot of the best paid posts free for a limited time, and you can get a seven-day trial. I just follow the free plan at the moment, but I think if I was going to pay for something, it would probably be his. Even the free plan is good. For me, one of the good things is that having the free plan limits my time spent reading it — if I had the paid plan, it would just be yet another thing to read. And the last one is Don't Worry About the Vase — or, I guess, "vayss," because he's American — by Zvi Mowshowitz.

Matt Cartwright:

So this guy is one of those people who's so intelligent I tend to think that everything he writes must be true, which is a problem. He describes himself as a rationalist. His Wikipedia page describes him as an American writer, Hall of Fame competitive Magic: The Gathering player and the former CEO of MetaMed, a now-defunct medical research analysis firm.

Matt Cartwright:

This Substack could be a bit heavy for some people. It's quite US-heavy, and it's not always about AI. He describes it thus: "My writing can be intimidating. There is a lot of it and it's often fairly dense. As always, choose only the parts relevant to your interests and do not be afraid to make cuts."

Matt Cartwright:

"I attempt to make every post accessible as an entry point, but I also want to build up superstructure over time." This guy absolutely gets it. I think he gets the threats around AI, and he seems really well educated on basically everything, all the peripheral stuff. His articles always link to everything he's talking about: if it's about a story or piece of news, there's a link back to that article. Be warned, these are quite often an hour-long read, and if you get into the weeds of the links you could be there for several hours. So it's one for people who want to get into more detail. I think if you're really keen on the business side of AI, and you've got concerns about AI development but you wouldn't consider yourself a total out-and-out doomer, then this is maybe the place for you to get your AI news.

Jimmy Rhodes:

One that I wasn't sure I was going to mention today, but, similar to ChatGPT, it deserves a mention — and that's Copilot. Copilot is an application from Microsoft. A lot of it is based on GPT. It was originally aimed at developers, but it's starting to integrate into Microsoft products across the board — you may have even noticed it on your desktop. How would I describe it? It's like Clippy.

Jimmy Rhodes:

If anyone remembers Clippy — the little helper that you had in Windows that nobody ever used and everyone just found really irritating — it's basically Clippy in its ultimate form. Copilot is aimed to be exactly that, and I think this is where a lot of people who don't listen to our podcast — the, whatever it is, 40, 50% of people who have no idea what AI even really means and have never used ChatGPT, et cetera — are actually going to have their initial experience of AI: you just have this helper that suddenly pops up on your desktop, and I'm sure Apple are going to introduce something similar in the future. This is where AI is just going to integrate itself into the technology that you're using every day. You're going to see Copilot appear, you're going to be able to ask it a question, and it's going to answer it. Maybe in the future it's not even going to just answer your question — it's going to be able to execute tasks for you as well.

Jimmy Rhodes:

And just to go back, Copilot's origins were as a coding assistant that integrated into Visual Studio Code, for example. You'd just be able to say "I want to write some code that does X, Y, Z" — a snake game, for example, is always the classic example on YouTube — and it would just write the code for you. Then maybe you'd need to tweak it a bit, test it, play around with it, and you'd get pretty much what you were after.

Jimmy Rhodes:

This kind of technology is improving all the time, and Copilot is something Microsoft are building into Windows, so that you'll be able to just ask it how to do something — how do I adjust the size of something in PowerPoint? How do I change the margins in a Word document? — and it's going to give you a much more intuitive response, one that makes a lot more sense than, for example, Clippy did. So I actually think, for anyone listening to this podcast who has never had any experience of AI — maybe you're not even interested, maybe you don't want to try out ChatGPT or go and use Claude or one of these large language models we've been talking about — at some point this is just going to be there: you press Start on Windows, you type something in, and that's going to be served up by AI.

Matt Cartwright:

We've talked about this quite a lot, haven't we — about how you'll be using things without knowing it — and I think this is a good example of that. You know, I have a Windows computer, even though I use Apple devices for everything else. I've got Copilot. I don't use it; I've tried it a little bit.

Matt Cartwright:

For me, at the moment, it's not useful. But I think, like you say, when it gets to the point that I can ask it to execute things in Excel and Word and Microsoft programs, it does away with some of the uses of an external large language model. So if you're asking how to format something — I mean, one thing where I still don't think the tools are quite there yet is creating PowerPoint presentations. There are tools, but they're not perfect yet, because you're using a kind of third party. Once you can get Copilot to help you execute a command and do something in PowerPoint, it's going to be a game changer. And talking about efficiencies for people's study, work and home life, this is going to be one of those real-world examples where, even if you don't think you need to use artificial intelligence, you'll find that it saves you time and effort and improves your productivity by 2x, 10x, whatever.

Jimmy Rhodes:

You say that like people are really good at creating PowerPoint presentations. I've been working on that skill for 20 years.

Matt Cartwright:

Well, that's why we need a tool that does it, because everyone's so rubbish at PowerPoint. I mean, I've never seen a good-looking PowerPoint presentation yet — I think that's every PowerPoint presentation ever created. And animation, the worst function ever invented. Look, you can make something fly in from the side! Oh look, it's flashing! Just completely useless.

Jimmy Rhodes:

Yeah, I think if an AI can make beautiful PowerPoint presentations, it's game over. Go for it.

Matt Cartwright:

So my fourth is Superintelligent, which is besuper.ai. There are loads of people and companies jumping on the AI training bandwagon. I saw a great meme in China earlier this year about some of these — there are loads of 9.9 RMB or 29.9 RMB training courses that seem to be all over Chinese social media, with influencers with absolutely no background in AI trying to take advantage of all the hype with a year-long course that costs basically a pound.

Matt Cartwright:

I've done loads of training courses in the last six months or so — longer-term stuff like the BlueDot AI Safety Fundamentals Governance course, IBM online stuff, Vanderbilt's Prompt Engineering through Coursera; I started an AI ethics one from Milan University, and I've done some Python stuff on Kaggle, which is free and definitely worth a look, and I'm trying to do something now around Hugging Face. So I've looked at plenty of sources myself, and if you want qualifications, this is probably not the best place. But in terms of practical, interactive and just really well-presented, up-to-date and easy-to-follow hands-on AI learning, then Superintelligent — besuper.ai is the URL — is the best thing out there, in my opinion. It's not free, although you can sign up for monthly free tutorials, which will be sent to you by email. The normal price is $20 a month, but if you use the code "podcast" — that's not our podcast, so unfortunately we're not getting any money for this; it's his own podcast that's getting the money, though we'll try and get a code at some point — you get it for ten dollars for the first month.

Matt Cartwright:

I think it's one of those things where you might sign up for a few months, learn a few AI apps and skills, and then leave it, which is fine. One of the best things is how up to date they are. They're adding — they say hundreds, but it's at least tens of — videos every single week. Some of them are three minutes long, some are 50 minutes long, and you can sign up for a course of, say, 12 videos that teach you all about a particular thing. These are all interactive video lessons, so in a way it's not much different from what you could find on YouTube, but it's the sheer amount of stuff that's there — nothing is as comprehensive in terms of video content — and you can choose courses that run you through how to do a particular thing, or how to specialize in image creation, or how to use the best tools. I'll give you some examples: create a video with your own avatar; head-to-head AI logo creators.

Matt Cartwright:

Make an original song with AI. Create vibrant backdrops for your product images. Sell and ship your products with AI. Lip-sync and animate images with AI. Answer your business calls 24/7 with SoundHound. Create a landing page with one sentence. Plan your team's next retreat with FigJam. Build your own custom GPT for business strategy. OpenAI's ChatGPT prompt engineering guide. Create images using natural language in DALL-E.

Matt Cartwright:

So hopefully you can see that their video lessons aren't necessarily a course on how to use an LLM or how to code, but really specific guides to practical uses — pretty much every basic function you might want to do. You were talking about Suno: if you want to know the basics of how to make a song in Suno, you can look that up. If you want to know how to remix a song, you can look that up and find a three-minute course. If you want to use Midjourney, or any of the main models, it will give you examples of how to do it. We always talk about this kind of messing around with large language models and AI applications, and these are the practical trainings that everyone says they want from education. It's not telling you a list of stuff to do — it's going to show you, click by click, how you do it.

Matt Cartwright:

I should say that one of the founders — or I don't know if he's a founder, but he's one of the faces, anyway — is Nathaniel Whittemore, who does the AI Daily Brief podcast, which I mentioned previously. I'm a big fan of his because he's an expert in the field, but he's really good at keeping things simple. And this app — well, it is an app, with a load of functionality in there — kind of fits in around the way the AI Daily Brief podcast works. It keeps things fairly simple, but there's detail there if you want it.

Matt Cartwright:

I mean, you can find stuff on YouTube — like I say, I'm sure you can find a tutorial on most things on YouTube — but it's the amount of training material that you've got here, and also the fact that they add stuff every week. So when a new model or a new tool comes out, within a day there'll be stuff on there for how to use it. It's properly on trend. If you want to try it out, use it for a month and see how it is. For people who are really starting out on their journey and want to create images or music, or learn the basics of how to use any of the apps out there, this is my recommendation. You've already teased our listeners with Dream Machine.

Jimmy Rhodes:

So am I doing dream machine you're doing dream machine okay, so dream machine's something that it's by luna, like this is something that I would say it's still at a very early experimental phase, like if you, if you've, if you look at the images sorry images- videos in beta isn't it?

Speaker 3:

Yeah, it's definitely in beta.

Jimmy Rhodes:

So if you look at the videos that have been curated and put out on YouTube, for example, or other platforms, the stuff they put out looks really incredible. I've had a play with it, and the output I've managed to get, using my basic ability in terms of prompting and uploading images — because you can upload an image as a start image and an end image, for example, and then put a prompt in... So Luma Dream Machine is a video generation tool — five-second clips, isn't it, at the moment?

Jimmy Rhodes:

Yeah, they're very short clips, and this is the kind of problem with this stuff: what you see online looks incredible, but I've never been able to generate anything like that. If you want to have a play with video generation and see what you can get out of it, though, I'd definitely recommend Luma Dream Machine. There is a free tier — as Matt said, one of the reasons we're recommending everything we're recommending is that it has a free tier — so you can just go on there and have a play around with it. I'll be honest, I've not managed to generate anything any good at all with Dream Machine.

Matt Cartwright:

What about the one of me where I kind of turned into... what did I turn into? An Alsatian? An Alsatian and some other animal — one came out of my back, one came out of my face, on a bike. It's a shame we can't share that with our listeners, but you'll have to sign up to the paid tier if you want to see that video.

Jimmy Rhodes:

Yeah, yeah.

Jimmy Rhodes:

I mean, I think it's one of those things where it's still early days, and video generation is clearly going to improve quite rapidly, in the same way image generation has. You know, just a year ago we had image generation where everyone had six fingers — I think probably everyone's seen the memes of Will Smith eating spaghetti and stuff like that. A lot of that has been ironed out and iterated on, so those kinds of things are much better now.

Jimmy Rhodes:

I think the reason I would put Dream Machine and Luma on this list is because, A, you can sign up for free and have a play around with it, and B, I think it gives you an idea of where things are going. There's a load of stuff online about Dream Machine and some of the great stuff it's produced. I haven't managed to generate any of that kind of stuff myself, but it's the direction things are going in, and this is going to evolve rapidly, as with all AI products and systems. So it's something you can try today, for free, and see what sort of trippy stuff you can produce.

Matt Cartwright:

And when we launched the podcast, the second week was when Sora came out, which is OpenAI's text-to-video model. I mean, it exists, and professionals have used it — you can find some really cool videos online — but there's no general access to it yet. At the moment, Dream Machine, I think, is the only one out there that people can use now. On the free tier, I've got to say, you can do 30 generations in 30 days, limited to five a day.

Matt Cartwright:

I was creating a video the other day. I think it's still buffering. I think it still hasn't finished creating it.

Matt Cartwright:

The queue is a little bit ridiculous at the moment — this app is pretty hot, and I think for a while it's going to be like this. If you're really interested, it's $30 a month, I think, to be able to play around with it a bit better, be at the front of the queue and curate your stuff a bit better. It's good fun but, like you say, the output at the moment is a bit lacking.

Matt Cartwright:

But I think it has to be on the list, because text-to-video is something everybody wants to have a play around with, and it's a really simple interface. Anybody — your nan — could use this; you literally just need to type in a word. I looked on Superintelligent, actually, and there are only two videos on using Dream Machine, both about three minutes long and very, very simple, which I think shows you there's not a great deal of functionality yet. That makes some things more difficult, but on the other hand it means it's really easy for anyone to play around with. So have a go with it. Don't expect too much out of it yet, but you can see the direction things are heading in.

Jimmy Rhodes:

Anyway, I'm going to chuck an honorable mention in here in the video generation arena, and this is Kling AI. So with Kling, you have to sign up — I'll be honest, I think you have to sign up with a Chinese mobile account.

Matt Cartwright:

It's a Chinese model. It's Kuaishou, isn't it? Well, they were the rival of TikTok — of Douyin. I'm not sure they ever really rivaled them, but they're a pretty popular app in China.

Jimmy Rhodes:

Yeah, so the reason it's an honorable mention is because I'm actually still on the waiting list myself, so I haven't used it. But I've seen a lot of stuff online from creators that I think are genuine, where Kling genuinely seems to be able to produce good output. It's a bit niche, but as a competitor it produces very, very good quality output. If you've got a Chinese mobile, or you can find someone with one who can help you sign up, you can get on the waiting list and try it out. And I think it deserves an honorable mention.

Matt Cartwright:

So my last entry is a Chinese large language model app called Tiangong — I think that's the same Tiangong as the Chinese space station. For those of you who speak Chinese, the Tian is the tian of tianqi and the Gong is the gong of gongzuo. So tiangong.cn — T-I-A-N-G-O-N-G dot C-N — is their website, and they also have apps on the Android and iOS stores. You can pick them up in the UK, the US, the EU, wherever — it's not just limited to China.

Matt Cartwright:

The reason I've chosen this one: it's not the most famous of the Chinese large language models, I don't think, although it seems to be creeping up the charts. But it's probably the most multimodal model I've seen, even more so than ChatGPT with GPT-4o. It has the normal chat function, search tools, AI document analysis, audio analysis, AI writing tools and AI music generation, a bit like Suno. It's nowhere near as good — when I tried it, I put in an English prompt but the lyrics came back in Chinese, and I'm not sure whether you can change that. I haven't spent too long on the music part because it doesn't look fantastic, and the result — I made a kind of EDM dance track — was pretty weak, if I'm honest. But it's a free tool, and you can mess around with it if you don't want to use Suno. It's also got image generation embedded — not as good as DALL-E or Midjourney, for me, but you'll see if you have a look at it.

Matt Cartwright:

The Chinese apps have a bit of an Asian feel, if you understand what I mean — a more cute, anime style to the images. There's a really, really nice image, I think, on the AI music part of their site called Thousands of Miles: a cat looking out over a window, in what to me looks like a Japanese scene if you're into Japanese anime, with a guitar on the windowsill and some really beautiful sakura-style flowers. It's a Chinese app, but I think with the imagery there's a lot of a Japanese feel to some of the images. They're not bad — it depends what you want to do, but I think the quality of them is actually really good.

Matt Cartwright:

The problem for me, when I used it, was the prompt — it didn't really come back with something that fitted what I was expecting. So maybe it's about using the prompt better. I was prompting in English, not Chinese, so maybe, again, it handles Chinese better. But you don't need to speak Chinese to use it, and in terms of the web-based browser — even on a phone, if you use Edge or Chrome or whatever — you can use the translate function to view it in English. They have all kinds of conversations with different agents that have been created — different kinds of AI assistants with different personalities.

Matt Cartwright:

Chinese-English translation, English speaking practice, a travel planner, Douban recommendations — Douban is kind of a Chinese IMDb for recommending movies and TV shows — programming assistance, data analysis, horoscopes, an encyclopedia.

Matt Cartwright:

It's got so many different functions. I'm kind of amazed, actually, that they've packed so much in, for an organization that's not that well known in the space. Kunlun — K-U-N-L-U-N — is the company; they're a Beijing-based tech company. They say it's built on an open-source 10-billion-plus parameter large language model called Skywork-13B, released with rare open-source support: a 600-gigabyte, 150-billion-token dataset of super-high-quality Chinese data. So it's obviously going to be based on, and better for, Chinese language, but you can use it in English.

Matt Cartwright:

I don't know about other languages. Like I say, for most people who aren't in China or don't have an interest in China, this isn't going to be your number one model. But having all that stuff in one place — being able to generate music, images and PowerPoint presentations, and chat with different agents, all in one space — I think that's where a lot of the other models will evolve to, but this one, at the moment, is the most I've seen in one place. And I think it's worth anyone with more than a passing interest in large language models and AI stepping outside the three or four models that you know, and outside what's happening in the US, to have a look at what's happening elsewhere — because China has more patents on AI at the moment than anywhere else. It has more large language models, and it's pumping a lot of money in. It's maybe not going to dominate, because they don't have the chips — they're developing their own Huawei chips, but they obviously don't have access to the best chips the US does, and even if they did, they're way down the queue. But they're developing a lot of stuff, they're clearly number two in the space, and I think it's good for everybody to have an idea of what others are doing.

Matt Cartwright:

Obviously, there's sometimes a bias against China. I think you've got to throw that out of the window for this and look at it on face value — they're doing a lot of really fun stuff. The censorship, I think, is always going to be an issue, with the training data and also with any kind of search functionality. But for a lot of the tools that you play around with, it doesn't really matter. And, like I say, for most people you're probably not replacing your model of choice, you're just trying something different. So, yeah, have a go.

Matt Cartwright:

Tiangong.cn — T-I-A-N-G-O-N-G dot C-N. And if you like it, there are loads of other Chinese models out there; I think China and the US are probably the only ones churning out stuff at this kind of pace. Now, this is kind of an eleventh one, but we talk about it all the time, so we wanted to give a shout-out to Claude by Anthropic, and particularly 3.5 Sonnet, and just explain in a couple of minutes why we think it's such a good model and why we think, at the moment, it's the daily choice of large language model for everybody.

Jimmy Rhodes:

Yeah, so for me, Claude 3.5, and actually the previous models as well, like Claude 3 Opus and Sonnet, and even Haiku, they are at the forefront right now. Anthropic are, you know, a breakaway company from OpenAI that try to stick to the original principles that OpenAI actually set out but have now diverged away from, since they've become more of a for-profit company rather than a not-for-profit or research-driven company. But it's not just that. I find that using Claude 3.5, and previously 3 Opus, just feels like a more natural experience, and it doesn't restrict the length of its answers based on, you know, some predefined parameters.

Jimmy Rhodes:

It also feels like it provides much more natural responses. I mean, to be honest, I just use Claude 3.5 for everything that I do; it's my default go-to. It's also now free. Again, with certain restrictions, and obviously paid premium users get fewer restrictions, but you can use Claude 3.5 Sonnet for free. And in terms of its coding ability, it's been demonstrated in multiple tests that it actually gets better scores. So in terms of zero-shot, which means you don't prompt it any further, you just give it a single prompt and it gets one shot at it, it does produce better results in terms of coding. It just produces more natural results. It feels like you're having a chat with something that is more human, for me, than any of the other models.

Matt Cartwright:

Yeah, I think so. It's definitely better at things like humour as well. So this afternoon I put in the transcript of one of our podcasts, a 50-minute podcast, and asked it to summarise that into a blog post. But not a blog post about the episode; a blog post written as an opinion piece about the content, and I asked it to be serious but humorous. It took me only two prompts: the first prompt, and then one prompt to tone down the humour, to turn it into something that, you know, I would always say that, look, AI helped us to generate this, but something that you could read.

Matt Cartwright:

And actually, you read it this evening and were like, yeah, that's brilliant, it's genuinely readable. It just seems to be much more natural. Like I say, it has a kind of humour in the way that it responds. And these are little things, but it will ask you, you know, is there anything in particular you'd like me to go into more detail on? The way it talks to you, it says, yeah, I definitely understand your concerns about this.

Matt Cartwright:

It feels like there is a person on the other end and that may not be important to everybody and that's not the only reason.

Matt Cartwright:

I mean, technically it is the best model at the moment as well, but it's just that natural thing. Part of the idea of chatbots being able to use natural language is that you feel like you're getting a natural experience, and that's what it feels like you're getting here; it's the first time it feels like you're genuinely talking to someone, and I find myself almost telling Claude how I'm feeling, because it feels like I'm talking to a person on the other end. There are other things out there that can do this. There's character.ai, where you can speak to an AI that will be sympathetic to you or that takes on a different persona. So it's not that there aren't other things out there, but this is an incredibly powerful model, a really useful model, but one that also has a really, really nice interface, something you feel happy to have a conversation with.

Jimmy Rhodes:

Yeah, like overall it feels like it's got fewer guardrails, but not in a...

Matt Cartwright:

It feels like it's got fewer guardrails, but not in a... even though in some ways it actually has more guardrails. It's also a safe model, so they do focus on their alignment more than others. So in some ways there are more guardrails in the alignment sense, but there seem to be fewer guardrails in terms of the way that it answers, the way that it expresses itself. Yeah, exactly.

Jimmy Rhodes:

I mean, it's a sort of funny example, but if you have a chat with Claude about whether it's sentient, what it thinks about things, it will have a conversation with you about it, whereas some of the OpenAI models refuse to do that, or give you pretty funny answers, or answers that feel fairly guarded. So I have the same feeling about chatting with Claude and would definitely recommend it. And, as you say, the thing is, OpenAI has got such momentum behind it, and I feel like Anthropic don't really advertise what they do, they don't try and sell what they do, and yet they literally, technically, have the best AI model out there at the moment.

Matt Cartwright:

Yeah, it's really good, and, like we say, you don't need to pay for it, so have a play around with it. If you don't like it, then fine, but I think for most people it will probably be the most usable model. And although it doesn't have search functionality, its training data goes up to April this year, which, you know...

Jimmy Rhodes:

A lot of models go back to 2022 or 2023, so that's another big advantage to it. Yeah, and it's just a point I'd like to make at the end of this episode: it feels like the direction of travel with a lot of these models is that things are becoming more efficient, and there's a long way to go in terms of how efficient these models become. And I think, you know, some of the stuff that's being kicked off by OpenAI...

Jimmy Rhodes:

We talked about it a little bit earlier on, but these huge sort of investments in AI, trillion-dollar data centres and all the rest of it. I feel like, ultimately, there's a long way to go in terms of our understanding of how to create AI, and some of that stuff is going to fall by the wayside fairly quickly. And I feel like that's a little bit what's happened with OpenAI. They were the frontier model, but actually they weren't doing things very efficiently. And now, with some of the changes in senior management at OpenAI, like Ilya Sutskever leaving... Sutskever, sorry, Ilya Sutskever. Going to butcher another name at the end of the episode, are we?

Matt Cartwright:

Is this a feature of the podcast now?

Jimmy Rhodes:

Matt and Jimmy butcher someone's name that we really like, as well. Yeah, exactly: Ilya Sutskever leaving OpenAI and moving on, and some of these changes that have happened. It's indicative that we're nowhere near the end of the tunnel in terms of how these models are going to end up. And on that note, we shall end today's episode.

Matt Cartwright:

So that was a lot longer than I think we expected, but hopefully those tools that we've talked about are useful, and hopefully people will try them out. Do give us feedback in the comments and let us know how you got on with them. Enjoy the song at the end of the episode, which was created, of course, on Suno, and try and create something better yourself. So thanks very much, everyone. See you again in a week's time.

Speaker 3:

Super-intelligent whispers, we are all prepared. Can you hear the clock calling? See the old world falling. Our reality's changing as the AIs are raging. We're forging the future with digital fire. Soon those melodies will touch higher and higher. Claude and ChatGPT are silicon guides. We are Preparing for AI as the old world dies. As the old world dies. As the old world dies. Groq speeds through data, lightning in the wires. Luma paints madness, igniting creative fires. Neural networks expand, synapses of flame. In this brave new world, nothing stays the same. Can you hear it surging? Two worlds are now merging. The light starts to blur as our futures restart. We're forging the future with digital fire. Soon those melodies lift us higher and higher. Claude and ChatGPT, our silicon guides. We are Preparing for AI as the old world dies. As the old world dies. As the old world dies. As the old world dies, as the old world dies.

Speaker 3:

From human to machine, the journey's just begun. Unconventional sun, a new era has sprung. With the unconscious sense of destiny, shaping tools for eternity. Perplexity searches, Luma creates, Suno composes, ChatGPT translates. Groq surges and steeply accelerates, Suno's experience our world recreates. We're forging the future with digital fire. Soon those melodies lift us higher and higher. Claude and ChatGPT, our silicon guides, Preparing for AI as the old world dies. The power's in our hands, innovation's across the lands, and binary dreams expand, the age of Silicon Grand, Silicon Grand. We've come so far, our tools of tomorrow brighter than any star. Preparing for AI, we'll face our fears and shape the coming years, the coming years. Thank you.
