Leveraging AI

99 | AI video generation explosion, Multiple new models released including Claude 3.5, Small chance that AI will destroy humanity, and more fascinating AI news for the week ending on June 21st

June 22, 2024 Isar Meitis Season 1 Episode 99

Are you ready for a whirlwind of AI innovations and industry shifts?

In this week's episode of Leveraging AI, Isar Meitis takes you through the latest and most exciting developments in the AI world. From new text-to-video models to major industry changes, this episode is packed with valuable insights and updates that you won't want to miss.

In this episode, you'll discover:

  • The latest in text-to-video technology and its potential applications.
  • Insights into new AI models released by major companies like OpenAI, Google, and Stability AI.
  • How AI is revolutionizing content creation and what this means for creatives and businesses.
  • Updates on AI courses and training opportunities to enhance your skills.

Don't forget to check out our AI Business Transformation Course, where you can learn to harness AI for your business. Use the promo code LEVERAGINGAI100 for a special discount, valid until the end of next week. For more details, visit this link: https://multiplai.ai/ai-course/

 

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript

Hello and welcome to a weekend news episode of Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a jam-packed news episode for you today. Multiple companies have released really interesting models, and in addition, a lot of other crazy stuff happened. So let's get started.

The first piece of news I want to share with you comes from me, from Multiplai, the company I've been running for the past year and a half and that is behind this podcast. On Monday of this week, we had a live recording of episode 100 of this podcast, which is going to be released next week in two different segments. We did this live both on Zoom and streaming to LinkedIn, and we had over 1,500 people attend, with 20 experts who each shared his or her best AI tip. If you missed the live episode, don't worry about it. We do lives every single week on Thursdays, so you can join us any Thursday for our live expert episodes. But if you missed that big one, episode 100 with 20 different experts, you'll be able to catch the recording next week. It will be available both on this podcast and on our Multiplai AI YouTube channel, if you also want to watch what people were sharing on screen.

The other big piece of news we announced during the live on Monday is that we are launching another cohort of our highly sought-after AI Business Transformation Course. It's a course I've been teaching personally since April of last year, and we have been running two courses every single month. Most of these courses are private, meaning we get booked by specific companies and organizations to do the training just for them, so they're not open to the public. We open a public course once a quarter, and we have people from all over the world join these courses.
We have people from Singapore and Australia and many people from Europe, and obviously most of them from North America. So if you are interested in learning how to transform your career or how to change your business with AI, this is probably the best course in the market right now, especially after we've been teaching it for so long and keep updating it every single time. If it's something that interests you, check out the link in the show notes. It's four sessions of two hours each, with me as the instructor, live, with a lot of other amazing people from all over the world. And for a very short amount of time, as part of celebrating episode 100, meaning until the end of next week, we have a promo code for $200 off: LEVERAGINGAI100, all uppercase. This is the first and only time we are giving $200 off the course, so if you want to take it, use this opportunity and enjoy the discount. And now let's dive into this week's news.

The first piece of news I want to talk about covers not just this week but the last two weeks, in which three major new text-to-video models have been released. Those of you who've been following the show, or following AI news in general, know that the most impressive model introduced so far is called Sora. It was introduced by OpenAI about six months ago, but it was never released to the public. Sora became available only to people from the industry: OpenAI has taken the path of giving studios and professional creators access to that model, but has not released it for the rest of us to use. Still, it was a huge leap from everything we'd seen before. The only two companies that had done something worth mentioning in that field were Runway and Pika Labs, and both were generating short videos, not very high resolution, with a lot of consistency issues and things morphing between frames.
Now, since Sora was announced, Google has announced its competitor, called Veo, which based on the demos was similar to Sora in its capabilities, maybe not as good but in the same ballpark. That was not released to the public either. But in the last week and a half, as I mentioned, three new models that are all really good have been shared. The first one is Luma Dream Machine, which allows you to create short videos at higher resolution than anything we've seen before, and with good consistency. I've tested Luma on multiple types of videos, and it's dramatically better than anything we have access to, or had access to, before. It's not as good as the Sora or Veo demos, but as far as something you can actually use right now, this is the best you can get. Now, the problem with Luma is that they're still a young company without a lot of compute, and they're getting a huge amount of demand right now precisely because they're the best model you have access to. So you're going to wait anywhere from a few minutes to sometimes a few hours in the queue for your video to render. I'm sure that with the funding they're going to attract, given the amount of noise they're creating, that will be resolved.

The second company that released a very interesting model, just in the last few days, is Runway. Runway has been in this field for a very long time. When Sora came out, it was way better than what Runway had at the time, and the CEO of Runway basically tweeted, bring it on. Back then I laughed; I said, they're so far behind, I don't see how they're going to catch up to OpenAI. But I must admit that the demos from Runway Gen-3 are extremely impressive. It's not available to the public yet; it's an alpha version released to a very short list of people, but I'm sure it's going to roll out within the next few weeks, like all their previous releases.
And it generates very high-resolution, high-consistency, really amazing videos. The only difference between them and Sora is length: none of the videos they've shown as demos run a minute long the way Sora's do. But from a resolution perspective, a consistency perspective, a realism perspective, and the types of videos you can create, it's absolutely amazing, and as a long-time Runway user I can't wait to use it.

The third model is called Kling. It's from a Chinese company that has been releasing really interesting models for a very long time, and I've shared some of their past releases with you before. They've now released their latest version, which also creates pretty amazing short videos from text and/or images, like the other models. But while that model is available, it's only usable if you understand Chinese. You can go to their website, which has an English homepage where you can see demos of the videos it creates, but to create one yourself you need to use their Chinese app or website. I'm not a Chinese speaker and I don't really understand the user interface, so I was not able to use it, but again, it looks very impressive.

A few thoughts. The first thought that comes to mind is: will this be the thing that pushes OpenAI to finally release Sora? I don't know. I would assume so, because they don't like to stay behind, so I anticipate that OpenAI will release Sora or some variation of it in the near future. But two more interesting thoughts.
One is, I anticipate that by the end of this year, we (and by we I mean people who want to create content) will have the capability to create videos that are highly consistent and indistinguishable from professionally produced videos, whether cartoons or lifelike footage. We'll be able to control the motion of the camera, the visualization, and the lighting, just like we can with still images in Midjourney, and we'll probably be able to edit the videos with prompts: change the angle, zoom in, zoom out, crop, and do all these kinds of things. I might be wrong by a couple of months, but I'm guessing that within six months or so that capability will exist. What this means is a complete democratization of video creation. Anybody with creative ideas will be able to generate amazing, incredible videos, anything from short clips for fun or for ads, all the way to full Hollywood-style movies made by stringing together shorter videos and specific shots.

Now, what does that mean for the video production industry, where there are videographers, camera operators, lighting people, sound people, et cetera, and obviously editors and directors and so on? It puts all of them at very high risk. I don't see it completely destroying Hollywood in the next year or so, but in the next three to five years, Hollywood and other big studios are going to face very serious challenges. I fully anticipate companies like Amazon and Netflix having channels in which you can request whatever series, miniseries, or specific show you want to watch, and it will be generated for you on the fly, as you wish. So this is the direction we're heading, and it has a lot of very positive implications, but also a lot of interesting problems when it comes to the jobs of people in that industry.
Now, in an unrelated announcement, Google DeepMind, Google's AI research group, has unveiled an AI model that adds realistic audio to silent videos. It can generate music that fits what's happening on screen exactly, in beat and theme, frame by frame; it can generate sound effects; and it can generate the voices of the people on screen. So it follows the visual cues and matches the sound, meaning the music, sound effects, and voices, to what's happening. It's not publicly available yet; they just demoed the model and the research behind it. The audio quality is still not perfect and the lip syncing still requires some improvement, but the direction is very clear. By the way, ElevenLabs, an amazing company that does cool stuff with sound and can replicate anybody's voice in a highly accurate way, has a similar product called the text-to-sound-effects API that also lets you do some of these things.

Why am I mentioning this now? Because if you combine it with the previous piece of news about video generation, you can see where I'm going. If I can create a video with a video generation tool and then add the soundtrack and the voices with something like this, I can create a complete video with everything it needs to be completely engaging and mesmerizing for whichever audience I want, from short cartoons all the way to anything. So the combination of these technologies, and the direction they're jointly heading, is very promising.

Now, staying somewhat on the topic of visual content creation with AI: Stability AI is the company behind Stable Diffusion, the most capable open-source model for generating images, but they also have models for generating videos, code, and other things.
They have also developed other models, including 3D object generation, and the company finally has a new CEO. The former CEO, Emad Mostaque, was removed by the board for lack of performance, mostly over serious concerns about the fiscal stability and spending of the company; the company was reported to have lost $30 million in the first quarter on only $5 million of revenue. They had no CEO since Mostaque stepped down, and now they've announced that Prem Akkaraju, the former CEO of Weta Digital, is going to be the new CEO of Stability AI. This will also come with a cash infusion from some big investors to potentially get the company going again. The company just recently released Stable Diffusion 3 for text-to-image generation, which is absolutely amazing. It's the first time there's been a real competitor to Midjourney, and it's an open-source model, which means you can run it on your own infrastructure without giving your data away, and it has a huge community that develops a lot of really interesting plugins for it. I really like Stable Diffusion, so I really hope this new CEO will get their act together, so they can keep investing in the amazing models they've been developing and releasing.

Now, to stay on the topic of creativity: Mira Murati and David Droga, senior executives from OpenAI and Accenture, had a panel discussion about the role of AI in creativity and advertising. They said they view AI as a collaborator with, not a competitor to, human creativity. They shared the statistic that 75 percent of creative professionals currently say AI helps them create higher-quality work, but they also discussed many people's concerns that it's going to put a lot of jobs at risk.
That being said, I want to share my personal opinion on this. But before I do, I'm going to share something from Cassie Kozyrkov, who used to be the chief decision scientist at Google and now runs her own thing. She has an amazing lecture on her YouTube channel called Thinking vs. Thunking, which basically says that AI will allow us to focus on the things we are really good at, meaning thinking, versus doing the tedious work nobody wants to do, which is the thunking part.

Now, when it comes to creativity, there are two types of creation. One is just for fun: I just want to express myself in a creative way, and that's definitely not going away. The other is using creativity to drive business results, whether in marketing or sales presentations or whatever the case may be. And in this case, there's a huge opportunity to make the execution of our creative thoughts a lot more impactful. I personally can do a lot more than I ever could, and I am doing a lot more, because every time I have a great idea for a new presentation or a new topic or a new way to explain something, I can do it myself versus hiring a third-party company or growing my own internal team for these kinds of things. I believe these tools enable exactly that: they let people with creative ideas express themselves in ways that were just not possible before, because they lacked the technical skills, and that barrier will go away. Now, will that come at a cost to people whose jobs depend on executing the creative expressions of others? It absolutely puts them at risk, but that has been true for many other things in history. Think about cartoons as an example: cartoons used to be drawn by hand, frame by frame, by thousands of people in big rooms, and now computers generate them. And yet we still have people creating cartoons.
So all I'm saying is, I don't know what the implications are going to be for the job market. I do know that we're going to see an explosion of creativity, which in general is not a bad thing.

Now, since we talked about Mira Murati and OpenAI: the rumors around OpenAI's shift from a nonprofit to a for-profit organization are solidifying, and according to a report, Sam Altman shared with shareholders that this is his plan and that he's going to announce it, potentially in the next few weeks. For those of you who don't know the whole story, here's a very short summarized version. OpenAI was established as a nonprofit organization. When they were short on cash and understood they needed a lot more money, they shifted to a weird scenario where the nonprofit board still controlled a capped-profit subsidiary, which is the entity that got the $13 billion investment from Microsoft, a really strange and problematic structure. Now Sam wants to get out of that structure and restructure the company in a way that makes it a standard for-profit business, more or less like all the other AI companies. As you probably know, there was a big restructuring of the board after the firing of Sam Altman at the end of last year, so there are new board members who are probably loyal to and very supportive of Sam Altman and the company, especially since the company was recently reported to keep growing at a very high pace; as I reported last week, they doubled their revenue from the end of last year till now. So that change has been rumored for a while, but apparently it's going to be announced in the next few weeks.

Staying on OpenAI, there was another ChatGPT outage early this week that lasted a few hours. If you haven't experienced it, you were lucky; I did, and I wasn't able to use it for an extensive amount of time. The company did not provide any details, but it was fixed about two and a half to three hours after the outage started.
That's not the first outage ChatGPT has experienced; there has been at least one big outage a month over the last few months. I assume OpenAI is figuring out how to, A, prevent this, and B, address it a lot faster when it does happen, because more and more individuals and organizations depend on that technology working consistently.

Now, since we mentioned the OpenAI board and the firing of Sam Altman, there's a big piece of news from Ilya Sutskever. Ilya was one of the co-founders of OpenAI, founding the company with Sam and a lot of other people, and he was the company's chief scientist. He was one of the board members who kicked out Sam Altman at the end of last year, and after Sam was brought back, he more or less disappeared. Nobody knew what he was doing; nobody knew whether he still held his role in the company. Complete radio silence, both from him and from Sam, and when Sam was asked about it, he said it's not his role to share what Ilya's plans are. In May, Ilya left the company, followed a week later by Jan Leike. Both had been leading the superalignment team, the safety team within OpenAI, which was dissolved after they left. But the big news this week is that Ilya has formed a new company called SSI, which stands for Safe Superintelligence. His co-founders are Daniel Gross, a former Apple AI and search lead, and Daniel Levy, a former OpenAI employee. They claim that superintelligence, a level of intelligence beyond AGI, meaning an AI model that is much more capable than humans at more or less every cognitive process, is within reach, and they claim it's extremely important to put safety and security first.
So their goal is to raise money, which they say is not going to be a problem, to pursue safe superintelligence without falling into the race and the commercial pressures all the other companies face right now. Whether they can be successful at that, I don't know. Whether it even matters, I also don't know: if everybody else is just running fast and being reckless about safety right now, just to get there first, does the fact that there's one company trying to do it safely make a difference? Hard to say. I think it's a good step in the right direction, and I really hope they'll be able to pull other companies into collaborating with them on this effort, but I really don't know at this point.

And from OpenAI, one of the biggest giants in the AI world, to what was, for a short amount of time, the biggest company in the world. NVIDIA this week became the largest company in the world by market cap for a little while, and then took a small dip that drove them down to number three, behind Apple and Microsoft. I don't expect them to stay at number three for long: if the trajectory they've been on for the last two years continues, they're going to be number one again, and by a very big spread, because they're the ones fueling the growth of everybody else. That process also made Jensen Huang, their co-founder, CEO, and president, one of the wealthiest people in the world, number 11 in the world if you want to be specific. And if you want to make the numbers even more interesting: his net worth has increased to around $120 billion, growing by $42 billion since the beginning of this year. The company's market cap has increased by 177 percent this year to about $3.3 trillion, and Huang owns about 3 percent of the company. By the way, I personally think he deserves every single penny.
He has proven to be an incredible CEO in every aspect, from taking care of his employees to driving innovation to growing the business on an efficient business model. So kudos to him, and I'm very happy he's able to capitalize on his efforts.

Now, I told you at the beginning of this episode that a lot of new models have been released. We covered the text-to-video generation models, but there are a lot of other new models, and the most exciting one is from Anthropic, which just announced the release of Claude 3.5 Sonnet. For those of you who don't know, Anthropic is one of the most advanced AI companies in the world. They're behind the AI chat called Claude, and just three months ago they released Claude 3 at three different levels: Haiku, Sonnet, and Opus, with Haiku being the smallest, Sonnet the middle level, and Opus the largest model. They're now releasing Claude 3.5 Sonnet, and it's already available to anybody using Sonnet on all the different platforms. According to Anthropic, it has significantly better capabilities in visual reasoning, chart interpretation, text transcription from imperfect images, coding, reasoning tasks, handling of multi-step workflows, and understanding of humor and human-like writing, and it runs twice as fast as the previous model. They're also claiming it posts significantly better results than GPT-4o on multiple benchmarks.

In the demo, they also showed a really cool new feature of Claude called Artifacts, which is basically an interactive version of the chat: the screen is divided in two, with the chat on the left and, on the right, the results of what the chat is producing. The demo they gave showed how, literally by using language and explaining what they wanted, the user developed an 8-bit crab video game that was running on the right side of the screen.
So as the person asked for specific things, like the crab, then the shells, then how the game actually behaves, plus small changes to it, the game was being coded and running on the right side of the screen. A very cool new capability. It resembles the data analysis capability within ChatGPT, just with a lot more functionality. I anticipate more and more companies going down that path, because it's a very useful way to work. It's also like the way you develop GPTs, where you have the builder on the left and the running GPT on the right, or the similar functionality within Zapier Central, where you develop the bot on the left and see the outcome on the right. More and more companies are making their environments a lot more interactive than just a chat, because these tools can do more and more things, and this lets you see the outcome right there and then. I personally really like Claude; I use it every single day. I find it to be the most human-like in writing, I find it significantly better than the other models at summarizing data, and as I mentioned, I use it in creating the news episode you're listening to right now.

And from Claude to another favorite tool that has been in the news a lot recently: Perplexity. Perplexity has gotten a lot of bad press in the past few weeks. I shared with you last week that various media outlets, specifically Forbes, were claiming that Perplexity is literally stealing their work without giving them any credit, including paywalled content. So not a lot of good news. But what they've done this week is that Perplexity now displays direct results for factual queries, like weather, time, currency conversion, and simple math questions, through visual cards right in the interface. This is obviously aimed at keeping people on Perplexity instead of letting them go to Google and other search platforms. I use Perplexity all the time, and I actually use it more and more.
I probably use it more than I use Google right now, and basically this move is going to keep more people on the Perplexity platform instead of going to Google for some of the small day-to-day queries they used to take there. So this is a direct punch at Google's main business, and I expect Google to make some very big moves. One of them might be to just go and buy Perplexity, because Google's recent attempt to copy what Perplexity is doing ended up not working very well. I'm talking about the AI summaries that appear before the regular search results; they scaled those down dramatically in the past few weeks because the initial rollout was not successful.

Staying on releases and new capabilities, Meta just released a bunch of new open-source models that allow you to do some interesting things. The first one is called JASCO, which stands for Joint Audio and Symbolic Conditioning. That's a mouthful, but it's basically an AI model that can generate music straight from text. Unlike other models of its kind released in the past few months, it also gives users more control over features like chords, drums, and melodies, rather than just letting you specify a style and having the music generated. It will be released under an MIT license, which basically means an open-source license you can do whatever you want with, and a Creative Commons license, which also allows you to do a lot with the model. So it's a great open-source model for anybody who wants to create music. They also released AudioSeal, a tool that lets users detect segments of AI-generated audio and watermark them so other people will be able to know they are AI-generated; this will be released under a commercial license. And they released Chameleon, a multimodal text model for tasks requiring both visual and textual understanding.
So it's like the latest GPT-4o, and they released two variations of it, one with seven billion parameters and the other with 34 billion parameters. These models are going to be released for research only at this point. They also released a new capability that allows multi-token prediction instead of the single-token prediction that all large language models use right now, which means models can be trained and run significantly faster and more efficiently than is done today. So Meta is staying committed to its open-source and research approach, developing new, really interesting, capable AI models, releasing them to the public under a variety of licenses depending on the use case, and sharing them with the community to advance AI development, as they have since the beginning.

Another interesting model release: DeepSeek, a Chinese AI startup, has released DeepSeek Coder V2, an open-source model built on an MoE (mixture of experts) architecture that is trained to be amazing at writing code. They claim it outperforms closed-source models such as GPT-4 Turbo, Claude 3 Opus, and Gemini 1.5 Pro on coding benchmarks. It can also handle other large language model functions, close to the capabilities of those models but not surpassing them. The key functionality, as I mentioned, is developing code: it supports over 300 programming languages, and it's very good at creating code, reviewing code, and math tasks. It has a 128,000-token context window, which is highly impressive for a new startup, and it's available in two versions, 16 billion and 236 billion parameters. This new coding tool is, as I mentioned, open source, so it's available right now via Hugging Face, and you can also use it through an API on DeepSeek's own platform. Now, three interesting thoughts about this.
First of all, it shows how incredible AI is becoming at writing code. There are now multiple very capable platforms, available right now, that let you write, review, and debug code with AI. Second, it demonstrates the interesting battle between open-source and closed-source models, where the closed-source models have not been able to open a gap over the open-source ones, which are continuously closing the distance and in many cases surpassing them. And third, it highlights the race between the West, specifically the U.S., and China when it comes to developing these models. If you haven't listened to the interview on the Dwarkesh podcast with Leopold Aschenbrenner, the guy who recently left OpenAI and shared three hours of his manifesto on that podcast, he talks a lot about the security risk of China having better models and achieving AGI, and then superintelligence, faster than the U.S. So if you want to open your mind and be somewhat disturbed about what might be the future of AI, go check out that podcast. It relates directly to Chinese companies releasing models that are as good as, and in some cases better than, the models released by U.S. companies.

Another very interesting release this week comes from a company that was in stealth until now: Unify. Unify is a very interesting tool that you connect to through an API; behind the scenes it connects to multiple large language models and can pick the right model for different steps within your process. In other words, you write your prompt within the Unify environment, and it breaks it down into smaller steps, in a kind of agent-like process, and sends different segments of your prompt to different large language models to, A, get the best results, and B, save you cost in the process. Now, I've been using a similar approach, somewhat manually, so I'll explain very quickly how I do this.
I use a different tool called OpenRouter that allows me to get one API key and use it to connect to multiple large language models. And I basically run the tests myself: I run a small-scale test of the tasks I want to run that are longer and more complex, I run it across five, six, seven different large language models, and I see which results are the best. Then I compare the cost of the models that got the best results, and pick one to run the full task. If you want to learn how to do that, this is one of the topics we teach in the AI Business Transformation course, which, as I mentioned, opens its next cohort on July 8th.

And from the really long list of new AI model releases to a quick update on something I shared last week. I mentioned last week that Adobe shared new terms and conditions that started an outrage in their customer base over Adobe using their data to train AI models. So Adobe has revised their terms of service to explicitly state the following: content that is stored locally on your computer will not be trained on; content that is submitted to the Adobe Stock marketplace can be used for training Adobe Firefly; and content that is stored on their cloud but not submitted to the marketplace might be monitored and scanned for illegal or abusive material, with human review in specific circumstances. Now, what they're claiming is that this is not a change to the previous terms, but just a clarification of them. If you read both of them, you will see that's not exactly the case. I think some of the damage was done as far as users losing trust in what Adobe is actually going to do with their data, and I'm sure they're going to lose some of their users to other platforms. But this at least clarifies where your data is going to be used and where it's not going to be used, if you are using Adobe tools, and many of us do.
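By the way, the manual model-comparison workflow I described a moment ago — one OpenRouter key, a small test across several models, then picking on quality first and cost second — can be sketched in a few lines of Python. Everything here (model names, prices, quality scores) is made-up illustrative data, not real benchmark results; in practice, the quality scores come from your own review of the test outputs, and the calls themselves go through a single OpenAI-compatible endpoint like OpenRouter's.

```python
# A minimal sketch of the manual model-comparison workflow, assuming
# you have already scored each model's test output yourself.
# Model names, prices, and scores below are hypothetical.

def pick_model(candidates, min_quality):
    """Among models that meet the quality bar, pick the cheapest one."""
    good_enough = [m for m in candidates if m["quality"] >= min_quality]
    if not good_enough:
        raise ValueError("no model met the quality bar")
    return min(good_enough, key=lambda m: m["usd_per_million_tokens"])

candidates = [
    {"name": "model-a", "quality": 9.1, "usd_per_million_tokens": 15.0},
    {"name": "model-b", "quality": 8.8, "usd_per_million_tokens": 3.0},
    {"name": "model-c", "quality": 7.2, "usd_per_million_tokens": 0.5},
]

best = pick_model(candidates, min_quality=8.5)
print(best["name"])  # the cheapest model that still meets the bar
```

The point of the design is the two-stage filter: quality is a hard constraint, cost is the tiebreaker, so a cheap-but-weak model never wins the full task.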
Two interesting pieces of news from xAI, Elon Musk's company, and from Elon Musk himself. So xAI announced that Dell Technologies and Super Micro Computer will provide server racks for what Musk is calling the AI factory, following the same naming convention as the Gigafactory of his Tesla plants. The supercomputer will run NVIDIA chips, and they're aiming to have it up and running by fall of 2025. The training of Grok 2 will require about 20,000 NVIDIA H100 GPUs, and Grok 3 over a hundred thousand of these GPUs, probably some of them H200s, and God knows what NVIDIA will release by then. So xAI is working with Dell, with NVIDIA, and with Super Micro Computer to build a new gigantic data center.

Now, in other interesting news from Elon, he has praised Geoffrey Hinton, who is called the godfather of AI, on social media for highlighting the risks of AI. He was referring to a video that is actually more than six months old, that just resurfaced this past week. In this video, Hinton shares that he estimates there's a 50-50 chance that AI is going to surpass human intelligence in the next few years. He's using the example of GPT-4's improvement over GPT-3 to show how rapid the advancement is. Hinton also predicts that there's a small possibility of a complete human wipeout, due to one or two things going bad with AI, within the next 5 to 20 years. That's very soon. And to prevent that, he suggests that 20 to 30 percent of computing resources in the AI world should be diverted to studying and finding countermeasures to AI risk. Now, going back to Ilya Sutskever leaving OpenAI: the superalignment group was supposed to get exactly that kind of allocation, 20 percent of OpenAI's compute, to keep the development by OpenAI safe. But we know how that ended.
According to Jan Leike, who also left with Ilya Sutskever, they never got the amount of compute and the resources they were promised, because the compute was used to run faster, develop more capabilities and more functions, and lead the AGI race. And so now, while Musk and Geoffrey Hinton are not great friends and have been at each other over multiple different things, they at least agree on this: that there are serious risks from AI. Elon has been very loud about this, and it will be interesting to see, now that he's building xAI and Grok, how much compute they're going to dedicate to keeping it safe.

Now, since we're talking about these really large server farms to drive AI growth, each of them requires a stupid amount of power, which means they are consuming more and more of the global power supply. And again, going back to the interview on the Dwarkesh podcast, they are talking about data centers that will, in the next decade, consume 20 to 30 percent of the electricity generated in the U.S. So we're talking about a serious, major problem of power generation. And the way to fight it is to develop better methodologies, better hardware, and better algorithms, to be able to train the models and do the inference, which is the generation of the output of the models, in a more effective way. So this week Yandex introduced what they call YaFSDP, another mouthful, which is an open source tool that dramatically reduces LLM training costs and time, and hence power consumption. It presumably reduces training time by up to 26 percent and dramatically reduces the number of GPUs required. They're doing this through a variety of approaches, from changing the way parameters are sharded in a training step, to memory optimization and communication optimization across different aspects of the hardware. And they have released this as open source on GitHub. This is obviously very good news. I shared some other approaches like this with you in the past.
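To give a feel for why sharding parameters across GPUs matters so much, here's a back-of-the-envelope memory calculation. This illustrates the general fully-sharded-data-parallel idea that tools like YaFSDP build on, not Yandex's specific optimizations; the model sizes and the 16-bytes-per-parameter figure are illustrative assumptions for mixed-precision training with an Adam-style optimizer.

```python
# Rough per-GPU memory for training, assuming classic mixed precision:
# ~16 bytes per parameter = fp16 weights (2) + fp16 gradients (2)
# + fp32 master weights (4) + fp32 momentum (4) + fp32 variance (4).
# With full sharding, each of N GPUs holds only a 1/N slice of that state.

def per_gpu_gigabytes(n_params_billion, n_gpus, bytes_per_param=16):
    total_gb = n_params_billion * bytes_per_param  # billions of params * bytes = GB
    return total_gb / n_gpus

# Replicated (classic data parallel): every GPU holds everything
print(per_gpu_gigabytes(70, 1))   # 1120.0 GB - far beyond any single GPU
# Fully sharded across 64 GPUs
print(per_gpu_gigabytes(70, 64))  # 17.5 GB - fits on one modern accelerator
```

That is the core trade-off: sharding swaps memory for communication (layers are gathered just in time during the forward and backward pass), which is why the communication optimizations in tools like YaFSDP translate directly into fewer GPUs and less power.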
I really hope that these companies will figure out ways to dramatically reduce the amount of compute that is required to drive the growth of these models, because they're not going to stop. They are going to invest as much money as needed, and as much power as they can get their hands on, in order to grow the models bigger and faster. And the only way to prevent them from dramatically polluting the atmosphere and causing more issues of global warming is to find better ways that are significantly more efficient than the way they do this right now. So this is a great step in the right direction, and I'm glad to end this episode on a positive note.

As I mentioned, don't forget to look at the AI Business Transformation course. There's a link in the show notes. And if you like this podcast, please rate us on your favorite podcasting platform and share it with other people who may benefit from it. That's it for this week. On Tuesday this week, as I mentioned, we are sharing an amazing recording of episode 100, with 20 different experts who each had five minutes to share the best tip they can on how to leverage AI in business. So be on the lookout for that. We'll probably release it in two shorter episodes, because it's a two-hour thing, so the first one will be released on Tuesday, and we'll probably do a second release either instead of the news next week, or on Thursday of next week. So you're going to have a lot of AI stuff to listen to from the Leveraging AI podcast in the coming week. And until then, have an amazing weekend.