Leveraging AI

106 | OpenAI defines 5 stages on the road to AGI, Writer releases a 10M-word RAG platform 🤯, Amazon releases App Studio, and many more important AI updates from the week ending on July 12

July 13, 2024 | Isar Meitis | Season 1, Episode 106

Special Announcement: Our sought-after AI Business Transformation Course is now available as a self-paced online course. More details below.

Is OpenAI's New Framework a Glimpse into the Future of AI?

Are we prepared for AI systems that can outperform PhD holders and entire organizations?

In this episode of Leveraging AI, we explore the latest developments in AI, from OpenAI's bold new framework to Microsoft's strategic retreat from OpenAI's board. We'll also dive into intriguing collaborations and advancements, such as OpenAI's partnership with Los Alamos National Laboratory, and the potential impacts of AI on global energy consumption.

Special Announcement: Transform Your Business with AI

Exciting news! Since April of last year, Multiplai has been offering the AI Business Transformation course, helping hundreds of companies revolutionize their operations with AI. While the instructor-led course requires a time commitment, we are thrilled to announce a new offline, self-paced version of the course!

What’s Included in the Self-Paced Course:

  • Eight hours of comprehensive video content.
  • Step-by-step guidance on implementing AI across various business aspects.
  • Hands-on experimentation and exercises.
  • Tools, use cases, and a complete checklist for successful AI integration.

Ready to transform your business at your own pace? Access the course through the link in the show notes and begin your AI journey today!

https://multiplai.ai/self-paced-online-course/ 

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript


Hello and welcome to a weekend news edition of Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. There was no big, huge, gigantic news this week, no huge releases (well, maybe one that's not huge but is interesting and unique), but there was a lot of really interesting smaller news.

We'll start with an interesting announcement from OpenAI. OpenAI just shared a new framework through which they are going to look at the capabilities of new models on the way to AGI. They've categorized this into five levels, and they're claiming current systems are somewhere between level one and level two, where level two is defined as systems that can solve basic problems at the level of a person with a PhD. So they're saying we're getting close to that, but we're not there yet. Level three refers to agents capable of taking action on a user's behalf. Level four involves AI with the capability to create new innovations on its own. And level five, which is their new definition of AGI, is AI that can perform the work of entire organizations of people. Previously, they defined AGI as a highly autonomous system surpassing humans at most economically valuable work. So it's the same, only expanded to performing the tasks of an entire organization, meaning the interoperability between tasks and not just the tasks of one single individual.

So let's look a little deeper into what that means. It means that the new lens, if you want, through which OpenAI is going to evaluate systems is not computing power, or number of parameters, or how many GPUs were used, or how much money was spent to train it, but rather the system's ability to mimic, or if you want to be more extreme, replace human activity. At the beginning, it's more about augmentation and collaboration with humans, or being able to work with humans in a system at different stages. As you move up the levels, you get to the point where it can take actions on behalf of people in level three, then create new things in level four, and then replace entire organizations in level five. So that's the path OpenAI sees ahead of us, and that's where they're heading, which, like many things with AI, I find exciting and terrifying at the same time. And based on the fact that we're expecting GPT-5 to come out sometime, probably this year (we don't know exactly when), we should probably expect it to be around level two, if they're saying we're getting close. That means being able to solve problems at the level of a person with a PhD across every PhD discipline, which, technically, no person has right now.

Another interesting piece of news about OpenAI actually comes from their partnership with Microsoft. As you probably know, following the whole issue with Sam Altman getting fired, Microsoft got an observer seat on OpenAI's board. That lasted from the end of last year, when the whole thing happened, until this past week, when Microsoft basically forfeited its observer seat on the board. Now, they never had voting rights; it was always an observer seat, but now they don't have it at all. There are two reasons why they gave up their board seat.
One of them, which is what Microsoft said, is that they've witnessed significant progress from the newly formed board; much of the board has changed in order to hopefully prevent this kind of situation in the future. There's obviously good and bad in that, because the whole event started with several board members saying there were significant safety issues, which is why they wanted to remove Sam Altman as CEO. But the other reason they gave up the seat is that the European Commission said Microsoft could face an antitrust investigation, and maybe litigation following that. That has not been concluded yet, but it's very obvious that the EU is not looking favorably at a really large company controlling one of the most advanced capabilities in the world, in addition to everything else that Microsoft controls. These two reasons combined, probably the latter more than the former, are causing Microsoft to give up its seat on OpenAI's board.

Another interesting and positive development from the OpenAI side, unless you're a conspiracy believer, is that OpenAI has entered into a partnership with Los Alamos National Laboratory, one of the biggest laboratories in the U.S. They were involved in developing the original nuclear bomb. I'm not saying that in a negative way; I'm just saying it to tell you that they've been around doing various types of research for decades. The new multi-year collaboration is going to provide development capabilities from OpenAI to the research lab to help accelerate bioscience research, specifically to try to find new drugs and promote positive biological and scientific research. The other thing it may help do is research the potential negative impacts of using these tools out in the public. So I really hope this collaboration will (a) lead to benefits to mankind as far as health solutions, and (b) potentially put the right guardrails on what other people can do with OpenAI's tools, because this research lab may find loopholes first and help OpenAI plug those holes before anybody else exploits them.

Still on OpenAI: this past week, Olivier Godement, their head of API product, was interviewed by VentureBeat, and he shared that he believes the training costs of AI models are going to decrease dramatically in the next few years. Despite the growth in demand and adoption, he anticipates costs will decline because of efficiencies in compute hardware (better chips and technologies we may or may not have today), software optimization, and simple economies of scale across everything they're doing. Now, the big question is whether the improvements in technology and efficiency will be more impactful or less impactful than the growing need for the technology and the scale of the models being developed. There are contradicting facts on both sides, and we're going to touch on a few of these in the next few pieces of news.

I want to pause the news for just one second to share something exciting from my company, Multiplai. We have been teaching the AI Business Transformation course to companies since April of last year. We've been teaching two courses every single month, most of them private courses, and we've had hundreds of companies take the course and completely transform their businesses with AI based on what they learned.
But this course is instructor-led by me, and it requires you to spend two hours a week, four weeks in a row, at the same time of day, working with me and my team, which may or may not fit your schedule and other commitments. So I'm really excited to share that we now have an offline, self-paced version of the same course. You can log into our platform; there's a link in the show notes, so you don't have to look for it or try to remember what I say. Literally just open the podcast app you're listening on right now, and the link in the show notes will take you to the course. The course is eight hours of video of me explaining multiple aspects of how to implement AI in your business, going from an initial introduction to AI, if you know nothing, all the way to hands-on experimentation and exercises across multiple aspects of the business, from data analysis to decision making, to business strategy, to content creation. Literally every aspect you can use AI for in your business today, including explanations, tools, and use cases for you to test, as well as a full checklist at the end on how to implement AI successfully in your business. So if this is interesting to you and you don't have the time to spend with me in an instructor-led environment, you can now do this on your own and drive significant efficiencies in your business using the step-by-step course we now have available. And now, back to the news.

The next piece of news, which will shortly tie back to the scaling and efficiency of the global need for AI, comes from xAI: they're planning to release Grok 2, their next AI model, probably in August of 2024. The rumor from xAI itself is that it's going to surpass the capabilities of OpenAI's GPT-4o and Anthropic's Claude Sonnet 3.5, the two leading models in the world today. They're also claiming it will have improved coherence and factual accuracy beyond the existing models. My thoughts about this: first, most announcements by Elon Musk have exaggerated what he actually delivers. That has been true of roughly every big announcement he has ever made. That being said, they have been investing a lot of resources in developing these capabilities, which means that even if they don't deliver a model better than the leading models right now, I am certain it will be a huge step ahead of what they have today. And I'm even more certain that further out, they will be a very significant player in this field.

Why do I think that? That ties back to the previous piece of news. Charles Liang, the CEO of Supermicro, the company building the AI gigafactory for Elon Musk's companies, is claiming they're now working on a huge facility that will use on the order of 350,000 liquid-cooled NVIDIA GPUs, to achieve probably one of the highest performances of any AI supercomputer in the world today. This infrastructure, which they say may come online within the next year, meaning this year or probably early next year, is going to become available to Tesla and xAI to develop their next capabilities.
So as I mentioned, we should expect Grok 2 to be significantly better than Grok 1. Maybe as good as the leading models today, maybe not; we'll know as soon as it comes out and we can start testing it. In the long run, I am certain they're going to be a significant player: (a) because that's the way Elon plays, he doesn't do anything small, and (b) because they're investing huge resources to build the most capable computing power for that process.

Now, to continue on the topic of the resource needs of AI systems and how significant they're going to be: in recent research, some people estimate that the tech sector will require 20 percent of global electricity by 2030. That's within five or six years, which is a very short amount of time. A big chunk of this growth in demand obviously comes from AI needs for both training and inference (inference is generating the output). So while there are going to be improved efficiencies, a lot of people anticipate the needs will outgrow those efficiencies.

Let's talk about a few efficiencies that became available or known this week. The first one comes from Microsoft. Microsoft revealed a new inference technology called MInference, a play on "inference," designed to run inference significantly faster than standard pipelines on current GPUs. The stated goal is to bring it to edge devices like smartphones, cameras, and IoT sensors: basically, on-device AI capabilities that are a lot more advanced and capable than what is done today. That will obviously reduce the need to send all that data to the cloud and into data centers, which will dramatically improve performance as far as speed and reduce the amount of bandwidth required, which in many cases either doesn't exist or is far from optimal because of the latency it creates. Microsoft is claiming this new approach will slash inference time by 90 percent, so it will take only 10 percent of the time compared to current GPU-based inference, and presumably a similar fraction of the power, which is a huge improvement compared to what we have right now. There's another company that has developed something similar; we've spoken about them several times in the past. That's Groq with a Q. They are a hardware company that developed a chip that is extremely capable and a lot more efficient than GPUs when it comes to inference. So that's another big contributor to the efficiencies we talked about before.

On another topic relevant to efficiency, Google DeepMind has developed a new training method they call JEST, which stands for Joint Example Selection Training. The goal of this technique is to balance model performance against energy consumption, which should drive a much more efficient model-training process. In their research, they show that JEST matches the performance of models trained the traditional way with 13 times fewer training iterations and 10 times less energy consumption. So this is on the training side; the previous news was on the inference side. On both sides, there are efficiencies to be gained.

And another piece of news related to that: Meta just introduced an efficient language model that will run on-device. They call it MobileLLM, and it's just coming out of research from the Meta AI lab run by Yann LeCun.
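Circling back to JEST for a moment: the core idea is to score candidate training examples by "learnability": roughly, how much harder an example is for the model being trained than for a strong pretrained reference model, and to spend compute only on the most informative ones. Here is a toy sketch of that selection step, heavily simplified (the actual DeepMind method scores whole sub-batches jointly rather than ranking examples independently), with stand-in loss values instead of real model outputs:

```python
import numpy as np

def select_batch(learner_losses, reference_losses, batch_size):
    """Pick the examples with the highest 'learnability' score:
    hard for the current learner, easy for the pretrained reference."""
    learnability = learner_losses - reference_losses
    return np.argsort(learnability)[::-1][:batch_size]

# Stand-in per-example losses for a pool of 8 candidate examples.
learner = np.array([2.1, 0.3, 1.8, 0.9, 2.5, 0.4, 1.2, 3.0])
reference = np.array([0.5, 0.2, 1.7, 0.8, 0.6, 0.3, 1.1, 2.9])

chosen = select_batch(learner, reference, batch_size=3)
print("train on examples:", chosen)  # indices 4 and 0 have the biggest gaps
```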
Now this model, similar to what we talked before, can run locally on smartphones and tablets and other local computers, which will enable much faster and more private AI capabilities to run tasks without the need for cloud connectivity or the huge behemoth data centers to be able to engage with them. And as everything with meta, they're planning to release these new models as an open source for people to use in the research to implement their new capabilities. So to sum it up, there are indeed a lot of developments, both on the hardware and software side, both on the training and inference side that are going to dramatically Improve the efficiency of this process. The question is, will that contradict the hunger? These big players have to train bigger and bigger models, and the jury is still out on that. My personal belief, and I will connect it to actual facts in a minute, is that at least in the near future, the need for growth and the fierce competition between the big players is going to be more significant than the efficiency gains that are driven by research right now for two different reasons. One, as I said, the competition, and the other is the fact that this is still research and. Many of these things will take months, if not years, to deploy compared to the technology we have right now that is already available and is being deployed by these companies. Since we started talking about Meta and their LLM, they have introduced a whole new set of AI features to WhatsApp, their chatbot. The goal is to have AI more embedded into the WhatsApp environment, providing useful capabilities for users. This enables to have a more natural conversation with WhatsApp to be able to find specific messages to automatically write specific messages to convert more voice messages to text and even to translate text from one language to the other. So you can communicate with people from other countries who speak different languages. And so a lot of new useful practical AI features that are coming to WhatsApp in order to enable us to have easier and more transparent communication across people using AI. Now we'll switch the discussion to a different company, but we'll start with the same topic of where AI is it going to drive more demand for power and resources in the near future or the other way around? So Dario Amadei, the CEO of Anthropic in his recent interview shared something that he shared several times before that he believes that the training of the next model in 2025 is going to cost around 10 billion. Obviously most of that money comes to computational needs that is directly correlated to the amount of. Our that it will require, Dario, if you haven't heard him on previous interviews and you have released a lot of interviews recently, all of them are really interesting. He's very clear that he completely believes in the scalability of these models. And the bigger and more data you're going to give these models, the better the outcome is going to be. He is claiming like many others in this industry, that they don't see an end right now for the scalability of the training, which means they're going to try and train bigger and bigger models, which will require more and more computing power and resources. Now, if you're asking, how do they get a lot more data than what they're getting today? some of that data, more and more of that data is synthetic data, meaning it's data that is generated by the AI models themselves in order to train the future model. 
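As a rough illustration of that synthetic-data loop, here is a toy sketch. This is not any lab's actual pipeline: the `generate` stub stands in for a real LLM call, and the quality filter stands in for real model-based scoring and deduplication.

```python
import random

def generate(seed_example):
    # Stand-in for an LLM call that paraphrases or extends a seed example.
    return f"{seed_example} (synthetic variant {random.randint(0, 999)})"

def passes_quality_filter(example):
    # Real pipelines filter with model-based scoring; this is a toy heuristic.
    return len(example.split()) > 3

seed_corpus = ["The quarterly report shows revenue grew 12 percent."]
synthetic = [generate(seed) for seed in seed_corpus for _ in range(3)]
next_corpus = seed_corpus + [ex for ex in synthetic if passes_quality_filter(ex)]
print(f"{len(next_corpus)} examples available for the next training run")
```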
They have been able to show, both in research and in practice, that this actually does improve the capabilities of the model. But it also means you need more AI capability in order to generate the data that will train the future models that will cost $10 billion to train. We've heard plenty of other people throwing around numbers like a hundred billion dollars to train models. So this race is definitely on, and that's why, as I mentioned before, I definitely think that at least in the near future we will see a huge increase in demand for resources, whether it's power or computing power, to fuel this fierce competition between the big players.

It also means something else: there are going to be very few players that can play the game at its most advanced level, which is also not a good thing. We're talking about a handful of companies, and probably governments, that will be able to keep running at the edge of this technology while everybody else falls behind. Now, when I say fall behind, there are still going to be a lot of companies developing smaller models that are very well tailored for specific needs, and they will have their niche and their place in the world. But as far as competing at the top level, with the top models, pursuing AGI and then ASI, that is going to be held very tightly by a very short list of companies. To be fair, Dario Amodei calls for increased collaboration across governments and countries to make sure that doesn't happen, and so that everybody in the world benefits from advanced AI capabilities, at least as much as possible. But the reality, at least as it seems right now, is that this is not where the wind is blowing.

Now, let's continue with Anthropic. They have been on fire with their product releases in the past few weeks, and in a demo they released this week in a video on X, they're showing a huge improvement in what they're doing at the enterprise level. Anthropic has been very clear all along that they're focusing on the enterprise side of their product, and in this recent release and demo they've shown some amazing capabilities. Most of you hopefully know Anthropic's Claude chatbot. It's definitely my favorite, and it became an even bigger favorite with the release of Sonnet 3.5, which has amazing capabilities, including Projects, which lets you develop mini automations like OpenAI's GPTs, and Artifacts, which is a side panel that shows you what the AI is doing and can write and execute code right there within the model. Really amazing releases. But now they've released a new version of the Anthropic Console, which is built for developers but really anybody can use. It actually works differently than the chatbot; it's not running on the chatbot platform but against their API, meaning you pay per token, but you can do really cool things with it. One of the things it can do is help you write highly detailed prompts. If you don't know how to prompt very well, you can go into the Anthropic Console and tell it what task you're trying to complete, literally just describe it in natural language, and it will write a highly detailed, very long prompt that you can use either right there in the console or copy and paste into technically any other large language model, including Anthropic's Claude.
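To make this concrete, here is a minimal sketch of running a console-generated prompt template with test inputs against Anthropic's Messages API (pay per token, as mentioned above). The template text and the fake complaints are illustrative stand-ins, and this assumes the Anthropic Python SDK with an ANTHROPIC_API_KEY set in your environment:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-in for a long, detailed prompt the console generated for you.
PROMPT_TEMPLATE = (
    "You are a support-triage assistant. Classify the complaint below into "
    "exactly one of: billing, shipping, product defect, other.\n\n"
    "Complaint: {{COMPLAINT}}"
)

# Stand-ins for the console-generated test cases described below.
test_complaints = [
    "I was charged twice for my last order.",
    "The package arrived with a cracked screen.",
]

for complaint in test_complaints:
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=50,
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.replace("{{COMPLAINT}}", complaint),
        }],
    )
    print(complaint, "->", response.content[0].text)
```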
But the other capability they've shown is that you can build these prompts with test variables instead of actual parameters. The demo shows an example of categorizing customer service requests: the customer complaint is a parameter, and within the console itself you can generate fake inputs. You can ask the model to generate customer service concerns, then run the really long prompt it created for you and see the responses it produces. You can then score those responses and use the feedback to create an even better prompt. So the Anthropic Console allows you to complete a full work loop: creating the environment, creating the test variables, creating the test use cases, and then testing the model, all of it. It's an incredibly powerful capability for any development environment, whether you're developing code or developing these kinds of processes. As I mentioned earlier, these are really amazing and capable releases by Anthropic in the past couple of weeks.

Another capability Anthropic rolled out just after releasing Sonnet 3.5 is the ability to share your conversations from the chatbot, but it works differently than sharing a conversation from ChatGPT. The difference is that you can continue from the point where the conversation left off. If you share a ChatGPT conversation with somebody, they can see the outcome, but that's all they can do. This new feature in Claude allows people to, quote unquote, "remix" (that's what they call it), meaning if I share one of my prompts with you, you can see the outcome, but you can also click remix, make changes, and continue developing the prompt or process I was working on from that point. The goal is obviously to drive collaboration and community capabilities within the Claude platform, which I personally really like. Think about this in a business or enterprise context, where you can collaborate with other people on developing better and better prompts for different processes within the company, where several different people can follow up on the work of a specific individual and continue developing the model's capabilities on a specific task. From my perspective, another very exciting feature from Anthropic.

Now, since we've covered most of the big players and the capabilities they're releasing on the large language model side, let's switch to something interesting. A company called Solos has announced another set of smart glasses. They call them the AirGo Vision, and they run AI capabilities from Google's Gemini and OpenAI's GPT-4o. These glasses have a camera, and they also obviously have a microphone and speakers, so you can have a conversation with these chatbots while the chatbot has access to see the world around you and hear what you're saying, and probably what other people are saying as well.
We also have seen similar things coming out from Ray Ban in their partnership with Meta. My belief is that these will become cheaper and more and more capable in the future. In the next few months, and we're going to start seeing more and more people using them to see the world. If you have seen the demos of GPP 4. 0 and the new Gemini from Google, when they've done their demos, you saw that these tools will be extremely powerful powerful in their ability to analyze the world on some aspect. This is amazing. So if you want to do any work at home as far as small renovations, and you're not really sure how to do that, you can put the glasses on and tell it what you're trying to do. And it will walk you step by step on how to do that. The same thing with assembling different things, the same thing with troubleshooting, things in the house or at work. If you are in construction, you can give these to any people who are working for you. To walk them step by step and reduce the amount of mistakes they're doing when they're building stuff in construction or in manufacturing and so on. So there are huge, amazing benefits in using these kinds of tools, especially as I mentioned before, when the on device capabilities will be better, which means less latency, Faster results and potentially more specific to the need that you have. The problem with all of that is that we're going to have more or less everybody around us record everything that's happening and being analyzed with AI, which has a lot of privacy questions, because we're not supposed to both from an ethical perspective, as well from a legal perspective, record or spy around other people without their consent And this equipment obviously breaks every one of those laws, whether they're ethical or actual laws. And that's a very serious problem. That being said, I don't see how that is not evolving very quickly. And as I mentioned, I anticipate this to be available and used by many people, eventually everybody within a few years. Now I've shared with you in the last few weeks and in several different episodes, the competition between us and Chinese companies On leadership in the AI world and a new model called Sense Nova 5. 5 were just released by a company called Sense Time. It's a Chinese companies that are claiming that their model rivals GPT 4. 0 on, and actually does better than it on several key metrics of performance. Now, in addition to the fact they're claiming, again, I haven't seen any research that's done a proper comparison, but let's call it similar level of AI capabilities to GPT 4. 0, it also does it with significantly less computational resources, which makes it a very big benefit as well as a benefit to the world when it comes to global warming and the power consumption that these models need. Now the model will be released as a chatbot as well as through APIs, so anybody can build on top of those capabilities. So it's becoming very clear that China is not staying behind when it comes to using and developing AI capabilities. When it comes to using AI capabilities, a recent research by the U. S. A. I. Analytics and software company sass in combination with common perks research company have done a study where they look at spoke with over 1600 decision makers across multiple industries across the entire world. And what they have found is that 83 percent of Chinese responders said that they're currently using generative AI in their work. That was higher than the 16 other countries that were researched, where the U. S. 
The US was number two with only 65 percent of respondents saying that, and the global average was about 54 percent, so about half. Now, when it comes to having an actual implementation process for generative AI inside organizations, the picture is a little different. In North America, 20 percent of companies said they are implementing generative AI from a company infrastructure and process perspective, followed by APAC with only 10 percent of companies, LatAm with 8 percent, and Northern and Southwest Europe at 7 percent each. So what you can see here is that while individual use of AI by people is relatively high, the corporate-level, actual company implementation of it is still very low and in early stages. I even think the 20 percent reported for the US is probably overstating the actual situation; based on all the companies I get to talk to when I speak on stages and when I consult to businesses, that seems a little high to me, and numbers around 10 percent or lower, like the rest of the world reports, feel closer.

Another question they asked is how many companies have implemented GenAI use policies. These numbers sound completely unreal to me: they're saying APAC is at 71 percent, North America at 63 percent, and the rest of the countries lower than that. I get to speak to a lot of CEOs and business leaders, and from my conversations this number does not go above 20 percent and probably doesn't even reach double digits. So I'm really surprised by the results of this research. I'm not sure exactly what they asked or how they analyzed the results; maybe those companies are merely in the process of putting policies in place. But based on all the people I talk to, the number of companies that actually have this in place is very low, which is really sad and scary next to the other finding, that about 65 percent of employees are already using it at work right now if you're in the US, based on this survey. The last piece of information from that research, which is not surprising, is how many of these companies are planning to invest significant funds in generative AI in the next fiscal year. The numbers for most regions are above 90 percent. In this particular case, I'm really surprised by the other 5 or 6 percent that are not planning to do that next year, and probably this year as well.

Now, since we started talking about China, let's continue on the China topic. China just did something very interesting: they're the first country in the world to publish official guidelines for the development and usage of humanoid robots. The goal is to promote responsible and ethical advancement in humanoid robot technologies, which is highly needed. The document they released outlines key principles and requirements for the design, production, testing, deployment, and usage of humanoid robots in various sectors, including manufacturing, services, entertainment, and so on. The guidelines first and foremost emphasize the importance of safety, reliability, and controllability of humanoid robots: basically, our ability to know what they're going to do and to make sure that people around them stay safe. This is really important, and maybe with humanoid robots we'll be able to reverse the pattern we're seeing with AI, where, on generative AI capabilities, the technology is running way faster than the regulation, which is a very big problem.
Humanoid robots are coming, and they're coming very fast. We'll probably start seeing them in factories in 2025, and beyond that, probably everywhere. I shared with you last week that they're already performing several different operations in BMW facilities, using the Figure 01 robot, and in Tesla facilities, using their internally developed Optimus robot. So this is happening, and it's happening fast, and having a framework that will keep us safe around these robots is extremely important.

As a teenager, I was a huge fan of Isaac Asimov's books. If you don't know Isaac Asimov, he wrote several different series of books; one of them, called the Robot series, is about humans collaborating with robots in the future. These robots have three basic laws, a prime directive if you will, that they can never break. The rules are, and I'm quoting Asimov now: a robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. And the third: a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. I always liked these rules. I never thought I would actually see them needed in real life, but I think we have to come up with some system that will keep us safe from both AI capabilities and robots. We may not need the exact rules Asimov set out several decades ago, but coming up with something like this, agreed to by everybody in the world who develops these systems, is critical to reducing the negative potential impact of having them deployed, and I don't think anybody questions the fact that they will be deployed in the next few years.

And a few additional interesting pieces of news for this week. One: Hugging Face, the biggest platform for sharing open-source AI models, has achieved profitability. It's a relatively young company that focuses on free, open-source offerings, and the fact that they've achieved profitability this fast is really promising for the open-source world. So kudos to Hugging Face, (a) for getting to that really important milestone, and (b) for doing it while focusing on open source and free access. They make their money through enterprise solutions that provide additional capabilities beyond the free offerings all of us have access to.

Another really big piece of news, as far as I'm concerned, comes from a company we don't talk about a lot: Writer. There are actually two companies with that name, one spelled properly, W-R-I-T-E-R, and the other spelled R-Y-T-R; both provide an AI writing assistant platform. Writer, the one spelled properly, just dropped a very major update to its platform, and the update includes the capability to run your own RAG architecture within the Writer platform. RAG stands for Retrieval-Augmented Generation, which basically means you can upload your own documentation and data into it, ask questions, get answers, and get written content based on the data you uploaded. Now, the astonishing part of this release is that Writer's new platform can analyze up to 10 million words of uploaded data.
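For those who want to see what the RAG pattern actually looks like, here is a minimal, self-contained sketch. It is illustrative only, not Writer's implementation: a toy hashing vectorizer stands in for a real embedding model, and the final augmented prompt would be sent to an LLM rather than printed.

```python
import numpy as np
from collections import Counter

def embed(text, dim=512):
    """Toy bag-of-words hashing vectorizer; real RAG uses a learned embedding model."""
    vec = np.zeros(dim)
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query, chunks, top_k=2):
    """Return the document chunks most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = [float(embed(chunk) @ q) for chunk in chunks]
    ranked = sorted(range(len(chunks)), key=lambda i: -scores[i])
    return [chunks[i] for i in ranked[:top_k]]

company_docs = [
    "Refunds are processed within 14 days of a return request.",
    "Our offices are closed on national holidays.",
    "Enterprise plans include priority support and a dedicated manager.",
]
question = "How long do refunds take?"
context = "\n".join(retrieve(question, company_docs))
prompt = f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"
print(prompt)  # in a real system, this augmented prompt is sent to the LLM
```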
Now, to put things in perspective: the most capable platform right now, as far as context window and the ability to upload a huge amount of information while still getting accurate answers, is Google's Gemini 1.5 Pro, which lets you upload 1 million tokens, which is about 750,000 words. Google has also released, to some companies, a beta of 2 million tokens, which is about 1.5 million words. And Writer is claiming 10 million words, which is about 13 to 14 million tokens (a useful rule of thumb: one token is roughly 0.75 English words), or roughly ten times what Google has generally available to us right now. Now, to be fair, Google has claimed their research labs are already running a 10-million-token platform, but they haven't released it to the public, and Writer has. How well does this actually perform? I'm sure we'll start seeing people test it and compare it to other platforms such as Gemini. But the very fact that the technology is getting there, no matter who's releasing it or how accurate it is right now, tells us that very quickly the limitations we currently have on the amount of data we can upload to these platforms while still getting accurate, relevant answers fast are going away. Ten million words lets you take most of the data in your company, definitely the technical or process data, upload it, and get answers from it, or generate new documents based on it, or marketing materials, or user guides, or whatever you want based on that data, in seconds instead of weeks and months.

The other interesting part of this release is that they're really focusing on transparency. In addition to the outcome, their new platform shows users the steps, the information, and the reasoning the AI is using to generate its content. That level of transparency is very important in building trust among users and an understanding of what's actually happening in the background. It will also help reduce hallucinations over time, because people will be able to see what the AI is actually doing while it's doing it and potentially stop it before mistakes happen, and it will obviously help Writer themselves fix the models when they do the wrong things. I really like Writer. I don't use it a lot, but every time I do, I really like the process it builds around creating content: it takes you step by step and gives you full control over every step of the process. This new addition is a huge game changer, not just for Writer as a writing platform, but for RAG technology in general, making it accessible to each and every one of us for analyzing and having a conversation with company data.

The two last pieces of news come from Amazon. One is that Amazon formally released Rufus, its AI chatbot built for the Amazon platform, to all users. It was in testing with a limited number of people so far, and now it's going to be available and connected to everything Amazon: you'll be able to use it on the website, in the mobile app, on Alexa, and so on, asking questions and getting product recommendations and comparisons just by having a normal voice conversation. Over time you'll probably be able to do a lot more than that, such as reviewing your deliveries, canceling orders, customer service, and so on, all within the Amazon environment, using generative AI capabilities.
The other piece of news from Amazon is actually more interesting and more exciting. As part of Amazon Web Services, AWS, which together with Google Cloud and Microsoft Azure is one of the three largest cloud hosting platforms in the world today, Amazon just launched what they call App Studio. It's a new service that enables developers, which now basically means you, me, and anyone, to create applications using natural-language descriptions. Similar to other coding platforms, you can describe in simple English what you're trying to develop, what it needs to connect to, what the outcome needs to be, and so on, and it will write the code for you. But more importantly, in the particular case of App Studio, it's already connected to everything else in the AWS universe, which means it doesn't just create the code; it can also deploy, connect, and do everything needed to have a fully functioning application wired into everything it needs to run within your enterprise. That means you can go into App Studio, explain in simple English what you want, and get in return complete source code, documentation, and deployment configuration that will let you use the application App Studio developed on its own. I've shared with you several times in the past that I definitely see this as the future, where more and more software is created on the fly as we need it, instead of huge pieces of software that do a gazillion things we don't actually need. So I definitely anticipate, in the not-too-distant future, a section of the app store or other platforms where you download software that is not existing software at all, but a place where you go in, describe your needs, and it generates a custom application for you on the fly, one that does just the things you need, connects just to the data sources you need, and is very efficient for you, versus a generic platform somebody developed to do a million other tasks you don't care about.

That's it for this week. This was a relatively long episode, but as I mentioned, a lot of really interesting small things happened, some bigger things happened, and we're going to continue updating you on all of those. We'll be back on Tuesday with another fascinating interview with an expert, diving into how to use AI in your business with a step-by-step process. And until then, have an awesome weekend.