Leveraging AI

87 | Magical announcement from OpenAI, Microsoft developing their own leading LLM, X.ai raises $6B, and more news from the week ending on May 11

May 11, 2024 | Isar Meitis | Season 1, Episode 87

In this episode of Leveraging AI, Isar Meitis takes you beyond the headlines to uncover the transformative power and potential perils of AI in the business landscape. From Sam Altman's cryptic tweets to AI's unforeseen impacts on the economy, we dive deep into what executives need to know.

Learn how to navigate these changes strategically and ethically as we explore:
- AI's Big Reveal: What's Next After GPT-4? Discover the implications of OpenAI's secretive upcoming announcements.
- The Socioeconomic Shifts: Preparing for the AI-driven job landscape with insights from recent studies predicting significant job displacements.
- Legal and Ethical Conundrums: Unpacking the latest controversies and legal battles surrounding AI and data usage.
- And more

Don't miss this crucial conversation. Tune in, gain invaluable insights, and ensure your leadership strategy is future-proofed against the tides of AI innovation.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript


Hello and welcome to a weekend news edition of Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. We opened last week with a quote from Sam Altman, so why not do it again? This one comes from his post on X about the event they are planning on Monday: "not GPT-5, not a search engine, but we've been hard at work on some new stuff we think people will love. Feels like magic to me. Monday, 10 AM Pacific time." Let's dive into what that means and into the rest of the news.

Last week I shared with you that there were a lot of rumors about a search engine coming from OpenAI, and that they were planning to announce it just before Google's big event. That's what everybody thought they were going to share, and yet Sam clearly says that's not what they're announcing this coming Monday. There are no additional rumors, at least none that I know of, about what they're releasing if it's not GPT-5 and not a search engine, so I guess we'll all have to wait until Monday morning to learn what this is all about. Very, very exciting stuff. When Sam Altman says something feels like magic, it is going to be dramatic.

Staying on topics related to OpenAI and their new releases: we've heard Sam Altman say several times that GPT-4 will look like a joke once new models come out, and this week Brad Lightcap said similar things. He basically said that current versions of ChatGPT are going to look rudimentary within the next 12 months. He's saying that new models are going to handle significantly more complex tasks and establish deeper relationships with users, acting as valuable teammates and problem-solving partners to anyone who uses them.

You've heard me say several times before that two things scare me the most in the short term when it comes to AI deployment. One is the fact that the truth is dead: there is zero ability for us to know what is real and what is AI-made in any digital communication, whether it's a private chat with somebody, a video chat, or any other digital media. That's my number one fear. My number two fear is the economy, because I think AI is going to have a profound impact on it, and I think a lot of jobs are going to be lost well before new jobs get created. In a panel discussion this week at the Brookings Institution, Sam Altman shared the same kind of thoughts. He expressed his worry that the speed and magnitude of the socioeconomic changes driven by AI are not something we are ready for, and in his mind we're not planning for them fast enough.

In recent research, McKinsey anticipates that AI will affect 60 percent of jobs in advanced economies. Half of those will be completely automated, and the other half will be assisted by AI. Based on this research, that will lead to reduced hiring, lower wages, and the displacement of nearly 12 million US workers by 2030. Now, not taking anything away from McKinsey, I believe they're undershooting the estimate by a lot. The reason is that I think they're basing their estimate on current AI capabilities, while AI agents are expected most likely this year. How good they're going to be, I don't know, and I don't know how much we're going to trust them with data and access to different systems to make them impactful.
That's going to be the decision of each specific organization, but these agents are coming, and agents will be able to do very complex tasks in a very efficient way that will dramatically reduce the need for the employees who do them today. That's not even taking into account the fact that AGI is expected, according to most of the people involved, within the next three to five years, meaning before 2030. So if I put these two things together, agents becoming available this year, probably getting really good and being adopted next year and into 2026, and then AGI shortly after, we have a perfect storm that will take significantly more jobs than the 12 million McKinsey is anticipating.

I don't think enough is being done. If you look at what's happening in Congress right now, as an example, they have about 80 bills that have to do with AI, and they've passed zero of those 80 bills. So I don't see the answer coming from government. The only way something positive can happen is through some kind of collaboration between the leading groups who are developing and deploying these models, and I don't see that happening anytime soon either. Again, my personal concern, now backed up by Sam Altman, so I guess I'm at least somewhat right, is that this is going to have a very significant negative impact on the economy, because it's going to take the jobs of people who are making $150,000 to half a million dollars a year, which are the people who drive the US and global economies.

Staying a little bit on negative news from OpenAI: we talked last week about a new lawsuit against OpenAI, and there's another one this week, this time from the Authors Guild. It was revealed that OpenAI had two data sets, called Books1 and Books2, that included about 100,000 published books, and the Authors Guild is claiming that OpenAI trained on this copyrighted data without receiving any authorization to do so and without compensating the people who wrote these books. The people who worked to put these two data sets together are not working for OpenAI anymore, and OpenAI was trying to keep them secret, but eventually they released that information as part of the lawsuit with the Authors Guild. That includes the admission that they trained on these two data sets, Books1 and Books2, as part of the training for GPT-3, and that they amounted to about 16 percent of GPT-3's entire training content. They're claiming the data was not used for training after that, that the last time they used it was late 2021, and that they deleted it in the middle of 2022. I don't think that takes anything away from the claim, but I also don't think it matters much, other than this being another group they're going to write a big check to in order to make the whole thing go away. Whatever these lawsuits end up with, they're all going to go in the same direction: deciding whether training on data is fair use or not, probably with big checks to compensate somebody, or a lot of somebodies, and then potentially some compensation mechanism for the future.
On the flip side, OpenAI just announced that they're developing what they call Media Manager, a set of tools they plan to launch in 2025 that will allow creators and owners of content to control how their data is integrated into machine learning and the training of large language models. Right now, your only option to, quote unquote, block the training is the robots.txt file of a website (a minimal example appears at the end of this segment), which (a) is not granular enough, and (b) is not always relevant, because your information may not be on a website. This tool will let people manage their content when it comes to training on their data. I don't think this tool applies to anybody else right now; it just applies to OpenAI, so that's problem number one. Problem number two is that it might be too little and too late, because they've already trained on a lot of that data, which means it will probably apply only to new creations. It is still a move in the right direction.

In another move to mitigate these legal and ownership issues, and even to provide more capabilities to OpenAI and ChatGPT, just like in previous weeks and months, OpenAI and everybody else in the industry have been doing licensing deals with content sources, and the latest deal announced by OpenAI is a partnership with Stack Overflow. For those of you who don't know, Stack Overflow is the biggest platform in the world for coders to exchange information, ideas, and solutions. The agreement provides OpenAI access to all of Stack Overflow's database and technical knowledge, which will allow people to search for these kinds of solutions and conversations within the ChatGPT platform. In addition, ChatGPT, in order to allow you to verify and dive deeper, will provide citations to the specific posts on Stack Overflow, which means users can verify the results in ChatGPT and dive deeper to see the entire conversation on Stack Overflow. Stack Overflow, on the other side of this, is going to use OpenAI's models to accelerate the development of OverflowAI, an internal set of tools that will let people on the Stack Overflow platform do similar things.

The reality is that Stack Overflow did not have much of a choice. Stack Overflow's traffic has been crushed since the launch of ChatGPT: more and more people go to ChatGPT anyway to get these kinds of answers, and traffic to the Stack Overflow website has been declining consistently since the end of 2022. So if they want to stay alive, they have to add AI capabilities, and maybe staying alive through ChatGPT is one of their ways to deal with the situation. That being said, a lot of contributors to Stack Overflow are furious with the current situation, because they say they contributed this information to help other people in their coding and software development journeys, not to help Stack Overflow make money by selling it to OpenAI. Some of these people went as far as trying to delete their contributions, and Stack Overflow prevented them from doing that: after they deleted their content, it was reinstated without giving them access to it. I anticipate some lawsuits from that direction as well, over who actually owns the content you post on discussion forums, and again, I don't think we've seen the end of that either.
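To make the robots.txt mechanism referenced above concrete, here is a minimal example of the directives a site owner can add to block OpenAI's documented crawler, GPTBot. The paths are placeholders, and this only affects future crawling of web content, not data that lives elsewhere or has already been collected:

```
# Block OpenAI's GPTBot crawler from the entire site
User-agent: GPTBot
Disallow: /

# Alternatively, allow crawling of a public section only
# User-agent: GPTBot
# Allow: /blog/
# Disallow: /
```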
Staying within the realm of OpenAI news and rumors: I told you a couple of weeks ago that a mysterious new model called gpt2-chatbot suddenly appeared on the LMSYS Org Chatbot Arena, and that people were saying this bot is as good as, and sometimes better than, GPT-4. It wasn't clear where it came from, and then it suddenly disappeared. This week, two new versions of it suddenly showed up: one is called im-a-good-gpt2-chatbot, and the other is called im-also-a-good-gpt2-chatbot. It's again unclear where they came from, but Sam Altman mentioned "im-a-good-gpt2-chatbot" in a tweet a day before it appeared on the arena, so they are potentially related. Maybe it's connected to the announcement they're going to make on Monday, but nobody really knows. I think we'll know a lot more in the near future.

Now, we all know that OpenAI's biggest partnership is with Microsoft, which has made a huge bet on OpenAI. Several documents released in court show that Microsoft really did not have much of a choice: they felt they were very far behind Google in the race for AI, and even behind Meta, and their way to close the gap fast was to invest a lot of money in OpenAI and make that the backbone of everything they're doing. But this week it came out that Microsoft is developing an in-house model that will compete with GPT-4, called MAI-1, and this model is going to have about 500 billion parameters. Nobody knows how many parameters GPT-4 was trained with, but the estimates are around 500 billion to a trillion, so it's at least in the same ballpark, and it's way bigger than anything Microsoft has developed so far. Microsoft has released a few smaller models as open source recently, but this one is supposed to be a flagship-class model. The person in charge of developing it is Mustafa Suleyman, one of the co-founders of DeepMind and, more recently, the founder and CEO of Inflection, an AI startup that Microsoft acquired without really acquiring it: they just brought over the people and the IP, but not the company. They're claiming that this new model is not built on Inflection's data; they're actually starting from scratch, just with the team led by Mustafa Suleyman.

As for questions about what this means for the relationship with OpenAI, they're saying it means absolutely nothing, that they still have a lot of trust in OpenAI and will continue using their models. In my eyes, they don't have much of a choice, because the agreement with OpenAI, and OpenAI's founding papers, clearly say that once OpenAI reaches AGI, which is a very vague term whose meaning isn't really clear, they will not release that model to any commercial group, including Microsoft. That basically means that at the point where OpenAI's board decides they've reached AGI, they can pull the plug on future deployments at Microsoft. Microsoft obviously made a huge bet on AI and it's paying off very well, and everything they're doing right now is AI focused. There's zero doubt in my mind that the next variation of their entire operating system is going to be AI based, so they have to have a model that they control completely, and that's their move in that direction. What exactly that means for how things will work between the ChatGPT models and the MAI models, and this is again just version one, I'm sure there will be versions two and three and four, wasn't clearly defined, and I'm sure we'll find out over time.
Now, if we're already talking about Microsoft: Microsoft is pushing hard on the usability of Copilot by adding more and more Copilots and Copilot features, and this week they released a tool that helps users improve their prompts in several different ways. One of the features is called Auto Complete; as the name suggests, it allows users to ask Copilot to auto-complete their prompts in order to make them more comprehensive and more accurate. Another is called Rewrite; it allows you to write a complete prompt and then ask Copilot to rewrite it to make it, again, more comprehensive and more effective, which means you'll get better results. And the third is called Catch Up, which basically recommends to users additional things they should add to their prompts or to the chat in order to get better productivity and to prepare for the next steps.

This is a great addition, and again, not surprising. I've said several times before on this podcast that I think the concept of prompt engineering will become less and less important as these tools get to know us, our companies, and our needs, and become able to anticipate our needs and hence write better and better prompts for us. This is something I'm sure we're going to see across all the models. The recent addition of long-term memory across chats in ChatGPT is definitely something that will enable this kind of functionality, and this move by Microsoft goes in the same direction.

But in addition, this is a move toward agents. Why am I saying it's a move toward agents? Because it's a tool that, at this point, recommends additional prompts and fixes your prompts, but it is a step in the direction of writing all these prompts instead of the user, meaning fully autonomous agents that can take actions, analyze things, and do much more complex work. Now, when these agents show up, which, as I mentioned earlier in this episode, is most likely this year and probably with wider adoption next year, the only question is going to be how good they are, and my guess is they're going to be pretty good. So the remaining question is how much you will trust them with (a) your data and (b) access to systems to take actions on your behalf. Those who give them more access and more data will achieve very fast results at a higher risk, and so I'm sure there's going to be a huge discrepancy depending on the size of the company and the specific industry that different companies are in.
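To make the "Rewrite" idea concrete, here is a minimal sketch of what a prompt-rewriting step could look like. This is not Copilot's actual implementation (which Microsoft has not published); it is an illustration of the same concept using the OpenAI Python client, and the model name and instruction wording are assumptions for the sake of the example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rewrite_prompt(rough_prompt: str) -> str:
    """Expand a rough prompt into a more specific, complete one,
    roughly the idea behind Copilot's 'Rewrite' feature."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice for this sketch
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's prompt so it is specific, complete, "
                    "and states the desired output format. "
                    "Return only the rewritten prompt."
                ),
            },
            {"role": "user", "content": rough_prompt},
        ],
    )
    return response.choices[0].message.content


print(rewrite_prompt("summarize our Q2 sales numbers"))
```

The same pattern generalizes to the other two features: auto-completion appends to a partial prompt, and a "catch up" step would also pass recent chat context so the model can suggest what to ask next.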
Going from Microsoft introducing a new model to an existing model that is growing fast and now becoming a lot more available: Elon Musk's Grok, xAI's chatbot that was previously available only to Premium+ users on X, is now available more or less to anybody on X, and it's creating summarized trending topics in a sidebar on the X platform. It also allows people to search additional topics and get responses from the bot within the X platform without being a Premium+ user. These news summaries have actually been pretty good and accurate, providing a solid overview of the top news, and the search lets you dive even deeper. People who tried it also found Grok very powerful for summarizing things that are happening right now and were not part of its training data. A recent experiment asked it to provide a detailed summary of the "Let Loose" event that Apple just held, and it did an amazing job summarizing the information in almost real time based on the news.

So while people initially questioned Grok and its ability to actually be useful for anything, it's showing more and more capability in summarizing things in near real time and giving users that information in a way that's easy to digest.

Staying on Elon Musk: Tesla just showcased the Tesla bot, Optimus, performing various factory labor tasks, including sorting battery cells into containers and using several different tools on the actual Tesla assembly line. This is a very early version, so while these jobs are relatively limited right now, there is zero doubt in my mind that Optimus will be involved in more and more things. Tesla previously showed Optimus doing things like handling laundry, sorting items on shelves, and other residential-oriented capabilities. My two small additions to this: I think Optimus might be the biggest revenue generator for Tesla in the next few years, and the reason I'm saying that is that I think there are going to be a lot more Optimus robots built than Teslas, probably at the same price point and maybe even higher, with fair margins, across, as we mentioned, both residential and manufacturing use cases. And if you remember, just a couple of weeks ago I told you that Hyundai, the owner of Boston Dynamics, potentially the most advanced robotics company on the planet, has shown Atlas, their humanoid robot, performing different tasks on the assembly line of Hyundai's automotive factories. So that's two examples, both of them autonomous robots doing factory jobs. Going back to my earlier statement about the risk to the economy, this now goes beyond white-collar jobs: this goes into manufacturing, and any physical labor you can imagine, these robots will probably be able to do within the next three to five years. We'll see a lot more of them involved in more and more tasks that today are done solely by humans.

And still staying with Elon: xAI just announced that they're doing another significant raise. The round is expected to be $6 billion at an $18 billion valuation, with some of the leading funds in the world, Sequoia Capital, Founders Fund, Thrive Capital, and others, all in on a huge raise. This is an insanely high amount of money and an insanely high valuation, but I think these people are betting on Elon more than they're betting on Grok itself. They're also obviously betting on the huge amount of unique information Elon has access to, between X, the platform previously known as Twitter, the data being collected from Teslas all around the world, and now the robots. So Elon has a lot of really unique information, and if you combine that with access to Starlink, and potentially internet data from all around the world, and then later on data from Neuralink and connections to human brains, you understand that the potential here is very interesting.

And while we're on the topic of funding rounds, Mistral is raising another round at a $6 billion valuation from some leading funds, mostly in the US and Europe. It wasn't released how much money they're raising, but the rumors talk about $600 million, again at a $6 billion valuation. Their previous round happened just this past December, about six months ago, when they raised $487 million at a $2 billion valuation. So they're now raising, six months later, more money at triple the valuation they had back then.
For those of you who don't remember, we've talked about Mistral many times. Mistral has been the leading force behind open-source models other than Meta, releasing very powerful open-source models into the world, and earlier this year they announced partnerships with both Microsoft and Amazon to make their models available on those companies' cloud platforms.

Speaking of interesting companies, a new company called Upend, a Canadian startup that grew out of one of the local universities, just came out of stealth, and they have released an AI-based search engine, something like Perplexity. You've heard me talk about Perplexity many times on the show; I use Perplexity every single day, probably for more than 50 percent of my searches, instead of Google. Upend is another startup trying to do the same thing: give people an AI chat kind of interface that allows you to have a conversation with the data, but grounded in actual information from the internet, with links to go check the facts and dive deeper into these topics. I must admit that while this is a great platform, I don't see a huge future for it, (a) because Perplexity has already taken that spot, but also (b) because I think this is going to start a very big battle with Google. Let's put a few things together that we know: Perplexity has been taking more and more searches from Google, Microsoft is moving to do the same thing with Bing, and there are rumors of ChatGPT adding search capabilities. So Google has to start fighting back, and fighting back very aggressively, because if they lose even a few points of global search share, their stock is going to take a very serious hit. I think it's time we'll see Google taking off the gloves and making some aggressive moves to protect the goose that lays the golden eggs. Now, what does that mean for the future of search, or for SEO? I don't think anybody knows exactly, but I think it's very clear to everyone right now that it is not going to be the same search we were used to, and that the vast majority of interaction with data on the internet is going to be AI chat based. What does that mean for the entire global ecosystem of websites, affiliates, traffic, tracking, and the monetization connected to all of that? I don't think anybody knows, going back to my discussion about implications for the global economy.

Staying on new releases, IBM just released an open-source set of models that is supposed to help developers and coders write better code. The set of models is called Granite, the models range from 3 billion to 34 billion parameters, and they're trained on 116 programming languages. IBM is claiming that even their 8 billion parameter model now does better at coding than Meta's recently released Llama 3, which is itself a very capable coding assistant. I've told you several times before that coding is probably the first task where AI is going to replace a lot of humans, because it's a very well defined, structured universe, versus everything else, which is open ended and much less clear. So we're going to see more and more of these programming capabilities and copilots that allow people to write code at scale, and eventually troubleshoot and deploy it, without the need for as many humans as these tasks require right now.
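For developers who want to try one of the Granite code models, a minimal sketch of loading one through Hugging Face Transformers might look like the following. The exact model id is an assumption for illustration (IBM publishes the Granite code models under the ibm-granite organization), so check the hub for the current names and sizes:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face id for one of IBM's Granite code models;
# verify the exact name on the hub before running.
model_id = "ibm-granite/granite-8b-code-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit on a single GPU
    device_map="auto",           # place layers automatically (requires accelerate)
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the weights are open, the same model can also be run fully offline, which ties into the air-gapped deployment discussed next.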
And speaking of model releases, just an interesting piece of news: Microsoft just delivered a new ChatGPT-based model to the U.S. intelligence agencies, and the interesting thing about it is that it runs on a supercomputer that is disconnected from the internet. The idea is to create a model that can be trained to analyze the data these intelligence agencies are interested in, while keeping it in a box that nobody other than these agencies has access to. The reason I find this very interesting, beyond the news itself, is what it means for other companies that want to run AI in a protected environment. It means you can run very powerful models locally, trained for your specific needs, without connecting them to the internet, while still gaining a lot of the benefits. I think we'll see more and more of this in secure environments, whether it's banking, legal, or healthcare, and it will allow companies in these fields, which have very serious privacy and security concerns, to run very powerful models while not connecting them to the web.

That's it for this week. If you learned something from this episode, or if you're enjoying this podcast in general, please rate us on your favorite podcasting platform. You can do it right now: if you're not driving, pull out your phone, click on the review button, give us whatever star rating you think we deserve, and write a review in your own words about what you're learning and what you're getting out of the show. And while you're at it, share the podcast with a few people you think could benefit from it as well. This is your way to help other people gain better knowledge about AI, which hopefully will help all of us deal with this transformation in the most positive way. On Tuesday we'll be back with another fascinating interview, and until then, have an amazing weekend.