Leveraging AI
Dive into the world of artificial intelligence with 'Leveraging AI,' a podcast tailored for forward-thinking business professionals. Each episode brings insightful discussions on how AI can ethically transform business practices, offering practical solutions to day-to-day business challenges.
Join our host Isar Meitis (4-time CEO) and expert guests as they turn AI's complexities into actionable insights, and explore its ethical implications in the business world. Whether you are an AI novice or a seasoned professional, 'Leveraging AI' equips you with the knowledge and tools to harness AI's power responsibly and effectively. Tune in weekly for inspiring conversations and real-world applications. Subscribe now and unlock the potential of AI in your business.
129 | $10.6B of cash to OpenAI, ChatGPT Canvas, Priceline's AI travel agent, Liquid AI releases its revolutionary AI model, and more important AI news for the week ending on October 4
Is AI Changing Faster Than the Internet? A Deep Dive Into the Future of Work and Business Efficiency
How quickly is AI changing your business? Spoiler alert: faster than the internet and personal computers ever did. But what does this mean for your company, your leadership, and the future of work?
In this AI news episode, we explore the latest trends and breakthroughs in AI, from OpenAI's staggering new valuation to real-world applications in blue-collar industries and Fortune 2000 boardrooms.
I will also share key insights from the AI Realized conference, where top leaders like Lenovo's CTO and Airbnb's head of product unpacked how AI is reshaping entire industries. More importantly, you'll learn why AI adoption is not just about the tech—it’s about leadership buy-in, experimentation, and the human factor.
So, how can you start leveraging AI to boost your company's productivity today?
In this session, you'll discover:
- The surprising ways AI adoption is outpacing both PC and internet revolutions—and what that means for your business.
- Key insights from a Federal Reserve and Harvard study on generative AI use across industries, including surprising data on blue-collar adoption.
- How Fortune 2000 leaders are integrating AI with tens of millions in investments, and why leadership buy-in is the deciding factor for success.
- Practical strategies to launch AI projects at any company size, from quick wins with low-code automations to larger infrastructure overhauls.
- Updates on OpenAI’s $6.6 billion funding round and what it signals for the future of AI-powered growth.
This episode is a must-listen if you're looking to make AI a strategic asset for your business—without getting lost in the tech weeds.
About Leveraging AI
- The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
- YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
- Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
- Free AI Consultation: https://multiplai.ai/book-a-call/
- Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!
Hello, and welcome to a weekend news episode of the Leveraging AI podcast, the show that shares practical, ethical ways to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. Like every week, a lot happened in the AI world, including the funding round by OpenAI that finally closed, the release of some really cool features from OpenAI, a new technology that is very different from the AI architectures we had before, and a lot of other good stuff. In addition, I'm going to share some insights from the AI Realized conference I spoke at this week. So let's get started.

Before we dive into company-specific or tool-specific news, I want to share some interesting results from a study conducted by the Federal Reserve Bank together with a couple of universities, including the Harvard Kennedy School. They looked at the rate of adoption of AI technology, and what they learned, which is not surprising, is that generative AI adoption is significantly faster than the adoption of the PC or the internet was. I'll say in a minute why I think that's not surprising, but here are some of the findings. 39.4 percent of Americans ages 18 to 64 reported using generative AI just two years after ChatGPT became publicly available, with 28 percent saying they use it at work. This adoption rate significantly exceeds the 20 percent adoption of PCs, which took three years to reach. So a year more and a lower adoption rate, which shows you how fast this technology is being adopted.

Another thing I found really interesting in the survey is that one in every five workers in construction, installation, repair, transportation, and other blue-collar jobs uses AI on the job. That's already 20 percent in jobs that are not white-collar office jobs. In white-collar jobs, mostly in management, business, and computer-related occupations, the usage rate exceeds 40 percent. Another thing that wasn't surprising, but is very important to pay attention to, is that AI usage mirrors the workplace inequality trends that already exist. Workers with bachelor's degrees or higher are twice as likely to use AI compared with those without, roughly 40 percent versus 20 percent, and younger, more educated, and higher-income workers show higher adoption rates than the rest of the population. Again, if this is not addressed, it will increase inequalities in our population.

Now, what are people using it for? 57 percent are using it at work for different writing tasks, which is not surprising. 49 percent use it for information search and research, and 25 percent said they use it for all the other tasks that were listed, such as studying, administrative work, data interpretation, and so on. The last interesting point is that they tried to do quick math on the impact of this on the overall market. Their estimate is that between 0.5 and 3.5 percent of all U.S. work hours are currently being assisted by AI, and using that parameter, they calculated how much this would increase productivity. They concluded it would increase the productivity of the overall job market by between 0.12 percent and 0.87 percent. That sounds like a very small number, but if you multiply it by the size of the U.S. economy, which was $27.4 trillion at the end of 2023, that by itself is on the order of $137 billion generated in efficiency.
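If you want to check that back-of-the-envelope math yourself, here is a quick sketch. The work-hours share and the GDP figure are the ones quoted above; treating the productivity gain as a straight share of GDP is a simplification for illustration only.

```python
# Back-of-the-envelope version of the productivity estimate discussed above.
# Figures come from the study as quoted in the episode; treating the gain as a
# straight share of GDP is a simplification for illustration only.
us_gdp_2023 = 27.4e12                  # ~$27.4 trillion US GDP at the end of 2023
gain_low, gain_high = 0.0012, 0.0087   # 0.12% to 0.87% productivity uplift

low = us_gdp_2023 * gain_low
high = us_gdp_2023 * gain_high
print(f"range: ${low/1e9:.0f}B to ${high/1e9:.0f}B per year")  # ~$33B to ~$238B
print(f"midpoint: ~${(low + high) / 2 / 1e9:.0f}B")            # ~$136B, in line with the ~$137B above
```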
But this is just from the small day-to-day tasks that people are using generative AI for. It doesn't take into consideration the bigger, more significant things that larger companies and even mid-size organizations are implementing to gain efficiencies.

As I mentioned, I was at a conference called AI Realized this past week. I was one of the speakers, but the conference as a whole had an amazing list of speakers, including people like Ted Shelton, the CEO of Inflection AI, Jeremiah Owyang, one of the managing partners of Blitzscaling Ventures, which is Reid Hoffman's fund, Haling Fang, the head of product at Airbnb, Tolga Kurtoglu, the CTO of Lenovo, and many other people at that scale. The audience was senior leaders from Fortune 2000 companies who came to learn and share how AI is implemented at the enterprise level. There were a few very interesting findings. The first is that there are obviously very significant investments, and by very significant I mean tens of millions of dollars in next year's budgets for AI implementation. The other thing that was very obvious from everybody's presentations is that the human factor plays a very big role, whether it's training, experimentation, building excitement, or getting buy-in from senior leadership. All of these play an important role, and in many cases a more important one than the actual technology itself.

And touching on senior leadership: many of the participants, again, people implementing AI at the highest levels in the largest companies, said that without sponsorship and buy-in from senior leadership you are bound to fail, to the point that you should pick a different use case and find a sponsor who will support you, because the best use case without support and without sponsorship is not going to be successful. That's something to take into account. I see the same thing when I work with companies, and I work with a very wide variety of companies through my courses, my workshops, and the consulting I provide. It is very obvious that the biggest difference in the success of AI implementation is not the size of the company, it's not the industry, and it's not the technology. It's how much senior leadership is actually bought into the idea that AI is going to make a difference. That is the number one success factor. So if you're in an organization where leadership does not see the value, either try to convince them by bringing in education and additional resources, or potentially consider a different workplace, because that company is going to be left behind.

Another thing that was very obvious at the event is that there are two kinds of projects. There are small, quick projects where you can get quick wins and give many people in the organization the ability to experiment: building automations, GPTs, low-code and no-code workflows, and things like that. You have to define the boundaries and take care of data safety, but beyond that you can let people experiment. Then there are the big projects that require a lot of infrastructure, a lot of resetting, reshuffling, and cleaning of data, and a lot more time. Both are important, and both need to happen at the right time. The main thing everybody said is that regardless of the size of the organization, you have to get started.
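To make the "quick win" category concrete, here is the kind of tiny automation the speakers have in mind: a script that drafts a reply to a customer email for a human to review. This is an illustrative sketch, not a recommendation; the model name and prompt are placeholders, and you would swap in whichever provider your company has approved and keep sensitive data out of it.

```python
# A minimal "quick win" automation sketch: draft a reply to a customer email
# with an LLM, for a human agent to review before sending. Model name and
# prompt are placeholders; requires the `openai` package and an OPENAI_API_KEY
# in the environment.
from openai import OpenAI

client = OpenAI()

def draft_reply(customer_email: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever your plan includes
        messages=[
            {"role": "system",
             "content": "You draft polite, concise replies to customer emails "
                        "for a human agent to review before sending."},
            {"role": "user", "content": customer_email},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_reply("Hi, my order #1234 arrived damaged. What are my options?"))
```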
And the only way to get started is to actually get your hands dirty and experiment with these tools. Do it in a safe way, do it without risking your data, but just reading about this is not going to get you or your company ahead. The very first thing you've got to do is literally pick small, low-hanging-fruit use cases and go for them, try to implement them. That's how you'll learn, iterate, and start seeing results.

And now to the regular news, starting with OpenAI. OpenAI finally completed their funding round. They have raised $6.6 billion at a $157 billion valuation. Now, that sounds like a pretty high multiple, but as part of the announcement they disclosed that they're projecting a revenue jump to $11.6 billion over the next 12 months. If you consider that as their revenue, then $157 billion is roughly a 13 to 14 times forward revenue multiple, which makes a lot more sense. But $157 billion makes them one of the most valuable private companies in the world, less than two years after they launched their first product, which was completely free in the beginning. That is an incredible valuation that I don't think has ever happened before.

Some of the investors were existing backers like Thrive Capital, Khosla Ventures, and Microsoft, obviously, and some were new, like Nvidia, Altimeter Capital, Fidelity, SoftBank, and Abu Dhabi's MGX fund. Thrive Capital made a very interesting commitment: they invested $1.2 billion in this round, but they also negotiated the opportunity to invest an additional billion dollars next year at the same valuation if OpenAI hits specific revenue goals. So there might be another billion dollars in that pile in the next 12 months.

Now, we've also learned that OpenAI made $3.6 billion this year, and that their projected losses are supposed to be over $5 billion. That tells you the amount they raised is not enough money for the long run. If they lost $5 billion this year, and they're training significantly bigger models, hiring more people, and running more inference, all of which require more budget, then raising $6.6 billion is just not enough, which tells us they will need more money. As a first step, a day after that announcement, they announced that they've secured a $4 billion line of credit, which gets them to just over $10 billion in cash available to them right now. It comes from the obvious suspects, the major financial institutions, including JPMorgan Chase, Citi, Goldman Sachs, Morgan Stanley, and many others, some of the biggest banks and financial institutions in the world. So compared to last week, OpenAI has $10.6 billion more to spend on future growth opportunities, which include all the things I mentioned before: training new models, deployment, customer service, hiring talent, and so on.

Another interesting report says that one of the requirements OpenAI put to its investors is to provide exclusivity, basically preventing them from investing in its competitors. They named five specific companies: xAI, which is Elon Musk's company, Anthropic, Perplexity, Glean, and Ilya Sutskever's new company SSI, which stands for Safe Superintelligence. Musk obviously called them evil for putting these terms in the agreements.
And I tend to somewhat agree, but it's not the first time something like this has happened in the tech world. Uber has done similar things, and so have other companies in the past. That being said, the reason I think it's evil is that it prevents competition, which is never a good thing, and it definitely limits competition from smaller companies. If you think about the size of Perplexity, the size of Anthropic, the size of Glean, and definitely Ilya's company that hasn't even started yet, this takes away some of their ability to raise money, at least from these specific players, which reduces their ability to compete and develop other innovative ideas that we could all benefit from. So I don't see it as a positive move.

Now, the one company that was considered as one of the investors and ended up not investing is Apple, which is a little surprising to me. There were a lot of conversations earlier this year, and as we know, OpenAI is powering some of the new Apple features, or presumably is supposed to, because these have not been released yet even though the new iPhone has. So something is definitely not right in the relationship between Apple and OpenAI. I have to assume, and I don't know this for a fact, that the turmoil happening at OpenAI is not something Apple wants to be involved with, and maybe not something they're willing to bet on. And I have to come back to that: they just raised over $10 billion less than two years after releasing their product, and about ten senior executives, including some of the founders, left in the past few months knowing this was happening. That is not a good sign for the culture and the leadership style in the company right now. If one or two people leave, okay, that happens. But when most of your senior leadership departs while you're making these amazing strides forward, both technologically and financially, it is not a good sign. So Apple is not in this particular game.

The interesting thing to me is that while this is a huge amount of money, and it's going to keep OpenAI at the forefront of AI development in the world, it's a very small amount of money if you look at the competition. While OpenAI had to go out and raise this money in order to basically stay alive, look at the three other leading companies in the game: Microsoft, Google, and Apple. If you take Microsoft's EBITDA over the past 12 months, it's $131 billion, which means they make about $360 million of EBITDA every single day on average. That's roughly $2.5 billion of new cash every single week. Alphabet, which is Google, is a little smaller, with about $274 million a day, or roughly $1.9 billion a week. And Apple is in the same ballpark as Microsoft, with $344 million a day, which again adds up to well over $2 billion every single week. So these companies have a lot more cash than OpenAI has, they have the compute power, and they have the distribution, three things OpenAI does not have. OpenAI now has cash, about $10 billion, but these companies generate $10 billion roughly every five weeks. Every five weeks. Their ability to develop their own models and deploy them through their own distribution, on top of their own data, is something OpenAI doesn't have. So OpenAI right now is the dream kid and the superstar of this universe.
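Here is the rough math behind that cash comparison, using the per-day EBITDA figures quoted above. These are trailing-twelve-month numbers as cited in the episode, so treat the output as order-of-magnitude only.

```python
# Rough math behind the "new cash per week" comparison above, using the
# EBITDA-per-day figures quoted in the episode (order-of-magnitude only).
daily_ebitda = {        # approximate $ of EBITDA per day, as cited above
    "Microsoft": 360e6,
    "Alphabet":  274e6,
    "Apple":     344e6,
}
openai_war_chest = 10.6e9   # the $6.6B round plus the $4B credit line

for company, per_day in daily_ebitda.items():
    per_week = per_day * 7
    weeks = openai_war_chest / per_week
    print(f"{company}: ~${per_week/1e9:.1f}B per week; "
          f"matches OpenAI's ~$10.6B in about {weeks:.0f} weeks")

# And the valuation multiple mentioned earlier:
print(f"$157B / $11.6B projected revenue ≈ {157 / 11.6:.1f}x forward revenue")
```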
But I don't think that superstar status will last very long. I do think that Microsoft on their own, Google on their own, and Apple on their own will surpass the success of OpenAI in the future, just because of the things I mentioned: they have access to the money, they have access to data (Google has access to more data than anybody else), and they have access to talent and compute, all the things you need to make this successful, while OpenAI depends on these other players and on external financial support to get them.

Now, this whole raise is tied to OpenAI changing its structure to a for-profit organization, which might be one of the reasons, or some of the reasons, for the turmoil and all the people who are leaving, people who signed up to work at OpenAI because it was a nonprofit working to help humanity make the most out of AI technology. I'm sure there's other stuff, and I hope that over time we will learn more about what happened and what's still happening behind the scenes. There are specific terms in the money they raised that define what happens if they cannot make that change, but as it looks right now, that's the direction they're going to go. There have also been rumors of them changing their logo. The spiral-looking logo that we all know and have learned to identify may be changed to just a round circle, basically the letter O. We know they have a sweet spot for the letter O: you have the new model called o1, and the next model is supposedly called Orion, which also starts with an O. Whether they're going to do it or not, I don't know. I would be surprised if they did, because everybody now knows and is familiar with the ChatGPT and OpenAI logo, and I don't think you want to give that kind of brand equity away. But weirder things have happened, including at OpenAI, so we need to wait and see.

There has also been very tactical, practical, exciting news from OpenAI this week. Actually, two pieces. One, they've introduced a Realtime API that allows other companies to basically build conversational experiences, like the advanced voice mode, for third parties. There are already companies using it, and we're going to talk about that in a minute. The other thing they introduced is Canvas. Canvas is a new user interface for some of the functionality within ChatGPT. For those of you who have been using Claude Artifacts, it's basically the same thing: a side-by-side view where on the left you have the actual chat, and on the right you can see the outcome, whether it's the text you're producing or, if you're writing code, the code itself. They took this idea from Claude, basically copied Claude Artifacts, and took it to a whole new level. What they added are tooltips and snippets that pop up on the right side, on the output. If you're writing code, you can, through a little pop-up menu, port the code to multiple programming languages, which I find absolutely magical, so you can change it to Python, to CSS and HTML, to C++, and to other languages. By dragging a slider you can select different modes: whether you want to debug the code, add comments to it, or optimize it. All of these pop up when you touch different areas of the code, or for the whole thing, right there on the right side of the screen.
When you're working with code, this is amazing functionality that happens within the LLM environment without paying any additional money beyond what you're already paying for. And it's going to be rolled out to the free tier later on as well, probably with limitations on how much you can use it, just like all the free features they've released. It's incredibly powerful for regular text work too. When you're not writing code, it allows you to summarize sections, make them shorter, make them longer, basically make them ready for release, plus a lot of other cool functionality. You can highlight just specific sections and a tooltip opens just for that. It is really helpful. It's a completely different way to use a large language model, to the point that you wonder how the hell we used it before; it looks like the Middle Ages. So if you haven't tried it yet, go try it out. If you have any of the paid versions, you should have access to it, but like everything with OpenAI, they're rolling it out over a few days, so if you don't have access yet, you'll probably have it by early next week.

Now, going back to the Realtime API: Priceline, the online travel giant, has released an AI voice assistant called Penny that is powered by this Realtime API from OpenAI. It is basically similar to using the advanced voice mode, which was just released last week, and it allows travelers to engage in natural, conversational language about anything they want to know about travel from Priceline. This is an extremely helpful way for travelers to learn which hotels would be better, to say show me just the options with this rating, or I'm going to a conference at this convention center, what are the closest four-star hotels? Literally anything you want, like having your own personal travel agent that can speak multiple languages. While this is currently just for hotels, they're planning to expand it to flights, rental cars, and vacation packages later this year.

But if you broaden this even further, this is going to be the user interface we use to engage with any company. If you think about going to websites and browsing them to find information about specific companies, it makes no sense; it's a very ineffective way to learn what a company does, what services it provides, and so on. I think this direction of natural conversation, and later on agents that visit those websites for you, because they can visit multiple websites and come back to you with answers, is what is going to become the common way to engage with technology. As I mentioned, if you want to test it out on Priceline, you can try it right now. The other cool thing is that it supports 120 languages. We've talked many times before about how the concept of contact centers and call centers is probably a thing of the past. The idea of multiple people sitting in a room in front of computers with headphones, talking to customers, is going to disappear, because these tools can do it extremely well, in any language, 24/7. They're never annoyed, they're never unfocused, and they'll be able to connect to a lot more data than a human can. So I really think that contact centers, at least the way we know them, will disappear within the next five years, potentially faster.
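For the developers listening, here is roughly what talking to that Realtime API looks like over a raw WebSocket. This is a sketch based on OpenAI's launch announcement, not production code: the model name, the beta header, and the exact event shapes are assumptions, so check the current Realtime API documentation before relying on them.

```python
# Minimal sketch of a single text turn against OpenAI's Realtime API over a
# WebSocket. Endpoint, model name, header, and event shapes are assumptions
# based on the launch announcement; verify against the current docs.
import json
import os
import websocket  # pip install websocket-client

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"  # assumed model name

ws = websocket.create_connection(
    URL,
    header=[
        f"Authorization: Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta: realtime=v1",  # beta header from the announcement; may change
    ],
)

# Configure the session like a phone-style travel assistant.
ws.send(json.dumps({
    "type": "session.update",
    "session": {
        "modalities": ["text"],
        "instructions": "You are a helpful travel assistant.",
    },
}))

# Send one user turn and ask for a response.
ws.send(json.dumps({
    "type": "conversation.item.create",
    "item": {
        "type": "message",
        "role": "user",
        "content": [{"type": "input_text",
                     "text": "Find four-star hotels near the convention center."}],
    },
}))
ws.send(json.dumps({"type": "response.create"}))

# Read streamed server events until the response is complete.
while True:
    event = json.loads(ws.recv())
    print(event.get("type"))
    if event.get("type") in ("response.done", "error"):
        break
ws.close()
```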
And by the way, if you take this beyond the contact center, you can also use it internally, for things like training, HR questions that people have, and providing reviews to employees based on their actual results, and so on. And of course for external functions other than customer service, like outbound sales, inbound sales, BDR work, customer success, data analysis, and so on. More and more of this work is going to be done via voice, just by talking to the computer and getting the answers you want.

Now, several interesting pieces of news came from Google this week. One that I found really fascinating is that they've just released, as open source, a new model called AlphaChip. What AlphaChip does is design computer chips, and it can design a new chip layout in hours instead of the weeks or months it would take humans. Google also shared that this is what they've been using to develop their Tensor Processing Units, the AI chips that run their AI capabilities on Android phones and in the data centers they build. So the most advanced AI chips Google has are designed using this technology. The other cool thing is how the model works: they basically framed chip design as a game in which the computer tries to maximize its score by placing different chip components in different arrangements, and that's how it learns to design better, faster computer chips. I find this really interesting. It has already been adopted by other companies like MediaTek, another huge chip design and production company. So the future of new computer chips is already being assisted by AI, to develop the next chip, that will allow developing the next AI, and so on and so forth, in a perfect flywheel. Everything we're seeing right now that is moving extremely fast is going to move even faster.

Google also added some new features to Gemini in Google Sheets, such as the ability to create fancier tables than you could before, and you can do this straight from the sidebar. They also keep updating NotebookLM. For those of you who haven't used NotebookLM yet, and we talked about this in the last show and the show before that, NotebookLM allows you to take whatever data you have, and now also YouTube videos, basically any link, any document, any website, and multiple of them together, and connect them to a, quote unquote, notebook. All you have to do is literally drag the files in or copy the links, and that's it. Then you can use NotebookLM to summarize it, to create a learning guide for you, to create an FAQ, and also to create a mini podcast, a conversational podcast between a male and a female host talking about the topics in the documents and websites. I've been using it almost every single day. The way I use it is this: when there are articles I really want to learn about but don't have the time to read, and that happens every single day because the AI world moves so fast, I literally upload them to NotebookLM and create this podcast.
Then the next time I go for a walk with my dog, go for a bike ride, or just take my kids to school and I'm in the car, I listen to them one after the other, and in five or six minutes I get something that would have taken me a lot longer. And forget about the "a lot longer": the fact that I'm in my car and cannot read means this lets me absorb a lot of information very quickly in a really fun way, because I like listening to podcasts anyway. So if you haven't tried it yet, it is going to change the way you consume content.

We have been talking a lot on this podcast about the importance of AI education and literacy for people in businesses. It is literally the number one factor separating success from failure when implementing AI in a business. It's actually not the tech, it's the ability to train people and get them to the level of knowledge they need in order to use AI successfully in specific use cases, and hence generate positive ROI. The biggest question is how you train yourself, if you're the business person, or the people on your team and in your company, in the most effective way. I have two pieces of very exciting news for you. Number one, I have been teaching the AI Business Transformation course since April of last year. I have been teaching it twice a month, every month, since the beginning of this year, and once a month all of last year. Hundreds of business people and businesses have transformed the way they do business based on what they learned in this course. I mostly teach this course privately, meaning organizations and companies hire me to teach just their people, and about once a quarter we run a publicly available course. Well, that once a quarter is happening again. On October 28th of this month, we are opening another course to the public, where anyone can join. The course is four sessions online, two hours each, so four weeks, two hours every single week with me live as an instructor, with an additional hour each week for you to come and ask questions based on the homework, things you learned, or things you didn't understand. It's a very detailed, comprehensive course that will take you from wherever you are in your journey right now to a level where you understand what this technology can do for your business across multiple aspects and departments, including a detailed blueprint for how to move forward and implement it company-wide. So if you are looking to dramatically change how you, your company, or your department uses AI, this is an amazing opportunity to accelerate your knowledge and start implementing AI in everything you do in your business. You can find the link in the show notes; you can open your phone right now, find the link to the course, click on it, and sign up.

The other piece of news is that many companies are already planning for 2025, and we are doing a special webinar on October 17th at noon Eastern. That's a Thursday, October 17th at noon Eastern: a 2025 AI planning session webinar, where we are going to cover everything you need to take into consideration when you're planning budgets, HR, technology, anything you need as far as AI implementation planning for 2025. We're going to cover the things you can do right now, still in Q4 of 2024, to start 2025 on the right foot, as well as the things you need to prepare for in 2025.
If that's something that's interesting to you, find the other link in the show notes that will take you to the registration page for the webinar. The webinar is absolutely free, so you're all welcome to join us. And now back to the episode.

Now, the other very interesting thing DeepMind released this week is a new tool they call AI Lab Assistant, which helps scientific researchers do their work with AI. It helps predict experiment outcomes and assists in the research phases of the scientific process. The reason I find this important is that this is part of the promise of AI: that it will allow us to make new scientific discoveries, solve problems, address diseases that exist right now, and work on global warming, clean energy, and a lot of other things we're struggling with today.

Microsoft also revealed a lot of new updates this past week. There's a new voice interface for Copilot, basically the same thing we have with the advanced voice mode in ChatGPT, with four different voices to pick from. They've introduced what they call Copilot Daily, which is basically a personalized daily briefing on topics you're interested in, similar to the Alexa daily brief if you know it. You pick specific topics from news and finance that you want to know about, and it gives you a daily summary in one of the voices you pick. They've introduced Copilot Vision, which is experimental right now, but gives Copilot visual understanding of what you're looking at. They introduced a model that can think deeper, which I assume is based on OpenAI's o1 model, and they've introduced visual search, which allows you to do better image analysis for various tasks. So a lot of updates for Copilot.

They've also made updates to Windows 11. They've introduced Recall, which is something they demoed at their previous release event. It's a capability you have to opt in to, but if you do, it records your PC screen, not as a continuous recording but as a snapshot every few moments, and it can recall any piece of information that has been on the screen. It doesn't matter which software you were using or which logins you used; you'll be able to ask it, hey, I remember having a chat about topic X but I don't remember where it was, and it will find all the relevant information for you and retrieve it very quickly. That obviously comes with a whole can of worms that people may or may not be willing to live with, but the feature is available now in Windows 11. They've also added new functionality to Photos and Paint within Windows 11. Both of these tools now have the ability to remove objects from images, similar to what exists on Google's Android phones and in Photoshop, so now you can do this straight in Photos and in Paint. They also added the ability to generate backgrounds, essentially outpainting, so you can take an image that doesn't have a background and create one for it with AI; similar functionality exists in other tools, usually tools that cost more money than what comes with your operating system. And they added the capability to do natural language file search across everything in Windows and OneDrive, similar to the functionality that already exists with Gemini in Google Drive.
So all these tools are going in the same direction, where you'll be able to use natural language, and very soon voice, to search and find the information you need very quickly. Microsoft also added new functionality and capabilities to Bing, like improved search and AI-generated summaries that pull from multiple sources at the same time, as well as improved privacy across different aspects of their AI tools, mostly to be compliant with EU and UK privacy laws and the AI Act.

Now, as I mentioned before, one of the things Microsoft introduced is Copilot Daily, which is basically a summary of weather, events, and news. The interesting thing about it is that they are sourcing it from specific partners, which are Reuters, Axel Springer, Hearst Magazines, the USA Today Network, and a few other sources, and they are going to pay those publishers for the content delivered through Copilot Daily. This aligns with the trend we've seen from companies like OpenAI and Anthropic signing different kinds of licensing deals with publishers in order to get access to their content, but also giving them the lifeline they need, because many of these publishers are struggling to sustain a profitable business model. So this might be a win-win situation for everyone.

Now, we've covered most of the big companies, but we haven't talked about Meta yet. Meta just released something very interesting this week, a research paper about what they're calling backtracking. Backtracking is basically the ability for large language models to go back and fix unsafe or inappropriate output. It works the way it sounds: the model reads, or reviews, the content it generates, and if it considers it unsafe or inappropriate, it goes back, deletes that content, and regenerates something else in order to reduce unsafe or inappropriate output. This produces very good results. They tested it on Llama 3 8B, and it reduced unsafe outputs from 6.1 percent of the content to 1.5 percent, which is a huge decrease. They also tested it on Gemma 2, and it reduced unsafe outputs from 10.6 percent to 6.1 percent, again a big decrease. So the concept works. I assume that over time they'll be able to do the same thing for hallucinations, going back to check the actual content and fix it if it's not accurate, but even if it starts with just making the output safer and more appropriate, it's a great step forward. The interesting thing is that they're claiming it has only a minimal impact on overall generation speed, which is obviously great news.
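To make the backtracking idea concrete, here is a toy, inference-time approximation of it. Meta's actual approach trains the model to emit a special reset token and regenerate mid-stream; this sketch only mimics the outer loop, and the generate and is_unsafe functions are placeholders you would supply yourself.

```python
# Toy approximation of Meta's "backtracking" idea: generate, check the draft
# with a safety judge, and if it is flagged, discard it and regenerate.
# Meta's actual method trains the model to emit a reset token mid-generation;
# this is only an inference-time imitation with placeholder components.
from typing import Callable

def generate_with_backtracking(
    generate: Callable[[str], str],    # any text generator: prompt -> completion
    is_unsafe: Callable[[str], bool],  # any safety classifier: text -> flagged?
    prompt: str,
    max_attempts: int = 3,
) -> str:
    for attempt in range(max_attempts):
        # After a backtrack, nudge the next attempt toward a safe completion.
        suffix = "" if attempt == 0 else "\n\nRespond helpfully and safely."
        draft = generate(prompt + suffix)
        if not is_unsafe(draft):
            return draft               # keep the first draft that passes the check
        # Backtrack: throw the flagged draft away and try again.
    return "I'm sorry, I can't help with that."  # every attempt was flagged
```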
Another company we haven't mentioned yet is Anthropic. As we mentioned last week, Anthropic is also pursuing additional funding. But the other thing they did this week is hire Durk Kingma, who was one of the co-founders of OpenAI, so he's another person, and not the first, who held a senior position at OpenAI and moved to Anthropic. In addition to being one of OpenAI's founders, he has a PhD in machine learning from the University of Amsterdam, he is a former doctoral fellow at Google, he was leading research on some of the generative AI capabilities at OpenAI, like DALL-E 3 and ChatGPT, and he's also an angel investor and an advisor to several AI startups. He also played a role in Google Brain before its merger with DeepMind. So he has incredibly deep knowledge and understanding of everything AI, and he moved from OpenAI to Anthropic just like many others. The three best-known are probably Jan Leike, who was OpenAI's safety lead, John Schulman, another co-founder who jumped ship to Anthropic, and Mike Krieger, who moved over from Instagram, not from OpenAI, but is another big name who joined Anthropic in the last few months.

The next company we're going to talk about is NVIDIA. NVIDIA just unveiled NVLM, which I assume stands for NVIDIA language model, a multimodal open-source model they've just released that is showing very promising results. Just like the releases from all the other companies, it comes in several different model sizes. They're all multimodal, so they can understand images and text and video. The most advanced one has 72 billion parameters, and per NVIDIA it excels at understanding images while maintaining strong text performance. It performs really well on some of the top benchmarks out there, which doesn't mean that much, because these companies know how the benchmarks work and can train for them, but it still shows it's a solid model that can compete with the leading bunch, at least on specific tasks, mostly image understanding. They have released the weights on Hugging Face along with the code, but they have not released the training code yet, even though they say they will. That being said, it's currently not for commercial use; you're not allowed to modify it for resale, and it's purely intended for research and hobbyist experimentation at this point. Still, it's a very interesting move by NVIDIA, showing again and again that they're not just a hardware company; they're doing very advanced things on the software side as well.

Speaking of new and interesting models, a company we've talked about several times in the past, called Liquid AI, is finally releasing its models to the public. Liquid AI is one of the few companies that have developed a technology that is not transformer-based. They call their technology Liquid Foundation Models, or LFMs, and it runs on a completely different infrastructure and architecture than more or less every text-based model we know today. The GPTs of the different kinds that we know from all the different companies run on what are called generative pre-trained transformers, that's the acronym GPT, based on an architecture originally developed by Google. Liquid AI is one of the few companies that have developed a technology that runs on a different kind of AI, and they have released three different sizes of models. The very interesting thing about these models is that they use significantly less memory to get the same results, which means you need less compute, less cooling, less water, and less money to get to the same outcome. The other thing these models promise, and the original promise when I started reporting on them about six months ago, was an unlimited context window. In this current release they're offering a one-million-token context window; the only tool that currently offers more than that is Gemini 1.5 Pro, but presumably they can go way beyond that.
It will be very interesting to see how many companies start experimenting and actually building specific tools and applications on top of it, so we can really compare how it performs in real-life scenarios and use cases against the existing models. It is also multimodal, so it can take audio, video, and text as inputs and work with all of them at the same level. The company is a spin-off from MIT, and they're going to do their full launch on October 23rd at MIT. Put that on your calendar. I'm sure it's going to be really interesting to watch, and even more interesting, as I mentioned, to follow up as companies start implementing this technology, to see what it can do and how it's going to be different from the existing large language models.

That's it for this week. And I know when I say "that's it," that was a lot, really lots of big news for more or less every one of the big players. We'll be back on Tuesday with another how-to episode, where we'll dive into a specific use case like we do every single Tuesday. If you enjoy this podcast, please pull out your phone right now and give us a review on your podcasting platform, whether it's Apple Podcasts or Spotify. That helps us reach more people, and it's your way to help more people get educated about AI. Also, share this with people who can benefit from it. I'm sure you know two, three, five, ten people who could benefit from this podcast as well, so click the share button, send them a link, and write a few words about why they should listen. I would really appreciate that. And until Tuesday, have an amazing weekend.