Leveraging AI

97 | Apple integrating AI into everything Apple, OpenAI will achieve AGI by 2027 and added the former head of the NSA to its board, two amazing text-to-video models are available to the public, and more AI news from the week ending on June 14

June 15, 2024 | Season 1, Episode 97 | Isar Meitis

Join our online AI party this Monday, June 17th, from 12 to 2 PM Eastern Time, where 20 top AI experts will share their number one tip for leveraging AI in business. Don’t miss out on this unique learning and networking opportunity. Register on LinkedIn Live or Zoom—details here https://www.linkedin.com/events/thebiggestaionlineeventoftheyea7206358194040778752/theater/

Is Apple's AI Revolution Really Private and Context-Aware?

Apple has just made some bold promises about their new AI capabilities, but are they truly solving the biggest AI challenges?

Privacy and context have long been major concerns in the AI world. Apple's approach to embedding AI across its ecosystem aims to address these, but how effective are these solutions?

In this episode of Leveraging AI, Isar Meitis covers the latest from Apple's Worldwide Developers Conference (WWDC), dissecting the announcements and what they mean for the future of AI. From Apple's privacy-focused strategies to the potential of their new AI features, we'll explore the impact on business users and beyond.

In this session, you'll discover:

  • How Apple's new on-device AI operations enhance data privacy.
  • The innovative ways Apple is integrating AI to provide context-aware responses.
  • The potential and limitations of Apple's new AI features set to release this fall.
  • Insights into the latest AI tools for text-to-video generation and what they mean for content creators.
  • The ongoing safety debates within OpenAI and the broader implications for AI development.


About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript

Hello and welcome to Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and like every week, this week has been jam-packed with news. We are obviously going to start with the huge announcements from Apple, but there are a lot of interesting things happening across multiple players in the industry, many that we don't talk about a lot, and there's interesting news about them this week as well.

But before we dive into the news, I have some really exciting news of my own. This coming Monday, June 17th, from 12 to 2 PM Eastern Time, we are going to hold the biggest AI party online, probably ever, but definitely for my podcast and for people on LinkedIn. In this event, we are hosting 20 of the top AI experts and practitioners, and each and every one of them is going to get five minutes to share his or her number one tip on how to use AI for business benefits. So if you're looking for a way to learn 20 amazing tips from 20 amazing AI experts in only two hours, come and join us this Monday. These are a group of really amazing individuals, all of them cool and fun to be with, so it promises to be an educational as well as fun event. And we're doing this to celebrate episode 100 of this podcast, so if you just want to come and celebrate with us, that's another very good reason to join us on Monday. This event is going to happen on LinkedIn Live as well as on Zoom. If you join us on Zoom, you'll also be able to stay after the event for some networking with the experts, but the seats for that are limited, so it's first come, first served. On LinkedIn Live there are as many seats as you want, so if you cannot make it to the Zoom, at least come and join us on LinkedIn. That's it for my personal news about the celebration of episode 100 and the amazing event we're doing on Monday. And now to this week's news.

The first thing we're going to talk about is Apple Intelligence. Apple held their Worldwide Developers Conference this week, where they made the anticipated announcements about all their new stuff, but the focus was obviously their new AI capabilities. We were all expecting that, and there were a lot of rumors leading up to the event about what exactly they would share. The outcome is actually very interesting, and it's very Apple. Apple introduced what they call Apple Intelligence, which is basically AI features embedded into everything they do across all their operating systems, from iPads to Macs to, obviously, iOS and Siri. It's going to be embedded into almost every operation and every aspect of using the Apple ecosystem. And they're addressing two of the biggest problems of any AI platform out there: privacy and lack of context. I want to address both of these very quickly.

A lot of people are concerned about where their data goes when they upload information to one of the large language models, and they have every right to be. Many of these companies claim they're not using your data for training, yet you are still sending them a lot of data when you use their models, and then you are at their mercy as to what they're actually doing, or not doing, with it. So privacy is a big deal.

Apple is solving this in a very unique way. The first piece is a focus on on-device operation, meaning a lot of the processing happens on your phone or your Mac and is not sent to any cloud environment. The data stays local on your device, or, to use the great line from Apple themselves, they are aware of your data without collecting your data. The other piece is what they call Private Cloud Compute: dedicated compute in the cloud that serves just your request and doesn't talk to anything else, which the device can use when bigger, more complex operations are required. Only the data required for that particular operation is sent there, the computation runs in the cloud, and the data gets deleted once the task is over. Again, this is Apple sticking to what Apple does best, which is focusing on user data privacy.
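Apple hasn't published how the routing between the device and Private Cloud Compute actually works, but conceptually it behaves like a dispatcher between a local model and a dedicated cloud enclave. Here is a minimal sketch of that routing idea; every function name and threshold below is hypothetical, not a real Apple API:

```python
# Conceptual sketch of the on-device vs. Private Cloud Compute split.
# Nothing here is a real Apple API; it only illustrates the routing idea.

ON_DEVICE_LIMIT = 3_000  # pretend size limit for what the local model handles

def run_on_device(task: str, context: str) -> str:
    # On this path, the data never leaves the device.
    return f"[local model] {task} ({len(context)} chars, kept on device)"

def run_on_private_cloud(task: str, minimal_context: str) -> str:
    # Per Apple's description: only the data the task needs is sent,
    # it is processed, and it is deleted once the response is returned.
    return f"[private cloud] {task} ({len(minimal_context)} chars sent, then deleted)"

def handle(task: str, context: str) -> str:
    if len(context) <= ON_DEVICE_LIMIT:
        return run_on_device(task, context)
    # Send only the slice of context this particular task requires.
    return run_on_private_cloud(task, context[:ON_DEVICE_LIMIT])

print(handle("summarize this email thread", "x" * 10_000))
```

The design point is simply that the default path keeps everything local, and the cloud path receives only a task-scoped slice of data rather than your whole device context.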
The other problem they're solving is just as important: lack of context. The biggest problem all the large language models share is that they don't have context. They don't know who you are, where you work, what you like, what your preferences are, what you're trying to achieve, what industry you're in, et cetera. That's why we have the whole concept of prompt engineering and writing very long prompts, why OpenAI recently added long-term memory across chats, and why Microsoft and Google promise to eventually integrate everything we do into this universe in order to know more about us. But it doesn't exist yet. So Apple jumps right to the front of the line by integrating with everything in the Apple ecosystem.

For that, you obviously need to be a heavy Apple user. I use a Mac regularly, that's my computer, and I've been using Macs for about 10 years, but I use an Android phone and the Google ecosystem for most of my day-to-day office needs, so I'm not fully integrated into the Apple environment. I know I'm weird that way. But a lot of people use the full Apple ecosystem, from writing documents to emails to iMessages, et cetera, and Apple Intelligence will be able to collect data from all these places in order to give you more specific answers based on the information it gathers across them. This is obviously huge and provides a lot of amazing benefits.

As an example, they shared something they didn't call an agent, but it is an agent. For those of you who don't know, agents are more sophisticated AI tools that know how to take a task, break it into smaller tasks, and execute them to completion, thereby completing the bigger task. The example they gave is asking Siri, "When is my mom's flight landing?" What it actually needs to do in the background is find the message from your mom, whether it came through iMessage, email, or another channel; find the flight number; go to the web and check the latest information about the flight; and then check the traffic on the road from your house to the airport in order to advise you on when to leave to pick her up. These are multiple steps you didn't ask for; you just asked when she's landing. Behind the scenes, the agent figures out everything it needs to do, using information from across the multiple apps it has access to.
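Apple showed no code for this, so here is only a toy sketch of the general agent pattern: one request decomposed into steps, each drawing on a different data source the assistant can reach. Every helper below is a hypothetical stand-in, not an Apple API:

```python
# Toy illustration of the agent pattern described above: one request is
# broken into smaller steps, each using a different data source.

def find_flight_number(contact: str) -> str:
    """Would search messages/email from the contact for a flight number."""
    return "UA 1234"  # stand-in result

def get_flight_arrival(flight_number: str) -> str:
    """Would look up the live arrival time for the flight on the web."""
    return "6:45 PM"

def estimate_drive_minutes(destination: str) -> int:
    """Would check current traffic from home to the destination."""
    return 35

def when_to_leave_for(contact: str) -> str:
    flight = find_flight_number(contact)        # step 1: find the message
    arrival = get_flight_arrival(flight)        # step 2: check the flight
    drive = estimate_drive_minutes("airport")   # step 3: check traffic
    return (f"{contact}'s flight {flight} lands at {arrival}; "
            f"leave about {drive} minutes before that.")

print(when_to_leave_for("Mom"))
```

The interesting part is that the decomposition itself is done by the model, not hand-coded like this; the sketch just makes the chain of subtasks visible.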
So this is the dream. Whether it actually happens, we'll have to wait and see: the release of all of this is in the fall, which tells me none of it is actually ready yet. If you go back to the Google and Microsoft events from the past few weeks, they said some of their stuff would be released in a few weeks and some in a few months. In Apple's case, they're saying more than a few months. The fall is a pretty big timeframe, probably between September and November, which gives them several more months to keep working on what they showed in order to actually make it work. And even then, we don't know what will actually be released and what won't. We've been seriously disappointed over the past year and a half by amazing demos from companies like Salesforce, Microsoft, and Google of things that were supposed to be released and haven't been yet. Now, Apple has a pretty good track record of releasing what it announces, but in this particular case they are not releasing on the day of the announcement, which was usually the case; they're giving themselves months of slack. Again, my conclusion is that it's not ready yet, and we don't know how much of it will actually be ready by the fall, but the vision is very interesting.

In addition, they shared a lot of AI-supported features across almost everything in the Apple universe: Photos gains the ability to create collections, trips, and summaries and to automatically aggregate pictures based on people; voice isolation in AirPods helps you hear audio that has a lot of background noise; the Apple Watch gets automatic translations and new watch faces generated from your photos; and a new Calculator can do math even when you handwrite equations with a stylus on top of it, plus a lot of other cool AI capabilities integrated into more or less everything Apple. So lots of promises, and very attractive. As a Mac user I'm definitely looking forward to it, but we'll have to wait a few months to see how it actually works.

Now, there were a lot of rumors before the event that ChatGPT, AKA OpenAI's large language model, was going to be the engine behind a lot of this, and definitely behind Siri. In reality, Apple did not do that. It does, however, let you access ChatGPT straight from your devices for free. Free access is now somewhat available elsewhere, but with a lot of limitations, and what the limitations will be when using it from Apple devices hasn't been defined yet. But it's very clear this is a strategic partnership between OpenAI and Apple: OpenAI gets access to a lot more users who are used to their Apple devices and will now be able to use ChatGPT, and OpenAI will be able to learn from that usage, even though they're saying they won't train on that data. The person who obviously jumped all over this is Elon Musk.
Elon has a history with OpenAI in general and Sam Altman specifically, and he basically said he's going to ban all Apple devices at the companies he runs, because in his view it's potentially a serious data breach for anybody using Apple devices, with all that information going to OpenAI. Apple, OpenAI, and everybody else obviously pushed back and said that's not true, that no data will be collected, and that they will stay true to Apple's promise of complete privacy. In this particular case, knowing the history with Elon, I think he's just looking for another way to get back at OpenAI rather than having any real merit to his allegations.

Another big piece of news this week comes from the world of text-to-video generation. Two different platforms have taken social media by storm. One is called Dream Machine, a new platform from a company called Luma. You can just Google "Luma Dream Machine," put in text prompts, and it generates highly realistic videos. They're relatively short, but they're really impressive, and many people are already testing them and doing really cool things with them. It's currently free, and if you Google Luma Dream Machine you'll see a lot of examples of really cool short videos people are creating with it right now.

The other one is called Kling. It comes from a Chinese video platform company called Kuaishou, and it allows you to create up to two minutes of highly realistic video. There are some really cool videos they've already released, like a boy riding a bicycle and a panda playing a guitar next to a lake, and they look very impressive. It's very hard to say whether it's as good as Sora or not, but there's one very big difference: Sora is not really available yet. For those of you who don't know, Sora shocked the world by showing highly realistic, amazingly accurate, very consistent AI videos. It was shared by OpenAI back in February, but they only released it to a select few, including filmmakers and people within the industry. Some of them are actually going to show short videos made with Sora at the Tribeca Film Festival, which is just coming up, but it is not available to the public yet. There are other tools out there, like Runway and Pika Labs, that have been in the text-to-video niche for a while. Neither is as impressive as the tools I just mentioned; they were earlier in the game, but they've been somewhat left behind, at least for now.

What does that mean for all of us? It means the world of video generation from text is going to take a huge leap in the next few months, and it's going to completely explode in 2025. I assume Sora will be released to the public, maybe driven by these new tools, Kling and Luma, or maybe just because OpenAI decides it's time. But sometime in the next few months, we will have multiple tools that can generate highly realistic, consistent videos from a simple prompt. That has huge implications across multiple industries, from solopreneur content creators all the way to large film studios, which will have to figure out how to use these tools as well as how to compete with companies that can use them without the huge budgets only the studios have. So, like everything else in the AI world, it's really exciting and really scary at the same time.
Now, speaking of data, OpenAI, and risk coming from OpenAI: in the past few weeks I've shared more and more concerns raised by various former and current OpenAI employees about there not being enough focus on safety at the organization, specifically putting a lot more weight on revenue and the AGI race than on safety aspects that might be very serious. In another development on this topic this week, Leopold Aschenbrenner, and I hope I'm not butchering his last name, was an employee on OpenAI's Superalignment team, the same team led by Ilya Sutskever, who is no longer at OpenAI, alongside Jan Leike, a senior researcher who also left OpenAI claiming the team wasn't given enough resources. Leopold was fired from the Superalignment team in April, presumably over an alleged leak of information. He claims all he did was share some research about risks with other scientists, which is common in the industry and was common at OpenAI. That being said, he has written a very long essay about what he thinks the future of AI looks like, and he also discussed it in an interview on the Dwarkesh Podcast, hosted by Dwarkesh Patel.

In the interview, he says OpenAI will most likely achieve AGI by 2027 or 2028. That is just around the corner. This is, by the way, not different from what other people are saying, but it's now coming from a person with deep knowledge from the inside. He's a top researcher and a really smart person who graduated from Columbia University at the age of 19. He obviously knows more than many of the people talking about this topic, both because he was on the inside and because he's a very smart individual. And the timeline he's stating says AGI will be here way before we are ready for it.

However, and I think in large part because of all this negative publicity and people talking about how badly OpenAI is doing on safety, OpenAI started a safety committee on its board, as I told you last week, and this week they announced they are adding former NSA head and retired General Paul Nakasone to the board of directors. Nakasone led the military's Cyber Command in addition to his time as head of the NSA, which, based on the roles he has held, makes him most likely one of the most knowledgeable people in the world when it comes to data security. So at least it seems OpenAI is taking these allegations seriously, and I really hope this new safety and security committee, together with the experience of their new board member, will seriously put some safety measures on top of what OpenAI is doing. That being said, going back to Leopold: he's saying there's a serious need for an international group and a lot more collaboration between different AI developers, as well as other international bodies, in order to make this a global safety initiative. I really hope that's coming next, because we don't have a lot of time to figure this out.

By the way, speaking of OpenAI, new initiatives, and big hires: OpenAI recently appointed Sarah Friar as Chief Financial Officer and Kevin Weil as Chief Product Officer. Both have vast experience running large companies, and the speculation is that OpenAI is planning an IPO.
There have been multiple rumors recently, and they keep getting stronger, that Sam Altman wants to turn the company into a for-profit organization, releasing it from its nonprofit board and ownership, and then an IPO would make a lot of sense. In related news, OpenAI's annualized revenue has doubled to $3.4 billion since the end of 2023, when their annualized revenue rate was $1.6 billion. It's now $3.4 billion, up from $1 billion just last summer, so more than 3x in one year. This obviously shows the huge demand for their services and their leading position in the industry as a whole. Their partnerships with multiple companies play a very big role in that: through Microsoft they can get a lot done with larger organizations, alongside corporations that simply decide to use OpenAI's enterprise offering directly, and we've covered several of those in past episodes.

The last piece of OpenAI news also involves Microsoft and Oracle. In a new three-way partnership, OpenAI will be able to use Microsoft's Azure AI platform running on Oracle Cloud Infrastructure for inference and other needs, which gives OpenAI more compute capacity. When I say inference, I mean the generation of new content: not training the models, but actually using them. So OpenAI will use Azure, still a Microsoft platform, but instead of running on Microsoft infrastructure it will run on Oracle infrastructure, giving all parties something to be happy about. Microsoft can probably free up some of its servers for other things it needs, OpenAI can continue growing and potentially avoid some of the outages it had in the past few weeks, and Oracle gets another huge partner running in its data centers.

And from OpenAI to Perplexity. I've told you several times in the past that I really like Perplexity. It's a great research tool that integrates a large language model with web access and web search very well. But they have been under attack for the past week and a half, and even more so this past week, by people saying they're literally plagiarizing and stealing other people's content. This week, Randall Lane of Forbes accused them of stealing content from news outlets like Forbes, Bloomberg, and CNBC without attribution. He says the chatbot, as well as the new Pages feature, takes complete passages verbatim, word for word, from existing articles and presents them as its own summaries without giving proper attribution to the sources that created them. Perplexity's CEO, Aravind Srinivas, basically said it's not a big deal, it's a product feature with rough edges, and they're working to fix it. The problem with that is that, A, it's unethical, and B, it may well be illegal, and they may get in serious trouble for stealing other people's content and making money in the process.

To add to that, I'll share something interesting from my own experience. For those of you who don't know, Perplexity launched a new feature called Perplexity Pages, which allows you to create a new web page that summarizes content from across the web on a specific topic. It's a very powerful capability. Think of it like Wikipedia on the fly, where you can create a Wikipedia-style page with images, connections, and links,
and all the data from the internet summarized into one page, just by asking for it. But here's the interesting thing that happened to me this week: in preparation for this episode, I was looking for information about the new Luma Dream Machine, and a Perplexity page, one of those on-the-fly generated pages on the topic, was ranked number two in Google search. After all the snippets and so on, there were two organic results: the first from the company itself, and the second from Perplexity, ahead of all the other articles on the topic. So in addition to the fact that they're lifting content, this new feature will drive a lot more traffic to Perplexity instead of to the people who generated the original content, because it's already happening. Where is this going? My gut feeling tells me there's a lawsuit on the horizon that will force Perplexity to fix what it's doing and potentially compensate some of these companies for the damage already done.

Since I mentioned Google and the impact on search, I want to share something else about Google. Google had a big event this week releasing their new Chromebooks, and as expected, they've added a lot of AI tools to them. The announcement didn't make the same buzz as the earlier Google, OpenAI, or Apple announcements, but they actually added a lot of really great AI tools straight into the Chromebook, and that's not wishful thinking: they're in there right now. There are tools like "help me read," "help me write," "help me create," gaming assistance, and hands-free control, all built straight into the Chromebook operating system and the Google tools that come with it. The hands-free feature is really cool: it literally allows you to operate everything on the computer using just gestures and voice. I really think this is where the future is going. Combine the ability to operate a computer with just voice and gestures with the capabilities we've already seen from the Apple Vision Pro headset and what can be done there with hand gestures, and the future is not going to be a mouse and keyboard but something a lot more human and intuitive. Many people are saying that, for the first time in history, instead of us adapting to how computers want us to work with them, they will adapt to how we want to work with them. That's going to make our use of computers a lot more intuitive and built into a lot more of the things we do in our lives.

And from Google to Microsoft. Microsoft just announced they're cutting support for the GPT builder in Copilot Pro. For those of you who don't know, GPTs are a capability developed by OpenAI within the ChatGPT environment that lets you create customized AI assistants for specific use cases. In my business we probably have 20 of them right now, helping us do everything in the day-to-day of the company, and many other people have built them too. Microsoft had integrated GPT-building straight into Copilot Pro, and now they're taking it away. Starting July 10th, 2024, the company will remove all the GPTs that were created, and in the following four days, July 10th through the 14th, will remove all the data associated with them.
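To make "GPTs" concrete: a custom GPT is essentially a reusable set of instructions (plus optional knowledge files and tools) wrapped around the base model. Here is a minimal sketch of that idea using OpenAI's standard Python client; the model choice and the instructions are illustrative, not anything Microsoft or OpenAI ship as-is:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "personality" of a custom GPT boils down to instructions like these.
SYSTEM_PROMPT = (
    "You are a proposal-writing assistant for a small marketing agency. "
    "Follow the agency's tone guide and ask clarifying questions "
    "before drafting anything."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works for this pattern
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Draft an outline for a proposal for a dental clinic."))
```

The GPT builder's value is that non-technical users get this same effect, instructions plus knowledge plus tools, without writing any code, which is exactly what Copilot Pro users are now losing.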
Why they're doing that is not totally clear to me, even though they've given various excuses; I think it's a very powerful capability. I assume what they're trying to do is push people to use Copilot Studio to develop a Microsoft-based solution for creating these customized AI assistants. But the reality is that a GPT is a lot easier to build than a Copilot Studio solution, so they're dropping a capability in a way that I think will just drive more people to build on OpenAI's side instead of keeping them within the Microsoft ecosystem.

And from Microsoft to Mistral. Mistral is a French AI company that has been releasing increasingly powerful open-source AI models, and they had two big pieces of news this week. The first is a new tool that allows people to fine-tune and customize AI models faster than ever. They're designing it for both technical and non-technical users, and early users say they're seeing significant improvements in how quickly they can adapt these open-source models to their specific needs. This is a very real requirement for many organizations that want to tailor AI to their own internal solutions, and the fact that it's open source makes it even more attractive, because you can run the model on your own servers without sharing anything with the rest of the universe. From a data safety and security perspective, the ability to run a model in your own environment and now customize it easily is a huge benefit. In addition, Mistral shared that they have raised 640 million dollars at a 6 billion dollar valuation. That's a huge jump from their valuation just a year ago, when it was only 240 million. Mistral is definitely one of the bigger players, and in the open-source world there are really two huge ones: Meta, which has been releasing all its models as open source, and Mistral. Both release very powerful, capable open-source models. I use Mistral models across several things I do, and they perform pretty well. They're not as good as the top models from OpenAI and Anthropic, but they are very good, definitely good enough for many tasks, significantly cheaper, and, as I mentioned, you can run them on your own servers, making them a lot more secure.

And if we're talking about funding rounds, Cohere, another large AI company, has raised 450 million dollars this year in support of its growth, according to Reuters. The new funding comes at a valuation of 5 billion dollars, more than doubling the 2.2 billion valuation of June 2023, so another huge jump in just one year. Cohere took a very different approach from the rest of the companies: they're not trying to build frontier models. What they have been focusing on is a very capable RAG architecture. They have a solution called Command R that allows companies to build RAG solutions. RAG stands for Retrieval-Augmented Generation, which is basically a way to ground the model in your own data: instead of retraining the model, the relevant pieces of your data are retrieved and handed to the model when a question comes in, so you get highly accurate responses based on the data you give it. That has been their growth engine, and it gives them an interesting niche in the market, at least until the big players provide the exact same thing. Then we'll see whether they survive or not, but right now there are clearly enough people giving them a lot of money who believe they have something unique.
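For those who want to see what RAG looks like in practice, here is a minimal sketch of the pattern. This is not Cohere's Command R implementation; the OpenAI embedding model and the toy in-memory "knowledge base" are stand-ins for a real vector database:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# A toy knowledge base; in practice these would be chunks of your own
# documents stored in a vector database.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are Monday to Friday, 9 AM to 5 PM Eastern.",
    "The Pro plan costs $49 per month and includes API access.",
]

def embed(texts):
    """Turn text into vectors so we can compare meaning, not just keywords."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)

def answer(question: str) -> str:
    # 1. Retrieve: find the documents most similar to the question.
    #    These embeddings are unit-normalized, so dot product = cosine similarity.
    scores = doc_vectors @ embed([question])[0]
    context = "\n".join(documents[i] for i in scores.argsort()[-2:])
    # 2. Generate: hand only the retrieved context to the model.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How long do I have to return a product?"))
```

The key point, and why "train the model on your data" is the wrong mental model, is that the model's weights never change; accuracy comes from retrieving the right passages at question time.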
Another company we don't talk about a lot, but that is a really big player in the AI universe, is Databricks. Databricks has been providing back-end AI infrastructure for several years now, and this week they made two big announcements. The first is that they completed the acquisition of Tabular, a large platform that lets companies seamlessly access data across multiple data platforms like Amazon Web Services, Google Cloud, and Microsoft Azure, and that provides data connectivity to many open-source analytics engines such as Apache Spark, Apache Flink, and Trino. The acquisition gives Databricks, already a huge infrastructure company, a big new capability across everything it does, definitely across the open-source universe, and it puts them in very serious competition with Snowflake, which has been the dominant player in that space so far. In addition to acquiring Tabular, Databricks released their Unity Catalog as open source this week. Unity Catalog is a really important tool their customers have been using for several years that lets them catalog any kind of data, whether tabular data, unstructured data, or AI and machine learning assets, all in a single environment, instead of having to address them one by one on multiple platforms, which was the situation before Databricks developed this technology. Open-sourcing this important capability will let them grow together with a much larger community and become more accepted in the open-source world.

In an interesting article this week, McKinsey shared their lessons learned from developing Lilli, an AI-based tool McKinsey has been building for the last five years that helps them gain much faster and more detailed knowledge about their clients, enabling those clients to change their businesses and grow faster. They mentioned five lessons from the development of that tool that I think are important to anyone starting any kind of AI project, large or small.

The first: define a clear vision tied to the organization's goals. Before jumping in and chasing shiny objects, look at the strategic direction of your company, your strategic goals, and how they might be impacted by AI-driven changes in your industry and your niche. This is something I spend a lot of time on with my clients, and something we spend a lot of time learning and working on in the course, because I think it's a very important point.

Number two: assemble a multidisciplinary team. This is another highly important thing I work on with my clients. You want input from everybody in the business, because it gives you a better understanding of the different needs within the organization that AI can solve, it leads to better conversations because people bring different capabilities, and, in addition, you get a champion in each and every department of your company.

Number three: put the user first. They say that in the beginning they focused heavily on the research and development of the tool itself, but over time they understood that the really important part is that it needs to improve how they work, meaning they had to weave it into the day-to-day of actual people working in the company. That is highly recommended: instead of chasing cool tools, look at what actual needs you have in your business, what tasks you may not even need to do because AI can replace them, or what AI can enhance dramatically on the critical path of whatever you generate, whether it's a product or a service, and focus on that first.

The fourth lesson: teach, learn, repeat. In other words, AI solutions are not deploy-and-forget solutions. They can get better and better over time if you invest the time and resources in tracking exactly what's happening within those tools, how people are using them, what they use and don't use, how good the output is, and so on, and by doing so continuously improving them. I do this internally all the time: I continuously change my GPTs, improve my automations, update my prompt library, and so on. That's obviously a very small scale, but you can do these kinds of things at any scale in your business, and you should, in order to get the most out of AI platforms.

And the last lesson: measure and manage. They put a lot of metrics in place to measure the success and effectiveness of the tool they deployed. In their case, one of the most important KPIs was answer quality, meaning how good the answer Lilli generated was, based on user ranking and speed; when it wasn't good enough, they went back, fixed the model, and retrained it to get better results. In another business, if you're not training your own models and you don't have the resources of McKinsey, just track usability, look at how much the tool is actually helping you, find the areas where it's not helping enough, and then make the needed adjustments, as shown in the small sketch below.
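As a tiny illustration of that "measure and manage" idea, here is what logging user ratings and surfacing weak answers could look like. The structure, fields, and thresholds are made up for the example; this is not McKinsey's actual system:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class AnswerLog:
    """Collects per-answer user ratings so quality can be tracked over time."""
    entries: list = field(default_factory=list)  # (question, rating 1-5, latency in seconds)

    def record(self, question: str, rating: int, latency_s: float) -> None:
        self.entries.append((question, rating, latency_s))

    def report(self) -> dict:
        return {
            "avg_rating": mean(r for _, r, _ in self.entries),
            "avg_latency_s": mean(l for _, _, l in self.entries),
            # Low-rated answers point to where prompt or model fixes are needed.
            "low_rated": [q for q, r, _ in self.entries if r <= 2],
        }

log = AnswerLog()
log.record("What drives churn in segment A?", 4, 2.1)
log.record("Summarize the Q2 market scan", 2, 6.8)
print(log.report())
```

Even something this simple tells you which questions your AI tool handles poorly, which is exactly the feedback loop the "teach, learn, repeat" lesson depends on.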
In the past few weeks we didn't have any weird news, but this week we do. The city of Cheyenne, Wyoming is going to vote for a new mayor, and one of the candidates is VIC, which stands for Virtual Integrated Citizen: an AI agent. The idea is that an AI candidate, or AI mayor, would be able to address key issues by really taking huge amounts of data into consideration and doing it in a completely objective way. I find this very attractive and interesting. Obviously, critics say there are many unanswered questions about how that would work, how it would manage the day-to-day of an entire city, and what the ethical implications are. But I think what we'll see in the long run is probably not virtual candidates, but candidates who commit to using AI tools to collect more information about the needs of their constituents, and to analyze that information with AI in order to make better and faster decisions. That, I think, is going to win a lot of candidates a lot of votes.

That's it for this week. Don't forget: if you're listening to this before Monday at 12 PM Eastern Time, come enjoy our AI expert extravaganza. It's going to be amazing, you're going to learn a lot from the best, and you don't want to miss it. If you're listening after the fact, meaning after Monday the 17th at noon, don't worry about it.
We're going to release that content as episode 100 in a few weeks. That's it for now. I hope to see you on Monday, and have an awesome weekend.