Leveraging AI

95 | AI business growth lessons from Klarna and PwC, Google rolls back AI Overviews, new voice capabilities in ChatGPT, and more news from the week ending on June 8th

June 08, 2024 Isar Meitis Season 1 Episode 95

Is Google's AI Undermining Trust and Efficiency?

In recent months, Google's ambitious AI ventures have faced significant hurdles, raising questions about the reliability and ethical implications of AI in business. From bizarre search suggestions to accusations of plagiarism, Google's AI initiatives have stumbled, creating ripples across industries dependent on search engine integrity.

In this episode of Leveraging AI, join Isar Meitis as he unpacks the latest AI blunders by Google and what they mean for business leaders. We'll also explore the broader impacts of AI adoption, drawing insights from recent McKinsey and Reuters reports, and discuss how companies can strategically leverage AI to their advantage while avoiding common pitfalls.

As AI continues to evolve, businesses must stay informed and adaptable. Learn from Google's mistakes and discover strategies to implement AI effectively, enhancing your operations without compromising trust or quality.

In this episode, you'll discover:

  • The latest controversies surrounding Google's AI tools.
  • How Google's AI missteps could impact your business operations.
  • Insights from McKinsey's and Reuters' reports on global AI adoption and workforce impacts.
  • Real-world examples of companies successfully leveraging AI, including Klarna and PwC.
  • Strategic approaches to integrating AI in your business to maximize efficiency and growth.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript

Hello and welcome to a Weekend News episode of Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a jam-packed episode today. Since we shared a lot of negative news about OpenAI in the past few weeks, this week we're going to start with some negative news about Google, just to be fair. And then there's also enough stuff about OpenAI this week as well. So let's get started. So what is the news about Google? Late last year, Google released a feature that they called Search Generative Experience (SGE). They released it only to people who signed up for it, myself included, and it provided AI overviews on top of the regular search results. In their event two weeks ago, Google made this feature available to everybody, so nobody had to sign up for it, and you could just see those overviews on multiple queries. As I mentioned last week, that rollout didn't go very well, because it provided some really weird results. Some of them were just inaccurate, and some of them were just crazy, suggesting that people add glue to their pizza recipe to prevent the cheese from sliding off while it's being baked, or consume rocks for their nutritional benefits. In addition to that, this week one of the writers for Wired magazine, Navneet Alang, discovered that Google's new AI Overview literally copied, almost word for word, his work from an article he had written for Wired about using Anthropic's Claude chatbot. His actual article was not on page one of search, so he was understandably furious that on one hand they're suppressing traffic to his website, and on the other hand literally copying his work in order to put it above everything else.
Google defended themselves by basically saying that that's the whole point of overviews: giving people a sense of what is out there before they click into the articles. But Alang's claim, which I agree with, is that he wrote the article to drive traffic to Wired in order to make money from selling ads, which is how most publishers work today. And the fact that Google is copying his work, rather than creating an actual summary of multiple articles, prevents people from going to their website. All of that is leading Google to more or less roll back the deployment of this feature, not completely, and without really announcing it. But a company called BrightEdge, an enterprise SEO platform, has released some interesting statistics. Before launching the feature for everyone, Google slowly grew the visibility of what was previously called SGE from 15% of queries earlier this year to 84% for the short list of people who signed up for it. So 84% of searches received these overviews just before the release. After the release, and after all this controversy, that number has now dropped: only 15 percent of queries (one-five, so not fifty, but fifteen, down from 84) now show the AI Overview results ahead of the search results. This is obviously not very good news for Google. It's not the first time they've released an AI tool whose rollout hasn't gone well. And somehow, despite the fact that they have more data scientists, more data, more compute, and more money than probably any other company on the planet to get this right, they seem to keep failing at the productization of AI capabilities. Now, in the long run, Google doesn't have a choice, right?
They have to figure this out, because the livelihood of Google, maybe the most successful company ever built, is at stake. This company is built on the fact that people trust the data that Google shares in search results, and there was no real competition. But now, with OpenAI potentially creating their own search tool, which may or may not be integrated into Microsoft products and may or may not be integrated into Siri and Apple tools (I'm going to talk more about that later on), and with companies like Perplexity having very good and capable AI-based search results, Google is in trouble. And I would argue that if they don't figure this out fast, at least their stock is going to take a very significant hit. On top of that, several Gemini users have shared that they're starting to see gibberish results coming out of the Gemini chatbot. This has been reported by multiple people in different places around the world. They're saying it's not happening a lot, and when it did happen to them, running the same query again produced correct results. I must say, I really like Gemini. I use it a lot. I find that it writes very concise and human-like text, that it's also very good at summarizing and analyzing data, and it has the longest context window on the planet right now. That being said, there are weird things going on in Google's AI deployments, and I really hope, for them and for us, that they figure this out quickly. Now I want to switch gears and talk about the global impact of AI deployment on the workforce. McKinsey just released research that reveals 72 percent global adoption of generative AI, roughly doubling from what the survey showed in 2023. The survey is based on responses from over 1,300 participants across multiple industries all around the world. The increase was across all regions, with Asia Pacific and China showing the largest growth from last year.
The survey also shows that professional services industries, such as human resources, legal services, and management consulting, have seen the largest increase. In addition, concerns about workforce and labor displacement because of AI have decreased from 34 percent to 27 percent. Many of these organizations mention that they're taking commercial off-the-shelf tools and customizing them to their needs, which is providing them with competitive advantages in their niche, in their area, in their industry. In a somewhat contradicting report from Reuters, they shared that only 7 percent of people in the U.S. use generative AI daily, which highlights a significant adoption gap between the companies and individuals that are using it all the time and the vast majority of the population. Now, despite the low daily usage, people in the survey said that they see a significant positive impact on their efficiency at work, and many companies are significantly reducing their staff by using generative AI capabilities, specifically across marketing, content creation, graphic design, and programming. So before I share my thoughts on this whole thing, I want to give you two examples of actual companies and how they're benefiting from AI by going all in. The first company, which saw some controversy this week, is Klarna. Klarna's CEO, Sebastian Siemiatkowski, shared some insights about how Klarna is going to save $10 million to the bottom line this year, just in marketing alone. He broke down how they're getting the savings across several different aspects of marketing. The three major ones are reducing external marketing agency expenses by 25%, cutting the marketing team by half, and the ability to create images and campaigns significantly faster than they did before.
So he shared that while cutting the staff by half and cutting agency spending by 25%, they're actually generating better and more content than they did before by leveraging AI technology. Some of the things he mentioned: they've stopped using stock imagery completely, which is something I have recommended all my clients do for the past year. They're using tools like OpenAI's DALL-E and Adobe Firefly to create new campaigns, they've also trained their own model to help them with copywriting, and now 80 percent of their copywriting is generated by this AI-driven tool. This enabled them to cut the time they spend on every campaign from six weeks to only seven days, and that's with fewer people working on these campaigns. Now, those of you who have been listening to the podcast for a while, or have just been following the news, know it's not the first time Klarna is in the news for this kind of stuff, and Klarna has a close relationship with OpenAI. They have implemented ChatGPT for all their employees, together with training on how to build custom GPTs. They have built over 300 specific GPTs in the company that they use daily to do various tasks across the company. And back in March, they shared the results of the AI chatbot that they developed together with OpenAI. That chatbot took over 2.3 million support chats in one month, resolving them significantly faster than human support agents, cutting resolution time from about 12 minutes to about three minutes while keeping customer satisfaction at the same level, doing the work of 700 full-time employees. There's been a big controversy about this whole issue coming from Klarna. A lot of people attacked the CEO for firing all these people, both in marketing and elsewhere. But the reality is, this is where we live right now. So let's talk a little bit about the elephant in the room.
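As a back-of-the-envelope illustration of what a cycle-time cut like Klarna's six-weeks-to-seven-days figure means for throughput, here's a tiny sketch. The quarterly framing and the back-to-back scheduling are my own simplifying assumptions, not anything Klarna reported:

```python
# Illustrative only: how cutting campaign cycle time from 6 weeks to
# 1 week changes how many campaigns fit in a quarter. Assumes campaigns
# run back-to-back with no overlap, which is a simplification.
WEEKS_PER_QUARTER = 13

campaigns_before = WEEKS_PER_QUARTER // 6  # 6-week cycle: 2 per quarter
campaigns_after = WEEKS_PER_QUARTER // 1   # 1-week cycle: 13 per quarter

print(campaigns_before, campaigns_after)
```

Even under these toy assumptions, the same (smaller) team ships roughly six times as many campaigns per quarter, which is where the "more and better content with fewer people" claim comes from.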
If you go all in on AI, like Klarna and other organizations (and you should), you can gain significant benefits in your industry, because the reality is that the vast majority of companies are not doing it yet. That being said, you do not have to fire anyone. You really have three different options. Option number one is to go down the Klarna path and say: if I can get 30 percent savings or efficiencies across what I do, I can fire 30 percent of the people. I think that's the wrong approach, but it really depends on your specific company, your industry, and how much you can gain from that efficiency. So what do I mean by that? There are two other things you can do. The first is: if you can gain 30 percent efficiency across multiple aspects on the critical path of your company, you can look for ways to grow your business by 40 or 50 percent, which makes a lot more sense than cutting 30 percent of the people and losing the knowledge these people have of your organization, your industry relationships, and so on. How can you grow that fast? The reality is that if you are 30 percent more efficient, you can lower your prices by 15 percent and still add 15 percent to the bottom line. Also, as Klarna showed, you can cut the delivery time of whatever it is you're delivering by a lot; in Klarna's case, on the marketing side, from six weeks to seven days. That means you can offer better results, faster, for less money to the people you serve, meaning you're going to win a lot of business from your competition, which should allow you to grow much faster than everybody else, and which should be a much better decision than letting these people go. Now, that's not always possible. There are industries that don't have that elasticity when it comes to winning business from competitors: long-term contracts, regulations, whatever the case may be. But there's also a third potential aspect of this.
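To make the efficiency-versus-price-cut trade concrete, here's a toy calculation. All revenue and cost figures are invented for illustration (they're not Klarna's numbers), and the episode's 15/15 split implicitly assumes costs are close to revenue; with costs at 80% of revenue, as below, the split comes out a bit differently, but the same trade still works:

```python
# Toy model of the "30% more efficient" argument above.
# All figures are invented for illustration.

def profit(revenue: float, cost: float) -> float:
    return revenue - cost

baseline_revenue = 100.0
baseline_cost = 80.0
baseline_profit = profit(baseline_revenue, baseline_cost)  # 20.0

# A 30% efficiency gain means delivering the same work at 30% lower cost.
efficient_cost = baseline_cost * 0.70  # ~56.0

# Option A: keep prices unchanged and pocket the full savings.
keep_prices = profit(baseline_revenue, efficient_cost)

# Option B: cut prices 15% to win business, and still beat the baseline.
cut_prices = profit(baseline_revenue * 0.85, efficient_cost)

print(baseline_profit, keep_prices, cut_prices)
```

Under these toy numbers, even after a 15 percent price cut, profit still rises from 20 to roughly 29: you hand part of the efficiency gain to customers and keep the rest, which is exactly the growth play described here.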
The third option, sadly, I don't think will happen a lot, but I can tell you it's what I'm doing in my companies, what I recommend to all my clients, and what I teach in my courses. If you can gain, say, a 20 percent efficiency, give everybody in the company, or at least the people in your business who enjoy the benefits of AI, half a day off every week. Let people enjoy a better quality of life because of those efficiencies, because you can still run a very successful, and in this case more productive and more profitable, business. So why not let people benefit from that, instead of having fewer people work more hours to gain the same benefits? Do I see that as realistic in every company? No, we live in a highly competitive capitalistic world, but I would hope we'll see at least more than a few organizations going down that path. Another big company that is going all in on AI is PwC. PwC is one of the world's largest consulting companies, and they just shared that they've signed a huge agreement with OpenAI to give ChatGPT Enterprise to all their employees. In addition, they're going to become OpenAI's first reseller. So not only are they going to use ChatGPT and implement it within PwC, their goal is to help all their clients implement OpenAI's tools across their enterprises by developing the right processes and products and helping them with implementation. In PwC's own 2024 CEO survey, they found that 58 percent of CEOs expect generative AI to improve the quality of their products and services this year. And in joint research with OpenAI, PwC has identified over 3,000 generative AI use cases that they can implement internally as well as for their clients across multiple industries, like financial services, healthcare, manufacturing, and hospitality; they serve a lot of different companies in different sectors.
Now, they're obviously not alone in this business; they're just the first ones going to OpenAI. In Accenture's report on their Q1 results (Accenture is another huge consulting company, a PwC competitor), they shared that in Q1 they sold $640 million of AI consulting. That's up from very close to zero, or at least a completely different ballpark, a year and a half ago. So there's a huge demand for AI consulting. I can tell you that I do a lot of this kind of work very successfully, though obviously not at that scale. I work with smaller businesses, the kind of businesses that these companies do not serve, but the impact that I'm seeing working with these companies is insane. Even in two days of training, the amount of knowledge and the number of ideas and efficiencies that are mapped, and in some cases created, is absolutely mind-blowing. I literally just came back from a training session in Canada, working with a company I'm not going to name, and in one day we were able to identify a long list of practical AI implementation use cases that are going to save them time every single week across almost every department in the organization. So really, all you need in your business is somebody to help guide you through that process, with education and with some consulting, to understand where this is going and whether you can gain these kinds of benefits in the short term. Because in the long term, in three to five years, most companies will figure it out. But whoever figures it out first will be able to gain an unfair, significant advantage in their niche, in their industry. I want to dive a little deeper on this topic and give another example: the large international consulting company Deloitte.
In the first AI course that I taught, in April of 2023, just over a year ago, I had the privilege of having the CMO of Deloitte Israel as one of the participants. By the way, since then we've done two courses every single month, so we have taught hundreds of people in our AI course. But back in that first course, the CMO of Deloitte Israel was one of the participants, and one of the use cases we did was strategy for Deloitte. What we did is something I recommend to all my clients: three different assessments. You want to assess how AI is going to impact your company from a strategic perspective, and there are three different types of assessments under that. The first one is: what kind of services or products that you're selling today are going to be eroded or completely eliminated by the fact that your clients will have access to AI? In the use case we did for Deloitte, we looked at things like tax reviews or legal reviews, which AI might be able to do completely on its own in a year or two, which means they're going to make a lot less money offering those services. The other two assessments are on the positive side. One is: what kind of new services can you offer your existing clients because you now have AI? The Accenture example is a great one: those $640 million of AI consulting most likely come from their existing clients, something they didn't do before. So you need to think about what is relevant in your business, what new services or products you can offer because you now have access to AI. And the last one is: what kind of new clients can you serve because you have access to AI that you couldn't serve before? In the Deloitte use case we did in the course, we looked at what would happen if Deloitte took all the knowledge it has, every consulting engagement it has ever done, and trained a model on that.
And now, in addition to using it internally to do their next consulting engagement faster, better, and cheaper, they could approach small businesses, which they couldn't do before, because the overhead of Deloitte, Accenture, and PwC is gigantic. They will not take a $50,000 consulting project, because it just doesn't make sense for them. But if they have a chatbot that you can connect to your CRM, your ERP, your email, all these kinds of things, to provide it data, it can probably give you amazing ideas on how to improve the efficiency and profitability of your business. And because it's a chatbot, they can sell it as a membership for whatever number they decide, let's say $20,000 a year. Millions of companies would take them up on an offer like that, because they can get Deloitte-quality feedback and input into their business while paying only $20,000 a year. I would do that in my business. So they could access a whole different market segment that was just not accessible to them before. You have the same thing in your company; you just have to figure out how to think about it, how to analyze it, and how to find and prioritize these use cases, so you can plan for them and capitalize on them faster than other people in your industry. Okay, that was a lot, but let's switch gears and talk about OpenAI. Obviously, we cannot pass a full week of AI news without talking about OpenAI. OpenAI announced that they have stopped five different covert influence campaigns that used ChatGPT, driven by countries like Russia, China, and Iran, trying to sway public opinion on multiple topics in different countries.
They were using ChatGPT to create fake news and social media content across multiple channels, mostly to compose headlines, create images, and write social media posts. Now, the fact that these tools can be used for stuff like that is not new. But the fact that we now know, versus merely speculate, that government agencies are using AI tools to sway people's opinions, in a time of already highly split and controversial politics and in a year with potentially more elections around the world than ever before, is not good news. I'm actually really happy that OpenAI was able to catch this and potentially stop some of it. The question is how much of it is still happening across other platforms, using different tools that they didn't catch, or that nobody can catch because it's running on open source models on the attackers' own computers. This is something we obviously have to find a way to address globally: how to detect, block, or at least be aware of content that is being generated by AI. Now, we shared a lot of negative news about OpenAI and their safety practices, and about Sam Altman as well, in previous weeks, both with the departure of two of their leading scientists on their superalignment team and the disassembly of that team, and with previous board members openly saying that Sam Altman's ambitions completely overshadow the amount of effort they're putting into safety. To add on top of that, this week a group of nine current and former OpenAI employees wrote a letter raising concerns about the company's allegedly reckless, secretive culture as it races to build AGI, and how it is prioritizing profits and the growth of its impact over safety in the organization. I won't go into the details; you can find all of them by clicking on the links to the articles in the show notes.
But they believe that self-regulation alone is just not sufficient to keep the world safe from the future of AI. And they're not talking just about OpenAI; they're basically talking about all the big companies that are now racing to develop AGI. To put things in perspective, Daniel Kokotajlo, a former researcher at OpenAI, believes that there's a 50% chance that we will reach AGI by the year 2027, which is just around the corner. And he also believes, and again, this is one person, that there's a 70% probability that advanced AI will destroy or significantly harm humanity as we know it today. This is not great news. So what this group is calling for is some kind of external oversight over the companies developing AI, to regulate them and impose a higher level of accountability than the companies have on their own. Now, to reduce some of the heat that OpenAI has been getting about safety, in the last few weeks they announced that they are creating an AI safety committee within their board. To me this feels more like a band-aid, because they just disassembled the team of researchers that was working on this while putting together a few board members to look into it. That doesn't sound like the same level of scrutiny and effort as really investing in safety. In parallel, by the way, they released a research paper that was done by the superalignment team, showing progress in understanding how different aspects of these large language models actually work behind the curtain. This is similar to research done by Anthropic, which I shared with you last week.
So these companies are finding ways to learn how these models work in order to reduce the risk. In OpenAI's case, the fact that they shared it right now is, again, not great, because the team that found this has been dissolved; some of its members have left, and some are just doing other things at OpenAI. I think these reports, mounting from multiple directions and from people at different levels at OpenAI, both current and former, are nothing short of alarming. I would really hope that they create some pressure on governments, and potentially lead to the creation of international groups to monitor these companies and become the watchdog that saves us from the very unhealthy results that may come from misusing AI. And now to some good news from OpenAI. OpenAI just released another really impressive voice demonstration a couple of days ago. This demo shows ChatGPT's ability to create expressive and creative voices on the fly, almost seamlessly and without interruption. In this particular demo, a person is writing a play, and he's using ChatGPT to create voices for the different characters in his play. It's really amazing. He's creating voices for a lion and a mouse and an owl, each with a different character. You should listen to it; it's really short and amazingly impressive. Another interesting piece of news about voice: ChatGPT just added another capability to its voice feature, allowing it to work when the ChatGPT app is closed, while you're using other apps, or even when you're looking at your home screen when the phone is quote-unquote off. You can enable that by going to the app settings and enabling background conversations. Now, for at least most of us, the voice feature in the ChatGPT app is still the old GPT-4 Turbo voice and not the GPT-4o voice that they demoed.
But they're saying that the rollout of the GPT-4o voice is going to happen in the next few weeks. The more interesting aspect of this is, as I shared previously, that ChatGPT is going to power the new Siri. In two days, on June 10th, Apple's Worldwide Developers Conference, WWDC, kicks off, and they're expected to introduce a lot of AI features; the biggest launch is probably going to be ChatGPT integrated as the new Siri. So all these new voice capabilities are going to be embedded into more or less everything Apple, from your watch to your phone to your Mac. Being able to have human-like conversations with it about anything, while most likely being able to use it to activate multiple things on your hardware and across other software, is very appealing. OpenAI has already introduced their Mac desktop application, which is still somewhat limited, but I assume it will be a lot more integrated in the very near future and will allow it to see the screen as you're doing things, letting you use ChatGPT to help you with everything you do, if you allow it. The last piece of interesting news from OpenAI: they just announced ChatGPT Edu and ChatGPT for Nonprofits. In both cases, they're allowing educational organizations and nonprofits to use their latest models at a discounted rate. So if you're one of these types of organizations, you can now get access to these tools at a discount from what everybody else pays. Another big piece of news this week comes from NVIDIA and their CEO, Jensen Huang. At a tech conference in Taiwan last Sunday, Huang emphasized the importance of robots and self-driving capabilities in our somewhat near future. He's saying that in the next few years, the volume of self-driving cars and humanoid robots is going to grow dramatically.
And he's basically calling the next wave of AI "physical AI," meaning robots will be able to do most of the tasks that humans do, from daily housework and yard work all the way to factory work and other kinds of human labor. They shared again something from a previous announcement, something they call Robot Gym, which is basically an environment that allows robots and robot software to train in order to develop new capabilities faster than ever before. They have also, as you probably know, signed a deal with Mercedes to help them build self-driving cars that are supposed to be released next year. So what does that tell us? It tells us that in addition to the risk to white-collar jobs we talked about before, there's a serious risk to blue-collar jobs just around the corner, whether it's drivers of different kinds all the way to factory operators, and, as I mentioned, even simple jobs like doing yard work or fixing stuff in your house. Take that to the next level, which is something nobody's talking about, but I'll be really surprised if it's not happening at scale right now: law enforcement and military. We've already seen some basic examples, but the idea of a RoboCop, like the movie, or of something like a Terminator warrior that can do things humans can't do, at scales humans can't match, is just a matter of time, and I do not think that's good news. Still on the topic of NVIDIA: the U.S. government prevents the sale of NVIDIA chips to China in order to keep them from developing these kinds of capabilities, but there's a loophole, and apparently it's a very big one. The loophole is that Chinese companies can use NVIDIA-powered data centers as long as those data centers are in the U.S. And it has been found that companies like ByteDance, the company behind TikTok, as well as China Telecom, Alibaba, and Tencent, are doing exactly that.
so the tech giants of China, are either already using or in negotiations to use NVIDIA's data centers in the U.S. for their operations. And I really hope that somebody in the U.S. government is paying attention and is going to put an end to that. We haven't talked about Elon Musk for a while. Every time there is news about Elon Musk, it's always controversial and it's always really big. So CNBC shared some internal correspondence from back in December, which was just made public now, showing that Elon Musk redirected about $500 million worth of NVIDIA chips and processors from Tesla to his social media platform X and to his xAI company. To be fair, part of that communication says that Tesla is still going to get these chips, just at a later date, meaning he was simply prioritizing, from a timeline perspective, the chips to X over Tesla. This creates a very big controversy right now with Tesla investors, who are supposed to vote on whether to provide Elon Musk the $56 billion compensation package that was awarded to him in the past but was voided by the court, which is why Tesla shareholders are now supposed to reapprove it. Again, to be fair, when that compensation package was awarded, it was worth about $2 billion and it was tied to specific success parameters that Tesla was able to hit. So from a pure process perspective, it made sense to award that to Elon Musk back then, and I think the fact that people are looking at it right now at $56 billion and saying it doesn't make any sense is not completely fair to Elon and the original process. But regardless of whether I think it's a good idea or not, the vote is happening in a couple of days, and the fact that it was revealed that Elon put one of his other companies ahead of Tesla when it comes to priority doesn't look good for Elon Musk's compensation vote.
And from that to another controversial impact of AI on tools that we know and use every day. Adobe just updated its terms and conditions for apps like Photoshop and some of its other tools, requiring users to agree to the new terms in order to continue using the software. The new terms state that Adobe may access user content through automated and manual methods, and that users grant Adobe a license to use, reproduce, publicly display, distribute, modify, create derivative works from, and sublicense that content. Now that, obviously, made a lot of Adobe creators, and there are many millions of them, absolutely furious. So designers, artists, movie creators, and people like that were very loud about what they think of these new terms and conditions. Some users even mentioned that they cannot uninstall Adobe apps or contact support without first agreeing to the terms. Now, this is obviously very problematic. Since then, Adobe has tried to backpedal and explain why they put it in place. But the reality is, they need access to your creations in order to train their next models, which may or may not make the users of these tools very happy. How is that going to evolve? I'm not 100 percent sure, but I will keep you posted. I have a few interesting updates for you from my perspective. One of them is that we're coming up on episode 100, and we're going to do something really unique for it: literally the most value anybody has ever delivered in a short, condensed amount of time in a live session about AI. This is coming up in less than 10 days, so stay tuned, follow me on LinkedIn, and look out for it. I will share more details as things clarify, but we're going to do a huge live event with some of the most amazing practitioners and AI developers in the world, and each and every one of them is going to share their best tip on how to leverage AI in business to gain efficiencies.
This is perfectly aligned with everything that I believe in, whether it's building a community of amazing people that I love working with, or providing maximum value on practical AI implementation in businesses. So that's coming up on the 17th or 18th of June, and as I mentioned, stay tuned for additional details. In addition, this coming Tuesday we're releasing a really amazing episode with Drew Brucker, sharing exactly how to create stunning, engaging images for anything you need in your business in minutes instead of hours. So that's coming up on Tuesday. And until then, have an amazing weekend.