Leveraging AI

91 | The Empire (Microsoft) Strikes Back, Open AI Having a Really Bad Week, The Future of Work in the AI Era, And more AI news for the week ending on May 24th, 2024

May 25, 2024 Isar Meitis Season 1 Episode 91

Are you excited or terrified by an AI looking at EVERYTHING you are doing on your computer and assisting you with anything from work to gaming?

In this news edition, Isar dives into the latest AI news. Key topics include:

💻 Microsoft's AI-infused "Copilot PCs" and controversial screen recording

🚨 Troubling executive departures and safety concerns at OpenAI

🍕 Google's AI search fumbles with glue in pizza and geography gaffes

💼 Will AI take all our jobs? Insights from tech luminaries

🤖 Affordable home robots coming sooner than you think

🧠 A glimpse inside the "brain" of AI models

Get Isar's expert take on the rapid AI developments, potential impacts on jobs, and societal readiness (or lack thereof). Hear about concerning issues at OpenAI and amusing AI missteps.

Plus, an exciting announcement about a new self-paced AI course from Isar! 🎓 Stay tuned for details.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript

Hello and welcome to a weekend news episode of Leveraging AI, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. I just came back from a red-eye flight from California, where I spent two and a half days with one of my client companies running training and a mini hackathon, and it was absolutely amazing. I'm not going to name the company, but those of you who know who you are: you were awesome in that event. I've shared in the past that these kinds of events, training combined with a mini hackathon where different groups work on different projects, always generate amazing results, both in terms of the knowledge they give employees and in the level of excitement and forward momentum they create across the entire organization when it comes to implementing AI in the business. They generate a strong tailwind that pushes whole organizations into action. If you want to learn more about the kinds of training you can provide to your team or organization, I dedicated an entire episode to the different types of training and the pros and cons of each. That's episode 86 of this podcast; just scroll back a little in your podcast player and find it. If you haven't listened to it yet and AI training is something you're considering (and it should be), I highly recommend it. And now let's dive into this week's news. There's a lot. The biggest news of this past week is that this podcast has crossed one hundred thousand downloads.
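As a quick aside, the listening time that milestone represents is easy to sketch out. Here's a rough back-of-the-envelope check, assuming an average episode length of about 30 minutes (my assumption for illustration, not a figure stated on the show):

```python
# Rough estimate of the total listening time implied by the download
# milestone. The 30-minute average episode length is an assumption
# for illustration, not a number from the show.
downloads = 100_000
avg_episode_hours = 0.5                      # assumed ~30-minute episodes

total_hours = downloads * avg_episode_hours  # total listening hours
work_days = total_hours / 8                  # eight-hour work days
years = work_days / 365                      # years, with no days off

print(f"{total_hours:,.0f} hours ≈ {work_days:,.0f} work days ≈ {years:.1f} years")
```

Under that assumption, the numbers come out to roughly 50,000 hours, 6,250 eight-hour days, and a little over 17 years of continuous listening.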
The first thing I need to do is thank you, because each and every one of you, by listening, subscribing, and sharing this podcast, has helped it reach this amazing milestone. But what I'd really like to share with you is why I think this is important. The number itself is meaningless; it's just a vanity metric that nobody should actually care about. What is meaningful is that 100,000 downloads represent roughly 50,000 hours of AI literacy that people have consumed. Now, 50,000 hours may not sound like a lot, but let's break it down. Divided into eight-hour work days, that's 6,250 days. Even if you worked every single day without taking weekends, holidays, or even Christmas off, 6,250 days is over 17 years of AI literacy consumed by all of you. So again, all I can do is say thank you for trusting me to give you valuable content. The fact that you keep coming back and sharing this, and that the podcast is growing, is proof that we're doing the right thing. I also have to thank all the amazing guests who have joined me in the year since this podcast launched. So thank you for listening, thank you for sharing, and thank you for being a part of this journey, especially if you have been one of the guests on the show. After last week, when the big news came from OpenAI and Google with their two big events, this week the big event was Microsoft's. Microsoft Build 2024, their developer conference, happened this past week, and they had several big announcements. The biggest was a new line of PCs called Copilot+ PCs: machines designed to combine all the different components into a cohesive AI solution. That means new chips tailored to AI performance, combined with local models and cloud models, all working in tandem. They're combining
chips that are ARM-based from Qualcomm, as well as chips from Intel and AMD, with what they call a neural processing unit, an NPU, which allows these computers to run blazing fast. They claim that these new computers, in addition to all the AI capabilities, run 58 percent faster than M3-based MacBook Air computers, currently the fastest comparable machines from Apple, at least until Apple's own announcement coming in less than two weeks. So, blazing-fast computers, but the key to all of this is that these computers are going to be AI-based. What does that mean? It means the entire ecosystem, the entire operating system, everything is going to have AI infused into it. They haven't shared all the details of what that means, but they gave several examples. There will be AI agents built in that can participate in different actions in the company as virtual employees, similar to what Google announced last week, so not too surprising once the competition announced it. They also announced a very interesting and, I would say, controversial feature they call Recall. What Recall does is let you look back and search for anything that happened on your computer and retrieve it, whether you just remember who you spoke with, or what you talked about, or you're looking for something but don't remember whether it came in an email, a Slack message, or a text. With the Recall feature, the AI will be able to find it in seconds and even take you to the relevant page of the PowerPoint presentation on the topic you're discussing. Sounds amazing, right? So what's controversial? It does all this by literally taking screenshots of your screen at a very high frequency, so it can analyze everything you're doing, regardless of which platform you're on.
On one hand, this is really exciting and very helpful. On the other hand, it raises a lot of questions about where that data goes and what they are and aren't tracking. To relieve these concerns, Microsoft shared several steps it is taking so these features can be used safely. One is that everything happens locally on the machine; the data is not shared for training or any other purpose, it literally stays on your computer. Another is that you can exclude specific screens or applications from the feature, so if you're in your bank account or looking at other sensitive information, it will not be recorded. Am I willing to use this feature and give it access to everything I'm doing on my computer? I'm not sure yet. One thing I will say, after a couple of days to think about it: if more and more people start using it, and it saves them a lot of time at work and makes them more efficient, other people will probably suppress their security concerns and start using the feature as well. It obviously doesn't address what happens if you lose your computer, or one of your employees uses your computer, and all that data becomes easily searchable and accessible. So there are still a lot of unsolved questions, but it's definitely another one of those AI features that provides a lot of value on one hand and raises serious questions on the other. Another really cool thing they showed, again similar to something we've seen from OpenAI in their demo, and probably Google will do something similar, is the ability of their desktop app to watch the screen, participate in whatever you're doing, and be your sidekick or copilot in literally everything.
They gave an example of a person playing a computer game while the AI advises him in real time: what to be aware of, what to do, what the rules of the game are, basically any assistance he needs to succeed in the game. You can use the exact same thing in the business world: to develop presentations, or even to help you during a meeting, giving you hints and ideas as text on the screen while you're on a Zoom call, and so on. It's a very powerful capability that is going to be embedded into everything we do, not confined to its own little universe, because it will be watching everything we do and able to help in every aspect of our lives. Going back to what I said about the previous feature, the same kinds of concerns apply. On one hand, amazing; I would love that. On the other hand, I'm not sure I want somebody looking over my shoulder all the time, giving me advice and instructions. I'm sure each person and each company will find the right balance. But I think this is inevitable, and it's going to happen this year across most of the platforms, because Microsoft is already selling these computers and expects to sell a huge number of them this year. I'm sure everybody on the Google platform will get something similar, and anybody who chooses to use OpenAI's desktop app will be able to do the same thing. That obviously raises serious concerns for anybody in IT or IT security: exactly how is this going to work, and how is it going to be monitored and managed from that perspective? I don't have a good answer. I hope Microsoft and Google do. I'm not sure they do yet, but as I mentioned, I think we're all going to have to figure this out together.
On the flip side of Microsoft's exciting and interesting week, OpenAI had probably its worst PR week ever, with several key figures departing and sharing what they think about the company. The first departure actually happened just over a week ago: Ilya Sutskever. Ilya was one of the co-founders and the chief scientist of OpenAI, and one of the people behind the attempt to oust Sam Altman late last year over safety concerns, acting through the nonprofit board. After the whole event, he went silent. Nobody knew what he was doing; he didn't go public. When Sam Altman was asked about him, he basically avoided the question and said it was Ilya's place to share whatever he wanted to share. There was complete radio silence on the topic until Ilya decided to leave the company last week. Maybe Ilya's departure isn't too surprising, given that we know he had a big clash with Sam Altman, even though Sam had only good things to say about him, and that he had gone completely silent afterwards. But it did not happen in a vacuum, and the other departures shed light on what may be happening behind the scenes, which may not be great. Ilya was the one who put together the superalignment team, under his supervision as chief scientist. The superalignment team was in charge of safety in the scenario that we reach AGI, and it was promised 20 percent of OpenAI's compute in order to make sure the development of new AI systems happens in the safest way possible, especially in light of the race to AGI.
And remember, Ilya's involvement in trying to oust Sam Altman stemmed from concerns that the company was pushing forward faster than safety should allow. In addition to Ilya's departure, Jan Leike, the executive leading the superalignment safety team, also left. Upon leaving, he accused OpenAI of prioritizing, I'm quoting, "shiny products" over crucial safety research, stating that the superalignment team was struggling to get the compute resources it was actually promised, and that it was getting harder and harder to get crucial research done. He's describing very serious headwinds within OpenAI against doing the work that safety requires. The departure of Ilya and Jan basically means there is no more superalignment team; the people and resources assigned to it are being divided among other teams that are supposed to take care of safety as part of their regular work. This is obviously not good news. In addition, another employee, Gretchen Krueger, also resigned around the same time that Ilya and Jan left, citing very similar concerns about OpenAI's, and I'm quoting, "decision-making processes, accountability, transparency, documentation, and policy enforcement." That doesn't look very good. And if you look back a few months, two additional employees, Daniel Kokotajlo and William Saunders, left the company after, I'm quoting again, "losing confidence that OpenAI would behave responsibly around the time of AGI." None of this sounds very promising, especially when we've heard Sam Altman himself say they're going to spend any effort and any resource, no matter the cost, to achieve AGI.
So while on one hand they're saying they're doing everything for safety, on the other hand you hear the people who were in charge of safety leaving the company and claiming they weren't getting the resources and priority required to actually do the work that safety demands. This is alarming. And if we look beyond OpenAI at the other companies, and at all the recent interviews with the people leading this research elsewhere, it's just as alarming. None of them have good answers about what might happen, and the answer they all give is, well, we think it's going to be okay, we're trying hard, and it doesn't really matter whether we stop to think about it or not, because other people are doing it anyway, so we might as well do it too. I don't like this answer at all. Sadly, there's very little I can do other than let you know this is what's currently happening. I must say the flip side is that people like Yann LeCun, the head of AI at Meta, take the completely opposite stance: he thinks large language models cannot achieve AGI. He's claiming, including in a long interview this week, that the path of LLMs ends at a certain point, and that LLMs today are "not smarter than a house cat." That's a quote. He thinks we need completely different ways to train these models in order to achieve AGI, which is what he's leading with his team at Meta. Who is right? I obviously don't know; these people are way smarter than me when it comes to developing AI systems. But even if Yann LeCun is right, the fact that the leading labs are running faster and faster with less and less concern for safety is not good news.
In addition to all that, ChatGPT had a bunch of outages this week, which again is not great news; as I mentioned, not the best week OpenAI has had. Some of these outages were internal issues with the ChatGPT software that OpenAI acknowledged and fixed, and some were part of a compute issue at Microsoft that also impacted many Microsoft products. So if you experienced issues using ChatGPT this past week, those are the reasons. The only good news from OpenAI this week is that following last week's announcement of the new GPT-4o capabilities (if you missed that, go listen to last week's news episode, where I covered it in depth), they've seen a huge increase in people downloading the app and paying for the paid version, ChatGPT Plus. The reason is that those new capabilities, while released for free on desktop, are only available under the paid version of the ChatGPT mobile app. And because a lot of this functionality is mostly relevant on mobile, meaning the ability to talk back and forth and to show it the world with a camera (which, by the way, has not actually been released yet, though I don't think many people knew that when they downloaded the app and paid for it), this has led to an amazing spike. The revenue generated by paid users on the mobile app has grown significantly, going from an average daily revenue in May of around $491,000 to an average daily revenue of $900,000. This is amazing. I think it shows very clearly that people are excited about these new features and the ability to show the world to ChatGPT and consult with the model on everything they see, or just have a conversation with it. Going back to what we discussed about Microsoft: on one hand, yes, these are amazing capabilities.
On the other hand, it's really alarming. But this is the direction it's all going, and I think it's inevitable that we'll all have, initially, phones, and shortly after, wearable devices, with AI built in that we can use to get assistance in everything we do. From OpenAI to the other company that made big announcements last week and had a very interesting week this week: Google. One of the things Google announced last week is their new search experience with AI results, basically an AI-generated answer at the top of the search results. In addition to showing you all the usual results, there's an answer to your question at the top that doesn't require you to visit any website, generated by AI summarizing what it thinks is the best answer. And it gave some really weird answers this past week. The one that went viral: a user consulted Google about a pizza he was preparing, where all the toppings slid off as it baked. Google suggested mixing about one-eighth of a cup of Elmer's glue into the sauce to hold the pizza together. This is not something I recommend you do, but that's the recommendation he got from Google. I think we need to expect a few of these weird answers, because these tools don't have enough context, and context is what AI needs in order to give you good answers. In another query, a user asked which city in the U.S. has the best food, and the answer was Tokyo. As far as I know, Japan is not part of the U.S., at least not yet, so Tokyo shouldn't be among the U.S. cities with the best food. And yet, lack of context combined with hallucinations can produce weird answers. This is a problem I don't think any of the big companies has solved yet.
So while it's very helpful to get these kinds of results that give you the answer directly, instead of making you go through several links to find it, at least in the near future we'll still have issues we'll have to live with. The real problem is what happens when the answer isn't that obviously weird and you don't know it's wrong. I hope most people know not to mix glue into their pizza, but there may be questions where you genuinely don't know the answer, and the AI may give you one that sounds reasonable even though it's just as bogus as mixing glue into your pizza sauce. And then we start having problems. What does that do to Google's credibility? I don't know, but they've already made the bet. The chips are on the table, and now we have to see how the hand is played. From that, I want to switch to the whole topic of AI's ability to do our jobs. There were a few interesting discussions and news items on this topic this week, from several different directions. The first person is Aditya Agarwal, who held senior positions at Facebook, where he built some of its leading features, like News Feed and Messenger, and who was then the chief technology officer at Dropbox. So the guy has seen a thing or two in technology. He described creating code together with coding copilots as working with a demigod, an almost religious experience, the ability to co-create with these tools being something he has never seen before. He talks about merging intelligences in perfect harmony. Coming from somebody with years of experience at the highest level of code creation, that tells you something very unique is happening here, something that has never happened before, at least in the world of code creation and probably in any other creative field. But I think it's obvious to most people that it's only a matter of time until the person working with the computer to write the code is simply not required. The computer will write the code for you on its own, based on instructions you give it, maybe even very high-level instructions about what you're trying to achieve, and the coding copilot will figure out exactly what code needs to be created, what components, what APIs, how it should be structured, and maybe even the entire system architecture. I've told you several times on this show that I see, sometime in the future, and I don't know when, an App Store that is not an app store but an app-creation factory: you come in, say what you need, and an app tailored exactly to your skills, your connectivity, your tools, your environment, and your experience is created for you on the fly, so you can be as efficient as possible, versus the average application created for 20 million other people. But this is obviously not limited to coding. Coding is just an easier task because it's very well defined; it lives in the confined universe of the programming language. If this goes beyond that, and it will, it's just a matter of time, it leads to a lot of other questions that much smarter and bigger people than me are asking. Geoffrey Hinton, who is considered the godfather of AI and was one of the pioneers of neural network research, voiced his deep concerns this week, in an interview with the BBC, about AI taking numerous and significant jobs.
To mitigate that, he recommends the British government put together a plan for universal basic income, also known as UBI. The concept is basically the government distributing revenues from what AI generates to everyone. This idea has been discussed a lot in the past year because of these concerns that AI is going to take a lot of jobs. Sam Altman himself, the CEO and co-founder of OpenAI, has suggested something similar in recent interviews. He has even suggested that instead of UBI, each person on the planet could receive part of the compute of an AGI system, so each person would basically own a piece of the AGI. How exactly would that work? I don't know if Sam has thought it all the way through, but it's an interesting idea. And if you want a more extreme opinion, on more or less anything in the world, go listen to what Elon Musk has to say. Elon was interviewed as part of VivaTech 2024 in Paris; he did it over a video call rather than in person. He said that AI is most likely going to take most jobs, and I'm quoting, "probably none of us will have a job." He suggested that in this utopian future, jobs will be optional, because AI and robots will be able to produce every service or good we need as humans, with such abundance that there will be enough for everyone and no shortage of anything. He's actually promoting the idea of a universal high income rather than a universal basic income, which he believes is a much healthier approach. Like everyone else, he did not detail exactly how we get there. So whichever word you put in the middle of the acronym, basic income or high income, it doesn't matter if there's no clear path for getting there or understanding of what it means.
The other big concern, beyond money, is self-fulfillment. That's something that has been discussed by multiple people, myself included, but Musk said, and I'm quoting, "The question will really be one of meaning. If the computer and robots can do everything better than you, does your life have a meaning?" That's another big question that I don't think anybody has an answer to, and like I said before, it's something we will have to figure out together. Now, do I think AI and robots will take all jobs? Maybe. It will probably take a while. But do I think we're going to see a wave of people, initially in senior positions, losing their jobs, and then, once robots become more available and cheaper, even manual labor being replaced by robots? Yes, absolutely. And I know that as a society we're not prepared for that. I know some of you think the whole robots thing is much further out in the future, but that's not true, because some companies are pushing that boundary very hard as well. A Chinese company called Unitree just released a smaller version of their previous humanoid. The smaller version is called G1. It's about four feet tall and can do a lot of housework and other tasks. It cannot lift heavy weights; it can only carry 4.4 pounds, which is two kilograms. But it can run, climb, walk, and move around very freely, and it costs only $16,000. So you can have a small humanoid robot working in your house, doing different chores, for an amount many households can afford at this point. Their larger model, which is full size at 180 centimeters tall, or 5'11" for those of us in the U.S., is priced at $90,000. That's a very big difference, but it's obviously much bigger and much stronger and can do a lot more. The thing is, this is all coming sometime in the near future.
And when I say near future, I mean probably 2026, but definitely 2027, these full-scale humanoid robots will likely come down to prices people can afford. Maybe not all people; it will start with the wealthy. But factories and public services will definitely be able to afford them, because it will be a lot cheaper to maintain them than to pay humans to do the same jobs. So we're talking about a very near future, with no real discussion of how we deal with it as a society. Now, so as not to end on a doom-and-gloom note, I'll share something from Anthropic. One of the big problems with AI development, its risks, and its speed is that we don't really understand how these models work. Even the best researchers look at them as a black box, with very little visibility into what's actually happening inside. This past week, researchers from Anthropic released a paper sharing that they found ways to start looking into the quote-unquote brain of Claude 3 Sonnet, one of their Claude 3 models, and to start understanding how its logic works and which parts of the digital brain actually fire when different things happen. Think of it as a real-time view during brain surgery, watching which parts of the brain are working as different functions happen. As part of the research, not only did they identify which parts fire for different tasks or connect to different ideas, they were even able to manipulate them, enhancing or reducing the impact of some of these features on the way the model behaves. This is still not a full understanding of how these models work, but it gives us a glimpse, and maybe a way to keep researching at deeper and deeper levels until we understand exactly how they work, so we have better control over them as they develop into more and more powerful systems.
On a different note, we are about to launch a new product. As I've shared several times in the past, I've been teaching AI courses over Zoom to hundreds of people since the beginning of the year. I know it's not for everyone: some people can't free up the same two hours every week for four weeks in a row because of other commitments, whether it's a job or their personal lives, and some people just like the flexibility of learning on their own. So we have created a self-paced version of the course, built from the segments I was teaching. It's the same content, just broken down into easy-to-consume short segments, with exercises in between, that you can take on your own time. It's also a lot cheaper. If this sounds interesting to you, keep following me on LinkedIn; we will share how to sign up sometime in the very near future. That's it for the news this week. This coming Tuesday there will be another fascinating interview episode, in this case about how to create newsletters and content, from the very first step of researching your content to summarizing, writing, editing, and publishing it, all with various AI tools. It's a fascinating episode that shows how to combine and mix and match different tools at different steps of the process to achieve a final product that would otherwise have taken you, or a team of several people, many hours. That's it. I'll see you on Tuesday, and until then, have an amazing weekend.