Leveraging AI

104 | AI robots working at factories, serious security issues at OpenAI, Multi-token prediction, and more AI news from the week ending on July 5th

July 06, 2024 Isar Meitis Season 1 Episode 104

Is AI Revolutionizing Your Industry Fast Enough?

This week’s episode of Leveraging AI dives into the explosive advancements in AI technology and the serious security issues shaking the industry.

Discover how AI is transforming manufacturing, with robots like Tesla’s Optimus and BMW’s Figure One taking over assembly lines. Imagine robots that can be trained quickly and are cost-effective enough to revolutionize entire industries.

In this session, you'll discover:

  • How Tesla and BMW are deploying robots to revolutionize manufacturing.
  • The impact of Tesla’s $10,000 humanoid robots on labor and business strategies.
  • Meta’s open-source model breakthrough and what it means for AI development.
  • OpenAI’s troubling security breaches and what they mean for your data safety.
  • The massive investments flooding into AI startups and what it means for the future.

Stay ahead in your field by understanding these critical developments and how they could affect your business. Join our AI Business Transformation Course starting on Monday, July 8th.

Check the course here:
https://multiplai.ai/ai-course/ 

Join us next Tuesday for an exclusive interview on how to harness AI to drive business success.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript

Hello and welcome to a weekend news edition of the Leveraging AI podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host. And just like every week, this week has been jam-packed with news. A lot of interesting stuff has happened on the fundraising side of AI this week, as well as some serious concerns about AI safety, specifically from OpenAI. But the first piece of news is really unique, so let's dive in. About six months ago, we learned that BMW had signed an agreement to use the Figure One robot to work at its factories. This week, a video was released actually showing the Figure One robot learning and working at an assembly machine. The robot was able to pick up a very large plate and align it with the relevant pegs to place it within the machine. And then, when it wasn't perfectly oriented with some of the smaller parts, it actually pushed it back into place to have everything sitting properly, and operated the machine. Now, this didn't seem to be actually happening on an assembly line at BMW, but apparently they are starting to deploy these robots to the BMW plants. In parallel to this, we know that the Optimus robot by Tesla started working at Tesla's Fremont factory a couple of weeks ago. So roughly at the same time, about six months into 2024, we have humanoid robots working in factories. Interestingly, both of them are in vehicle assembly environments, but this is obviously not the end; this is just the beginning. To tell you how significant this news is: in a recent interview, Elon Musk said that gen three of Optimus will cost only $10,000 to produce, and that they're planning to produce thousands of them through 2025.
That means that they're going to be able to replace a lot of employees in a lot of their factories, and not just their factories, because if they can create a robot that is humanoid, that can be quickly trained to perform multiple tasks on an assembly line, and they can make it for $10,000, that will make it extremely attractive for other factories as well. It also raises an interesting question about what's going to be the main income driver of Tesla as a company, because if they can make them for $10,000 and sell them probably for a lot more than that, because companies will be willing to pay it since it's still going to be a lot cheaper than having assembly line employees, and they can generate thousands or tens of thousands of these very quickly, that might become a bigger revenue stream for Tesla than actual Tesla cars. Now, I'm not giving any investing advice. I'm just stating that this is a very interesting junction for Tesla as a manufacturer of technology. Before the next piece of news, I want to remind you that our AI Business Transformation Course is starting on Monday, July 8th. So if you are listening to this episode before that, you are in luck, because you can still join the course. There are still a few seats left. It's a course I have been teaching personally since April of last year, and hundreds of business people have gone through this course and are now transforming their businesses with AI using the knowledge they gained through it. The course has four two-hour sessions that go from a general understanding of AI all the way through multiple business-specific use cases, including the tools that are required and how to use them in order to gain significant efficiency benefits, as well as how to potentially change the entire strategy of the company with AI. If that's something you're interested in, you can check out additional information and sign up for our July session using the link in your show notes.
So just open your app right now, click on the link, and navigate there. And now back to the news. Switching from robots to another great technology that suddenly became available to everyone this past week comes a new announcement from Meta. A few months ago, I shared with you that Meta had developed a new technology that allows models to predict multiple tokens at once. What does that mean? AI models work on tokens. Tokens are these segments of words. So when you hear about a context window, it's measured in tokens. When you hear about generation speed, it's always in tokens. But so far, all the large language models have worked one token at a time. They predict the next token in a sentence, building tokens into words, words into sentences, and so on. Meta's researchers were able to build models that predict multiple tokens at once. Now, this was previously just a research paper from Meta, and now they have actually released an open-source model that is available to the public for research purposes. This means that the entire AI ecosystem, and researchers and developers around the world, will be able to build on top of this capability and build models that will be significantly faster and will use a lot less energy in the process, as well as being able to do more sophisticated things, because they will think in blocks of tokens instead of doing them one by one. One of the interesting things they released as part of this is a relatively small model, with only 1.1 billion parameters, called Cicero, which is an AI agent that was initially trained to excel at diplomacy, and it has actually achieved human-level performance in diplomacy. I find this really interesting because yesterday at dinner I had a conversation with a bunch of people, and some of them said, this will take a while to replace human negotiators and salespeople. And I said, I don't think so.
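To make the single-token versus multi-token distinction above concrete, here is a tiny, purely illustrative Python sketch. Everything in it is my own invention to show the decoding-speed idea — the `fake_forward` stand-in and the head layout are not Meta's actual architecture:

```python
# Toy illustration of single- vs multi-token decoding. A model with
# n_heads output heads emits n_heads tokens per (expensive) forward pass.

def fake_forward(context, n_heads):
    """Stand-in for a transformer forward pass: deterministically
    'predicts' the next n_heads token ids from the context length."""
    base = len(context)
    return [base + i for i in range(n_heads)]

def decode(prompt, total_tokens, n_heads):
    """Greedy decoding; returns (tokens, number_of_forward_passes)."""
    tokens = list(prompt)
    passes = 0
    while len(tokens) - len(prompt) < total_tokens:
        preds = fake_forward(tokens, n_heads)
        passes += 1
        need = total_tokens - (len(tokens) - len(prompt))
        tokens.extend(preds[:need])  # keep only as many as still needed
    return tokens, passes

prompt = [101, 102]
_, single_passes = decode(prompt, 16, n_heads=1)  # one token per pass
_, multi_passes = decode(prompt, 16, n_heads=4)   # four tokens per pass
print(single_passes, multi_passes)  # 16 4
```

The point is only that the number of forward passes — the dominant cost in generation — drops by roughly a factor of n when n tokens are predicted at once, which is where the speed and energy savings come from.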
And here we have today a model, open source, that literally anybody can implement into their tools, that has achieved human level in diplomacy, which means it can negotiate pretty damn well. And that's probably just version one of this model. Now that it's been released as open source, a lot more people will be able to build on top of it. On the negative side of news from Meta: Meta has been ordered by the Brazilian national data protection authority to stop using personal data from Facebook and Instagram users in Brazil to train its AI models. The authority is basically claiming that Meta hasn't made it clear to users that their data is being used for AI training, and that Meta is now facing a potential daily fine of $10,000 if they don't fix the problem. Now, that sounds like a lot of money, but the reality is, if you pay $10,000 a day over a year, that's only about $3.6 million, which, compared to the amounts that are being thrown around and invested in model training, is pretty negligible. That being said, Meta obviously stated that it's committed to protecting user privacy and that it's working with the agency to address its concerns. But this is obviously not the first time we're hearing allegations about misuse of training data. Another company that is still in the news for the same issue is Perplexity. I shared with you in the last few weeks that Perplexity is facing a lot of fire when it comes to plagiarizing articles word for word from multiple sources, that there's a lawsuit against them right now, and that several news organizations are now claiming the same. They're claiming that Perplexity's search results often include verbatim passages from their articles, word for word, without linking back to the original sources. One outlet even reported instances where an entire article was reproduced in Perplexity's search results instead of a summary from multiple sources. That's obviously a serious problem with the model.
I don't think, in this particular case, that Perplexity will be able to just say, "we're trying our best to summarize information from multiple sources as much as we can," et cetera, which was their approach so far. I think it will come to the point where somebody will twist their arm to make some significant changes to the model in order to get it not to do that, or at least to link back to the original articles and give credit to the creators of the content. I must admit, I personally really like Perplexity. I like it even more now that it's connected to Claude 3 Opus and Claude 3.5 Sonnet, because you can do really amazing things with those capabilities while they're connected to the internet. So I really hope that Perplexity will get their act together and will solve these problems, so nothing bad will happen to this company and to the tools they make available to us. Now, the company that has probably been investing the largest amount of money and resources to solve the problem of training models on information they shouldn't train on is OpenAI. This week, OpenAI announced two additional partnerships with large content providers. One of them is Time magazine. OpenAI has entered into a multi-year partnership with Time to improve its ChatGPT models, which means they'll be able to train on almost a century of journalistic content that Time has created over its existence. OpenAI is claiming that this will dramatically improve ChatGPT's ability to provide accurate, up-to-date, unbiased information on a wide range of topics because of the depth of information that is available through the Time partnership. Now, one of the things they expect is that the model will have a much deeper understanding of current and historical events and their context, including diverse perspectives, because of the type of content that Time magazine has produced through the years and is still producing.
The partnership also includes the development of AI-powered models and tools to assist Time in developing and sharing its content, which is obviously aimed at enhancing reader engagement and content discovery across the Time platform. So just like all the previous partnerships, the goal here is to assist both companies in gaining benefits through this process. The other company that OpenAI signed a deal with is News Corp, and it's a similar deal. Obviously, this will give OpenAI access to all of News Corp's news outlets, which is a pretty long list, and it will provide News Corp employees with tools and actual capabilities to enhance their offering and to make their processes more efficient. This is a trend that we'll see continuing. As you probably know, news outlets have been facing some serious issues in the past few years with competition from social media and other ways that people consume content, which means they have smaller and smaller budgets and less capability to actually do what they do, and there's been a significant decline in ad revenue. Now these new partnerships may provide the lifeline they need in order to survive in this new world, where people are going to discover the information they want through AI, and the outlets may just make their money through these deals. Staying on OpenAI: Apple has secured an observer seat on OpenAI's board. As I shared with you a few weeks ago, Apple made a huge announcement and finally released its own AI capabilities, and part of it involves different levels of partnership with OpenAI. There have been conversations that they may form similar partnerships with other companies, but this has not materialized yet. But the fact that they now got an observer seat on the OpenAI board is a huge sign that this partnership is probably going to be a lot deeper than it seemed, at least in the first release by Apple.
I don't think that OpenAI would have given Apple access to its deepest strategic conversations without a very clear benefit on the other side. No details were shared on exactly what this future partnership might be, but again, just the fact that they now have a seat on the board is significant enough to say that this partnership will go beyond the potential usage of ChatGPT on Apple's devices. Now to some, as I mentioned in the beginning, negative news from OpenAI on security topics. There have been two significant security issues with ChatGPT and OpenAI this past week. The first incident involved a data breach that exposed personal information of some of OpenAI's customers and its employees. The company has not disclosed the extent of the breach or the number of individuals affected, but it has stated that it is working to notify those particular individuals and to enhance its security measures so this does not happen again. In a separate incident, a security researcher discovered a vulnerability in OpenAI's API that could have allowed unauthorized access to sensitive AI models and data. In this incident as well, OpenAI quickly patched the vulnerability and stated that it has no evidence of any malicious exploitation of this vulnerability, but it has been there. In another interesting incident, probably less extreme in its potential outcome, but still not good news for OpenAI, a random conversation with ChatGPT that started with just saying "hi" and hitting enter led to ChatGPT revealing all of its secret instructions and guidelines, basically the operating rules under which it operates. Some of the key guidelines include avoiding the generation of harmful, illegal, or explicit content, respecting intellectual property rights, and maintaining political neutrality. Instructions on how DALL-E works and so on also became available. So if you want, I will share a link in the show notes.
You can go and see some of the information that was released. But the topic I want to focus on is not what was in the content, but the fact that the content was easily accessible just through a random conversation with ChatGPT, which, again, means a very low level of security. Now, to add on top of all of that, in another security problem this week, a security researcher discovered that OpenAI's ChatGPT Mac app, which was released just about a month ago, was storing user conversations in plain text on the local machine. Meaning, the data was not encrypted and was accessible to anyone who had access to your machine. So if your Mac was stolen or accessed by other people, they could see every single detail of your conversations with ChatGPT and all the information you had uploaded to it. This is obviously complete negligence when it comes to storing potentially sensitive information in a non-encrypted way. Same kind of thing here: OpenAI has patched it up, and now the information is encrypted, and you can even choose in the app not to save your data at all, but the fact that it happened just shows how little concern there is at OpenAI for security issues. Now, on top of all of that, a news article by the New York Times has revealed that a hacker snatched details about OpenAI's technology early last year. Apparently this hacker was able to get sensitive information from an internal discussion forum where OpenAI employees were chatting about how these models should work and so on; I assume some kind of a Slack channel. Now, this was reported internally at OpenAI to the employees at an all-hands meeting in April 2023, and also reported to the board, but it was not released to the public, because the board was against it at the time.
And now the New York Times has sources saying that OpenAI has expressed serious concern about potentially China-based or other adversaries getting in and stealing the company's AI secrets, which could pose a national security threat, nothing short of that. If you have listened to the interview with Leopold Aschenbrenner from a few weeks ago, you would know exactly what these people are talking about. Leopold has been an employee of OpenAI who released his manifesto on the future of AI and has also done a bunch of interviews, and one of his biggest concerns is exactly that. He is loudly saying that OpenAI does not have the right security levels in place to protect it from business espionage, and even worse, from national espionage by countries like China, which could, in his view, easily steal secrets that could be a threat to the national security of the United States. So my overall take on all of this is that OpenAI, and probably all the other top players in the AI market, so that's Anthropic and Google and so on, are probably not investing enough in protecting their secrets and protecting the data that is being pushed through these platforms by, right now, everybody in the world, whether individuals or companies. This is not good news, as I mentioned, both in terms of data security and from a national security perspective. As we shared a few weeks ago, OpenAI has put the previous head of the NSA on its board, so maybe that's their move to start making their data more secure, but right now it looks like a joke, and that joke puts data and potentially national safety at risk. What does that mean for you and your data and your company's data? It just means you need to think twice before you share sensitive information on any of these platforms. The biggest concern so far was the fact that, oh, they might be training on our data, but now we're learning that may not be the only concern, because that data may be vulnerable to access by third parties.
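To illustrate the difference between what the Mac app was doing (conversations in plain text on disk) and encryption at rest, here is a deliberately minimal toy in Python. It is a one-time-pad-style XOR, purely to show the concept; it is not OpenAI's actual fix, and real applications should use a vetted crypto library plus the OS keychain rather than anything hand-rolled:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))

conversation = b"user: summarize my quarterly financials..."
key = secrets.token_bytes(len(conversation))  # random key, stored separately

ciphertext = xor_bytes(conversation, key)          # what should hit the disk
assert ciphertext != conversation                  # unreadable without the key
assert xor_bytes(ciphertext, key) == conversation  # recoverable with the key
```

Anyone who reads only the ciphertext file learns nothing about the conversation; that is exactly the property the plain-text chat logs lacked.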
And from bad news about OpenAI to some exciting news about Anthropic. Anthropic has made some big and really significant positive releases in the past few weeks, and actually in the last few months, if you go back to the release of Claude 3. This week they made two new announcements. One is a new initiative aimed at addressing one of the biggest challenges in AI, which is bias and fairness. Anthropic has launched a new benchmark they call the Anthropic AI Fairness Benchmark, which is a comprehensive set of tools and resources designed to help developers and researchers measure and mitigate the biases of their models. It includes a variety of datasets, evaluation metrics, and best practices that allow users to assess the fairness and biases of models across different domains and applications. Now, they went even beyond that, and this initiative also includes funding to support researchers and startups focused on developing techniques to detect and mitigate biases in these models. Anthropic is partnering with leading academic institutions and industry experts in order to refine these benchmarks and make them more and more effective for whoever wants to use them. This is obviously a good move in the right direction that will hopefully enable industry-wide progress on that very important topic. The second announcement this week is the launch of Claude Engineer, an interactive command-line interface, also known as a CLI, that uses the Claude 3.5 Sonnet model to assist in software development tasks. Claude Engineer is designed to help developers streamline the entire coding process, from providing intelligent suggestions to code completions and debugging capabilities for existing code. And this works with natural language, just like all the other similar tools.
So in your own language you can describe the functionality or the problem you're facing with the code, and the AI will generate the relevant code snippets or will provide you with step-by-step guidance on how to troubleshoot your existing code. Like a lot of these other tools, it supports a wide range of programming languages and frameworks, which makes the tool versatile and relevant to many developers. And let's continue on this topic of code generators and programming assistants. CodeStory is an AI research company focused on software development. They have just announced a new framework they call the Aide framework, which is achieving state-of-the-art performance on the SWE-bench lite benchmark, the benchmark most used to rank the capabilities of these AI code generators. Now, the interesting thing about the Aide framework is that it is a multi-agent collaboration infrastructure. Basically, what they've done is build multiple agents, each of which specializes in one aspect of the development process, such as understanding the requirements, writing code, checking the code, debugging, and so on. Using this approach, they claim they can achieve significantly better results than any of the other code generation platforms out there today. Now, we have seen this multi-agent approach in other places as well, and there's very little doubt in my mind, and I think in anybody else's in this industry, that this is the direction everything is going: every task we want to do will be divided across multiple agents by a manager agent, which will send the relevant segments of the task to different smaller, more specialized agents, and will then collect the results together to build the final outcome. This is just a very solid example of that from the coding and software development world. Staying on the same topic, let's talk about a US-based startup called Magic.
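Before that, a quick aside: the manager-plus-specialists pattern just described can be sketched in a few lines of Python. The agent names and the string-based "work" here are entirely my own invention for illustration — this is not CodeStory's actual framework or API:

```python
from typing import Callable

# Each specialist agent is modeled as a function from a task description
# to a result. Real agents would each wrap an LLM call with its own prompt.
def requirements_agent(task: str) -> str:
    return f"requirements({task})"

def coding_agent(task: str) -> str:
    return f"code({task})"

def review_agent(task: str) -> str:
    return f"review({task})"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "understand": requirements_agent,
    "write": coding_agent,
    "check": review_agent,
}

def manager(task: str) -> list[str]:
    """Route the task through each specialist in order, collect results."""
    return [agent(task) for agent in SPECIALISTS.values()]

print(manager("add login endpoint"))
```

The manager is the only component that sees the whole task; each specialist gets a narrow job it is tuned for, which is the claimed source of the quality gains.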
Magic is in the same field of creating a platform that assists developers in writing code and developing software with AI, and it is in discussions with its investors to raise $200 million at a potential valuation of $1.5 billion. Now, the interesting thing about this is that just this February, they raised money at a $500 million valuation. So they're talking about a 3x increase in valuation in just four months. The other interesting thing is that this company has just over 20 employees. So we're talking about a company, founded in 2022, with just over 20 employees, that is potentially going to be valued at $1.5 billion this year. One of the drivers of the huge growth in all these code generators is obviously the success of Microsoft's GitHub Copilot, which right now has 1.3 million paid subscribers and has driven a 40 percent year-over-year revenue increase for GitHub. That is clear validation that there is a solid commercial base for this growth in all these different code assistants, which in turn drives a lot of money into companies developing these kinds of solutions. Now, speaking of the amount of money being poured into AI startups: investors have poured over $27 billion into AI startups in just the first half of 2024, based on data from PitchBook. The interesting thing about this fact, beyond the crazy amount of money, is that it is happening while the venture capital funding world is actually in a downturn, falling 25 percent compared to the first half of 2023. So while VC money is shrinking everywhere else, a lot of it is going toward AI investments. Some of the big investments we know of are the $1.7 billion Series B of Anthropic, a $1.3 billion Series A investment in Inflection, Cohere's $1 billion Series C, and obviously xAI's $6 billion investment. So a big chunk of those $27 billion went to a relatively short list of companies.
But then there's a very long tail of companies that received various sums of money to develop AI capabilities. I've shared this with you in the past and I'm going to share it again: I think we are about to see maybe the biggest wipeout of VC money in history. I think the craze around investment in AI companies is justified, but I think the volume, and the type of companies that are getting this kind of money, is not. And to me, the biggest threat is that the bigger players keep adding features that entire companies have built as their whole business. We've seen multiple examples of that: a company develops amazing AI capabilities, just to find out that six months later OpenAI or Google or Meta comes out with a feature, either as a free release or as part of their existing paid platform, that does the same thing, literally wiping out entire companies that have been working very hard to develop that as the entire solution their company was about to provide. I really think there's going to be a very serious correction in the results that VC firms are going to see from mostly the smaller AI companies they've invested in. And now I want to talk about one specific company that is in the process of potentially raising a lot of money at a very high valuation, and that made an interesting release this week, and that is Runway. Apparently Runway is in talks with General Atlantic about a potential fundraising round that could value the company at $4 billion, according to an article in The Information. For those of you who don't know Runway: it was founded in 2018, and it had one of the most advanced tools to create videos from text. That was true until Sora came out. Sora, which came out earlier this year from OpenAI, blew everybody's mind with its capabilities, but it was never released to the public.
And as I shared with you last week, Runway Gen 3 is coming really close to Sora's capabilities and is now available to the public as of this week. So when I shared with you last week that there had been demos of it, it was still in early stages, but now it's available to anyone who has a Runway subscription, which starts at $16 per month. So for $16 per month, you can generate really impressive, highly realistic videos, starting from either text or text and an image, and get amazing results. These are still not at Sora's level, not in terms of resolution or consistency or the length of the video, but in my eyes it's the best model available to the public right now. So if you are in the world of video generation, or have the need to generate unique videos that are relatively short, which can obviously be stitched together through editing, this is probably your number one option right now. And since we shared some news on the creative side of AI, let's move to ElevenLabs. ElevenLabs is another company whose products I really like. They have voice synthesis capabilities that are probably the best out there right now, at least until OpenAI releases the voice capabilities of GPT-4o. By the way, that's another potential example of how a company that has been around and doing amazing things might get hit significantly once ChatGPT releases that feature. But for now, let's focus on ElevenLabs for a second. They just released a new tool called Voice Isolator, and what it does is use machine learning algorithms to separate human voices from background noise, which allows you to clean up recordings and so on. This is obviously a very powerful tool for anybody who generates content, like me, so I can take a recording that had whatever background noise in it and strip the background noise out very easily, and for free. The only company that had a tool like that so far was Adobe, which, again, offered it as a free tool, and now ElevenLabs has put out a tool that will compete with it.
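ElevenLabs hasn't published how Voice Isolator works, but the classical baseline for this kind of cleanup is spectral gating: estimate the noise floor per frequency, then suppress anything that stays below it. Here is a minimal NumPy sketch of that idea, with made-up parameters and a synthetic signal; modern tools use learned models that far outperform this:

```python
import numpy as np

def spectral_gate(signal, frame=256, noise_frames=8, factor=2.0):
    """Crude spectral gating: estimate a per-frequency noise floor from the
    first `noise_frames` frames (assumed noise-only) and zero every frequency
    bin whose magnitude stays below `factor` times that floor."""
    n = len(signal) // frame * frame
    frames = signal[:n].reshape(-1, frame)
    spec = np.fft.rfft(frames, axis=1)
    mag = np.abs(spec)
    floor = mag[:noise_frames].mean(axis=0)       # noise estimate per bin
    mask = mag > factor * floor                   # keep only strong bins
    cleaned = np.fft.irfft(spec * mask, n=frame, axis=1)
    return cleaned.reshape(-1)

# Demo: one second of pure noise, then a tone buried in the same noise.
rng = np.random.default_rng(0)
sr = 8000
noise = 0.05 * rng.standard_normal(2 * sr)
tone = np.zeros(2 * sr)
tone[sr:] = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
noisy = noise + tone

clean = spectral_gate(noisy)
rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
print(rms(clean[:sr]) < rms(noisy[:sr]))  # noise-only region got quieter
print(rms(clean[sr:]) > 0.3)              # the voice-like tone survived
```

Drag-and-drop tools do the same job conceptually: attenuate what looks like the noise floor, preserve what looks like speech, just with learned rather than hand-tuned decisions.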
So if you have any recordings and you want to clean them up just by dragging and dropping the recording into a tool and getting a much better outcome on the other end, you now have choices. That's it for this week's news. We will be back on Tuesday with another fascinating interview, diving into a how to do something with AI that can help you in your business. If you're listening to this podcast before Monday, July 8th, you still have the opportunity to join our AI business transformation course that is starting this Monday. If that's something that's interesting to you, check out the link in the show notes until next time have an amazing weekend.