Leveraging AI

85 | GPT-4 is the dumbest model any of you will ever have to use! ChatGPT now has long-term memory and soon "search results", Anthropic has a "Teams" plan, Amazon releases Amazon Q for AWS, and more news for the week ending on May 4th

May 04, 2024 Isar Meitis Season 1 Episode 85

This week's episode of Leveraging AI dives into the latest in AI, from groundbreaking updates and controversial legal battles to strategic partnerships that could reshape how we interact with technology.

But what's really at stake with these advancements?

How can you leverage these developments in your business or career?

In this episode, you'll learn:

  • ChatGPT’s Long-Term Memory: How it works and what it means for Plus users outside Europe and South Korea.
  • Legal Tensions: Insights into the copyright lawsuits facing OpenAI and the implications for AI content creators.
  • Strategic Partnerships: The significance of OpenAI's deal with the Financial Times and what it means for AI-driven content attribution.
  • Emerging Tools and Platforms: From Amazon's new enterprise AI services to Anthropic's team solutions—what you need to know.
  • Software Development Revolution: How GitHub and other platforms are making coding more accessible and integrated.

Isar brings a blend of deep industry knowledge and accessible insights to complex topics, helping professionals and enthusiasts alike stay ahead of rapid technological changes. 

Subscribe to ensure you never miss an episode, and share this podcast with colleagues who can benefit from staying on top of AI trends. Prepare your business for the future of AI by tuning in now!

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Transcript

Hello and welcome to a weekend news edition of the Leveraging AI podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and this is another jam-packed week of AI news. I want to start this week with a quote before we dive into the news: "ChatGPT is mildly embarrassing at best. GPT-4 is the dumbest model any of you will ever have to use again, by a lot." This quote comes from none other than OpenAI CEO Sam Altman at a conference this past week. He's obviously talking about what he sees coming, even in the near future, because OpenAI is expected to release GPT-5 this summer, which, based on these statements from Sam and others at OpenAI, is going to be a huge step forward. Some say even bigger than the step from GPT-3.5 to GPT-4. So there's a lot to look forward to this summer. And now let's dive into all the news from this week.

We'll start with a lot of news from OpenAI, and not just the quote from the beginning. If you're a regular ChatGPT user with a Plus account, meaning you're paying the 20 bucks a month, you have probably noticed a message about a memory feature that is now available. This memory feature was announced a while back, but OpenAI has finally rolled it out to everybody. ChatGPT now has "long-term memory," meaning it remembers user-specific information between different chats. Until now, the memory of these models was limited to a single chat, with nothing carried over between chats other than the custom instructions in ChatGPT, which let you manually add information. The new memory feature allows ChatGPT to remember things about you, your business, your environment, your customers, and so on, on its own. It also lets users manage what information is saved, including deleting things they don't want it to remember.
And obviously, for privacy reasons, if you want to opt out completely, you can turn the feature off in your settings, or, as I said, you can delete specific memories you don't want it to keep. The feature was rolled out to all Plus members except those in Europe and South Korea. So if you're there, I apologize; it's not coming yet, probably due to regulatory issues, and it's expected to roll out to the Teams and Enterprise plans in the immediate future.

In addition to that, ChatGPT just updated its data controls for Free and Plus users. Until now, if you did not want ChatGPT to train on your data, you had to turn off data collection entirely, which meant you lost your chat history as well. That limitation has now been removed. I'm quoting from OpenAI's release: "Now you can access your chat history regardless of whether you're opted into training or not for model improvement. If you've previously opted out, your choice will remain available on web today and on mobile soon." Basically, you can now keep your history while opting out of ChatGPT training on your data, which is obviously awesome.

Some negative news about OpenAI this week: eight prominent U.S. newspapers owned by Alden Global, including the Denver Post, the Chicago Tribune, the New York Daily News, and several others, are suing OpenAI and Microsoft in New York, accusing them of copyright infringement. In addition to copyright infringement, they are also claiming reputational damages caused by AI hallucinating and fabricating answers that are somehow connected to, or mention, their newspapers while not giving factual information to readers. This is not the first lawsuit of its kind that OpenAI and Microsoft are dealing with; a similar lawsuit was filed recently by the New York Times, so this probably just adds weight to those claims.
That being said, this case is not going to play out much differently than the earlier ones, so whatever the court decides in one is probably going to be the fate of the other. There are really two main ways this can go. One is that OpenAI and Microsoft pay a really big check to these companies to compensate them, and a mechanism is worked out for compensating them going forward, or perhaps for preventing the use of their data. The other is that the court decides that what OpenAI is doing is fair use, meaning just like you and I can read the newspaper and then write our thoughts and comments based on what we've read. I can read the newspaper and then write an email, or whatever, with my thoughts about it, which is not cutting and pasting from the news but actually creating new content based on the information I've learned. That is what the large language model companies are claiming they do.

On the exact flip side of that news, OpenAI signed another agreement with a big publisher for its content, and this time it's a huge one: the Financial Times. In addition to the fact that OpenAI can now train on the Financial Times' data, this particular agreement goes further: it explicitly states that ChatGPT will provide attributed content, including summaries, quotes, and links to Financial Times articles.
This is obviously a new and interesting development. It means that people who search for such information in ChatGPT will get real-time data from the Financial Times, and the publisher benefits because people who want to learn more will be able to navigate to the articles on the Financial Times site. This is basically a new kind of SEO: instead of going through a search engine, you're going through a large language model, but still getting links back to the source articles.

In a very interesting and related development, OpenAI filed a change to its SSL registration adding a new subdomain, search.chatgpt.com, which has led a lot of people to assume they're planning some kind of search engine version of ChatGPT that will provide a summary of results along with links to the originating articles and source data, similar to what Google Gemini is doing in Google Search and what Perplexity is doing in its engine. That means they're aiming straight at the core business of Google. There are also rumors that OpenAI is going to make a big announcement on May 9th, and many people think it's not GPT-5 but rather this new search functionality coming to ChatGPT. This intensifies the crazy battle over AI and the future of search, finding, and researching information, which is definitely going to be very different from what we're used to.

Staying on the topic of GPT-5, or whatever OpenAI releases next: I've shared with you several times that there's a platform called Chatbot Arena by LMSYS that lets anybody enter a prompt and get a side-by-side comparison from two models without knowing which they are, basically a blind test. You then tell the system which response you like better.
This creates a ranking board of the best-performing AIs, and a new mystery model showed up there this week called gpt2-chatbot. Nobody knows exactly where it came from; the rumor is that it's a new version of ChatGPT. Why they would call it gpt2, nobody knows. It's actually performing as well as GPT-4, and in some cases better, but it's not dramatically better in the way we'd anticipate from all the rumors about GPT-5, including the quote I shared at the beginning of this episode. So some people think OpenAI might be considering either an update to GPT-4, or maybe a GPT-4.5 as an intermediate step before GPT-5. Sam Altman himself regularly says that releasing a lot of in-between versions is something they believe in, in order to make humanity ready for what's coming next. So instead of the quantum leap that GPT-5 might be, they may release GPT-4.5 to give us some additional functionality and close some of the gap, and then release GPT-5 later this year. I obviously don't know; this is all speculation, and I'm just sharing what's happening.

Switching gears from ChatGPT to Claude. Anthropic, the company behind Claude, which I actually like a lot (I find myself using Claude probably more than ChatGPT for most regular chats; I use ChatGPT mostly for GPTs I've developed, and more and more of my regular chats happen with Claude and Perplexity, depending on the use case), just released a Teams version of Claude. Similar to the Teams plan in ChatGPT, it includes sharing of information across different people in the organization, including the ability to use a shared database as a data source for chats, plus some additional administrative tools.
The cost is the same if you're paying month to month: 30 bucks a month. But if you buy an annual plan on the OpenAI ChatGPT side, you can get a $5-a-month discount. The functionality is probably relatively similar between the two, and I expect it to keep growing as they develop more capabilities. This is obviously aimed at organizations that don't want to just buy regular licenses, but want more control as well as collaboration between users of the system.

And to add more oil to this fire, Amazon just released Amazon Q, its enterprise AI chatbot. It runs on top of Amazon Web Services and is geared toward enterprises that want to build chatbots for several different functions of the business. They released three variations: Amazon Q Developer, Amazon Q Business, and Amazon Q Apps. Amazon Q Developer is obviously built for software development, assisting developers in tasks such as testing and upgrading application code and troubleshooting AWS resources, so anything from the very basics all the way to the infrastructure side of code development. Amazon Q Business is geared toward analyzing information from various sources across the enterprise, providing answers to people in the business across multiple types of data, including summarizing information, creating reports, preparing presentations, and so on. And Amazon Q Apps enables users to create dedicated generative AI apps, probably very similar to GPTs in ChatGPT or the copilots you can create in Microsoft Copilot Studio. So again, this definitely puts a lot of heat into the competition among enterprise-focused generative AI offerings, and this is not going to stop; it's just going to keep intensifying, with Microsoft adding more and more capabilities almost on a weekly basis.
Google is doing the same for users of its platform, so it's not a surprising move by Amazon. But it will be very interesting to see comparisons from companies that run all three, because many enterprises actually have some of their business on Azure, some on AWS, and some on Google Cloud. On one hand, it will be interesting to see what kind of data you can run across them, which I assume won't be possible in the beginning. On the other hand, I think we'll start seeing companies share the pros and cons and more detailed comparisons between these three platforms.

Since we mentioned Amazon Q Developer, let's stay on the topic of writing and releasing software with AI tools. GitHub, the giant company that holds a huge amount of the world's code and hosts it for many different organizations, just released Copilot Workspace, an AI-powered developer environment and, I'm quoting, a "radically new way of building software." What they're basically doing is combining multiple tools they had before, plus some new tools they've developed, into one space called Copilot Workspace. It leverages different Copilot-powered agents to assist developers throughout the entire software development process, from brainstorming and planning to building, testing, and running the code, while still giving developers and development leads the ability to stop at any given point, look at the code, and edit it before they deploy and commit. Their vision is to push toward a world where GitHub has a billion users, meaning even people who are not software developers will be able to use this environment to create full software, and not just pieces and snippets of code like today. There are several companies pushing in that direction right now, and I think it's inevitable.
That's the direction this is going. On one hand, that's amazing, because it will democratize the creation of software. On the other hand, it's probably scary to a lot of people who have this as their profession, or to software development houses that may be less needed as a service by more and more organizations. I definitely see a future where something like the App Store is dramatically different: instead of looking for an app that does what you want, you will ask for the features and capabilities you need, and it will create the app for you right there and then. So it's not going to have 137 features; it's going to have the three features you need, tailored to exactly what you need. And it will be just your application, which other people can also use if you describe the use case and share it, similar to what we have with GPTs today, just significantly more advanced and capable. I think that's where we're going.

Another big player in the AI world, but from a different arena: Midjourney, which is still the most capable image generator, just took another step toward moving all its users from its Discord server to its website. Late last year, Midjourney finally released a website, which was a change, because everybody had been using it on a Discord server. But to use the website, you had to be a user who had generated more than a thousand images. I'm not a very heavy user of Midjourney, but I probably create images several times a week, and I've created about 850 images so far, so a thousand images late last year was a lot; still, a lot of people got access. Now they're rolling out website access to anybody who has created more than a hundred images, which is a huge number of people, and a threshold you can reach relatively quickly. The user interface is obviously a lot cleaner than using Discord.
It also exposes the different parameters in the user interface, versus typing hyphenated flags into the prompt, so for every single image you can change the settings for stylization, weirdness, aspect ratio, and all the other things you could previously only do in the prompt in the Discord version. So if you're creating images regularly, you can now log into midjourney.com with your regular username and password and start using Midjourney there. Much nicer, much cleaner user interface. I highly recommend it.

Going back to large language models: a new company that, at least for me, came out of nowhere, but has some very interesting founders from Google DeepMind, Baidu, and Meta, just released a new large language model. The company is called Reka (spelled R-E-K-A), and they call their model Reka Core. It's a multimodal language model that can process text, image, video, and audio inputs, and according to their own tests, it's as good as the latest releases from OpenAI, Anthropic, and Google. Third-party users have confirmed some of that: on some very specific tasks it really does outperform the big existing models, and on others, not yet. One thing it excelled at is visual capabilities, analyzing and reasoning about data in images, charts, and graphs; there it did very well, better than the other tools available. The biggest disadvantage of this model right now is that it has a really small context window, at least on its free version: only 4,000 tokens. To put that in perspective, GPT-4 Turbo has a 128,000-token context window, Claude 3 has 200,000 tokens, and Gemini 1.5 Pro has a million tokens. So 4,000 is very little; it means you can't upload a lot of information.
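To make those context-window numbers a bit more tangible, here's a rough back-of-the-envelope sketch in Python. It uses the common approximation of about four characters of English text per token (an exact count would require each model's own tokenizer), so treat the output as illustrative only:

```python
# Rough token estimate: ~4 characters per token is a common rule of thumb
# for English text. An exact count requires the model's own tokenizer.
def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

# Context windows mentioned in this episode, in tokens.
context_windows = {
    "Reka Core (free tier)": 4_000,
    "GPT-4 Turbo": 128_000,
    "Claude 3": 200_000,
    "Gemini 1.5 Pro": 1_000_000,
}

for model, tokens in context_windows.items():
    # ~4 chars/token and ~1,800 characters per typical single-spaced page
    pages = tokens * 4 / 1800
    print(f"{model}: ~{tokens:,} tokens, roughly {pages:,.0f} pages of text")
```

By this crude estimate, a 4,000-token window holds fewer than ten pages of text, while a million-token window holds a couple of thousand, which is why the gap matters so much in practice.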
You can't write very long prompts, and you can't have very long chats. But I'm sure this is just the beginning, and over time they're going to ramp it up.

And if we're talking about context windows, something interesting came out this week: Google researchers released a paper titled "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention." It proposes a potential architecture that would allow transformer-based large language models to scale to infinitely long context windows. If this goes beyond the research-paper stage, it would remove the need to limit how much context we upload, and basically allow us to have an endless conversation with a chat while uploading an endless amount of information to it and still getting accurate answers. Time will tell whether this architecture, now in research, can actually be applied in real life.

And speaking of developments in model capabilities, NVIDIA CEO Jensen Huang just personally delivered the world's first DGX H200 server, the fastest, most capable GPU system for training and running large language models, to Sam Altman and Greg Brockman at OpenAI. This has an interesting homage aspect to it, because Huang himself also delivered the first version of NVIDIA's AI-focused systems to Elon Musk, then a co-founder of OpenAI, back in 2016. So history kind of repeats itself: as you probably know, there's a lot of beef now between OpenAI and Elon Musk, who is no longer there and is running xAI, yet Huang hand-delivered the first system to OpenAI in 2016 and now hand-delivers the first H200 to OpenAI as well.

One more interesting piece of news: if you've been following Sora, the really incredible video platform from OpenAI that is not yet available to the public.
There are two interesting pieces of news about it this week. One is a music video for a song that is really long, almost four minutes: a trippy run-through, fly-through that keeps changing environments in a really cool and interesting way, created, presumably at least, with Sora. It's the first of its kind; just look it up on Google, and I will put a link to it in the show notes as well. It's really cool, really interesting, and really unique, and it's amazing if it was truly created entirely with Sora.

The second piece of news concerns one of the most famous videos to come out of Sora: the balloon-head story, generated by a Canadian production studio called Shy Kids that got early access to Sora. If you haven't watched it, it's about a person who has a balloon as his head; it's not perfectly connected to his body, but it is his head. They initially said it was created with Sora, and now we've learned in an interview with one of their leaders that yes, it was created with Sora, but it then required some traditional editing techniques to really produce the video we all saw and liked, mostly to fix consistency issues, which are still common even in the Sora environment. They say that instead of trying to write one very detailed, long prompt, they actually approached it like traditional video production: they wrote a lot of small prompts for different segments of the video, then edited them and fixed them for consistency, as they would have done with regular footage. They mentioned two things that were inconsistent: sometimes the person's head would appear inside the balloon, when the whole point is that the person has no head and the balloon is consistently his head; and the color of the balloon kept changing between scenes, which they had to fix as well.

What does that tell us? Not much, because Sora is still in beta and OpenAI is still working on it. But I think what it does tell us is that this new world will still require some traditional post-production editing; only the production side will happen on a computer instead of with cameras, lighting, actors, and so on, while the post-production stays somewhat the same.

That's it for this week. This coming Tuesday there's a unique episode: a solo episode of me talking about various ways you can train people in your organization to use AI effectively. I've been doing this since April of last year with multiple organizations in different industries, and in this coming episode I'm going to share the approaches I've narrowed down to that are providing a lot of value to different organizations, so you can have food for thought on what would be most relevant for you and your team. Until then, have an amazing weekend.