Digital Works Podcast

Episode 038 - Ash & Katie, Bytes #4 - exploring 50 years of the internet, the ChatGPT store (and associated soap opera), the death of the social internet, and a polemic about SEO

Season 1 Episode 38

The fourth in our new series, Bytes, where Ash and Katie discuss 3 (actually 4) things from the latest Digital Works Newsletter.

In this episode we'll be discussing some of the links in the newsletter that went out on November 10th.

We talk about:

- Internet Artifacts, a navigable online museum of 50 years of internet history
- TechCrunch on OpenAI's GPT Store, which lets you build and monetize your own GPT (plus the associated OpenAI soap opera)
- Om Malik's article 'The Social Internet is Dead. Get Used to It.'
- 'The People Who Ruined the Internet' by Amanda Chicago Lewis in the Verge

You can sign up for the newsletter at thedigital.works.

Speaker 1:

Hello and welcome to the Digital Works podcast, the podcast about digital stuff in the cultural sector. My name's Ash, and this is episode four of Bytes, our regular short-form series where we look at three interesting things from our most recent Digital Works newsletter. You can sign up for the newsletter on our website, which you can find at thedigital.works. Joining me today, and for all episodes in this series, is the person who puts the Digital Works newsletter together, my colleague Katie. Hi, Katie. Hello. So today we're going to talk about some of the things that were in the newsletter you sent out on November the 10th, and I will put a link to that newsletter in the show notes.

Speaker 1:

And the three things, actually four things, that we've picked this time are: a collection of internet artifacts which you sent me; a TechCrunch article on OpenAI's GPT Store, which lets you build and monetize your own GPT; an article from Om Malik, who is a journalist, entrepreneur and venture capitalist, titled 'The Social Internet is Dead. Get Used to It.'; and, lastly, an article in the Verge from Amanda Chicago Lewis, titled 'The People Who Ruined the Internet'. So a cheery and upbeat episode awaits.

Speaker 2:

It's all good.

Speaker 1:

First up, you sent me a message saying, I sort of want to talk about this, and it is titled Internet Artifacts. It is a sort of navigable online museum, I suppose: a collection of digital artifacts spanning from 1977, at the sort of birth of the internet, through to today. There's loads of things in there: the first spam email, a reminder about Friendster, the first Wikipedia homepage, the first tweet. And for someone like me, and I imagine for you as well, who has been very engaged with the internet as it's evolved, it was an intriguing and interesting but also nostalgic journey through 50 years of technology history. What did you enjoy being reminded of?

Speaker 2:

Well, there's a few things which I'll flag in a second, but more generally it makes you realise, or reminds you, how fast things change, and how things that at the time were 'oh wow, that's so cool' now look so old-fashioned. That's self-evidently obvious, but I think it's also useful as a reminder of how quickly things are going to carry on changing. In terms of things I particularly liked, there was one project that stood out. There was a guy called Alex Tew, and 2005 isn't that long ago in terms of this particular timeline, because, like you say, it spans about 50 years.

Speaker 2:

But in 2005 he set up this thing called the Million Dollar Homepage, and I actually sort of knew Alex through friends of friends, and it still amazes me that he did it. He sold, basically, a million pixels; each pixel was a dollar. The reason I think that's really fascinating is that you just could not do that now, for many, many reasons, and it reminded me, I guess philosophically, of what a different place the internet was, what, 18 years ago? And when we talk about 'The Social Internet is Dead' article, we'll obviously come back to that as an idea. But there's just so much nostalgia in there. Who doesn't love a bit of nostalgia, right?

Speaker 1:

I think it is also really interesting, and telling, to, as you say, look back at things from ten years ago that now look archaic, almost like a joke. You think: this was the homepage of what is now one of the most valuable companies in the world, and they thought this was good. And I think it just underlines that thing we've talked about before, that the pace of change is not slowing down.

Speaker 2:

No.

Speaker 1:

And it's something you have to get comfortable with, I think, if you're deciding to build a career in this space. This particular timeline does a really good job of showing that, from the late 70s onwards, the rate of progress has been steep and ever-increasing. And that's not to try and sound alarmist, because I think so many people who work in digital roles are excited by change, intrigued by change, and excited to see what comes next. But this does a really good job of compressing 50 years into a navigable interface and demonstrating just how far we've come.

Speaker 2:

Yeah, 100%. I'm kind of curious why it doesn't go past 2007, but that's a question for another day.

Speaker 1:

Next up is an article in TechCrunch titled 'App Store for AI: OpenAI's GPT Store lets you build and monetize your own GPT'. This is an article from a few weeks ago now, and since it was published, and since you sent out the last newsletter, there have been all sorts of shenanigans over at OpenAI, so maybe we'll touch on that a little bit. But focusing first on this article: what's it telling us? What is this store that OpenAI have launched?

Speaker 2:

So everybody will probably be familiar with ChatGPT, the large language model AI tool that you can go to and ask to do things for you. What they are talking about here, and it is confusing because they're calling them GPTs, are essentially custom versions of ChatGPT that you can create yourself. And then, well, it's not just an ambition, because it exists, but not everybody can access it yet: they intend to have a store, like the App Store. So you could create a custom GPT that helped people make the best vegetarian chili in the world, and then you could put that onto the store and monetize it.

Speaker 2:

There's an article, which I'll share in the next Digital Works, from a developer called Simon Willison, who has written a really good post where he has basically created a bunch of them. Some of them are technical, to do with coding, and some of them are just really silly. Like one of them: you put in a photo and it adds a walrus to the photo. Obviously that is ridiculous, but it's there to demonstrate what you can do with it. So there's all sorts of potential, really interesting ways of using this, I think.

Speaker 1:

And am I right in saying that one of the features they're making available through this new structure is that, when you're creating your own GPT, you can train it on your own data sets?

Speaker 1:

So, for example, if you've got, I don't know, all the text messages on your phone, you could feed them into one of these GPTs and train it to text like you do. Again, I think that opens up some really interesting possibilities. But almost immediately there were security implications and ethical implications: people were able to access the data sets that these custom GPTs were trained on, and people didn't always have the right to use those data sets in these ways. It's really interesting. We just talked about the pace of change, but in this AI world the pace of change is ludicrous, things are changing almost by the hour, and the regulatory and ethical frameworks around this stuff feel like they're lagging behind. So I would sound a note of caution if you're excited to get in and start having a play with some of these things: norms and expectations are still very much emerging, and I think you need to be sensible and thoughtful if you're going to be experimenting in this space.

Speaker 2:

Yeah, 100 percent. There's a really good thread on the Museums Computer Group email list about AI policies and guidelines. Again, I can include that in the next Digital Works. It's organizations thinking about the ethics, the legalities and the practicalities of using these sorts of tools.

Speaker 1:

Yeah, and more widely, people like Rachel Coldicutt are imploring the government to have a more thoughtful conversation around this. I saw recently that Rachel said the one area she would never deploy AI into is decision-making where context is vital and it's not a binary or clear-cut thing, such as safeguarding children, or asylum cases.

Speaker 1:

Then today, I think, I saw a UK MP saying we're going to try and push AI into all of these decision-making processes and reduce the size of the civil service. It's an exciting time, but it's also a scary time, because technology is being embraced without anyone really being thoughtful about how it works, or honest enough, perhaps, about its shortcomings. Let's just very briefly talk about the soap opera at OpenAI at the moment. I think it was over the weekend that Sam Altman, who was the CEO of OpenAI, was fired. Or did he quit?

Speaker 2:

He was fired.

Speaker 1:

He was fired by the board for not being candid enough with them, they said, in a very mealy-mouthed statement. This story is developing by the hour, and Kara Swisher is quite a good journalist to follow if you want to be kept up to date on it. But what's going on here, Katie? Summarize it for us.

Speaker 2:

Who knows? It's very, very strange, and by all accounts Sam Altman is brilliant. I mean, he's one of the founders, he was apparently very, very well liked, and all the subsequent debacle over the weekend seemed to back that up. There was a huge open letter signed by lots of staff saying that they were all going to leave too. There was another guy, I forget his first name, but Brockman, who was previously on the board; I think he was the president, and he left as well.

Speaker 2:

Subsequently, Microsoft, who are one of the big investors in OpenAI, have apparently got involved and are sort of calling for some of the board to step down. Usually when a CEO leaves, it's described as them leaving to pursue other opportunities or to spend more time with their family, but it was very odd on Friday when he went, because the statement that the board put out basically said that he hadn't been entirely candid in his communications with them. Obviously that was a fairly shocking thing to put out publicly. And at some point there was a suggestion that he might even come back, but I think that's gone by the wayside now. So who knows. The reason all of this is important, by the way, isn't just that it's quite interesting from a gossip perspective: this company is absolutely changing, and will change, the face of the internet, and so who runs it, and the transparency and governance around that, is really, really critical.

Speaker 1:

Yeah, because there is a version of this scenario where you could see it as good governance. OpenAI was set up as a non-profit, primarily a research-focused company trying to achieve AGI, artificial general intelligence, as they call it. And Sam Altman, with the launch of this store and other initiatives (he'd apparently been exploring a hardware device as well), had been on very commercially focused endeavours. So there's a version where you could say this was actually the board removing a CEO who was diverging from the mission of the company. But the way it's been handled is just a mess.

Speaker 2:

Also, it's odd, because Sam Altman was someone who was lobbying for greater regulation around AI, for governments to take it seriously. So yeah, I'm sure it will all come out at some point.

Speaker 1:

I'm sure there will be an Apple TV+ drama with a star-studded cast in the works within 12 months.

Speaker 2:

Yeah.

Speaker 1:

Next up is an article that I also came across a couple of weeks ago, from Om Malik, who is a journalist, entrepreneur and venture capitalist, and it's titled 'The Social Internet is Dead. Get Used to It.' I just want to read out an excerpt from it, and I'm keen to get your thoughts on it. He says: the social internet began as a place to forge 'friendships', in quotation marks, and engage in 'social interactions', again in quotation marks. It performed its role as intended until companies needed to generate profit. By then, we were all hooked on the likes, hearts, retweets and followers, and the boost they gave to our egos. What is Om saying in this article?

Speaker 2:

So I think he is very neatly articulating the journey that social media has gone on from, let's say, 2005, 2006, 2007 until now, and how much it has changed. I don't think what he's saying here is a new idea, I've seen it expressed elsewhere, but it is very well articulated: what social media was has changed to such an extent that it really no longer performs in the way it was originally intended to, which is partially down to the business models that these companies, like Meta and so on, have. And I think I agree with him.

Speaker 1:

Sadly, yeah. And it is interesting to reflect on, because social media, when it came along, was a new articulation of network effects. You built a social graph by proactively making friends with people and interacting with the things they said and shared, and at scale that allowed technology companies to infer things about you and basically sell ads against your activity, because you were effectively telling them what you were interested in. We've shifted, and we've talked about this before, to a more algorithmically driven experience, where the user is not required to do as much work. The user isn't really required to make connections or make friends; they're only required to engage with the content that is put in front of them, and then, based on that engagement, the algorithm serves up more content that it deduces they will be interested in, to keep them engaged.

Speaker 1:

And again, it's a different way of selling ads to someone, and you are less reliant on the user building out that social graph for you to hang your ad tech off. That is quite a pronounced shift in terms of what social media was and what it now is. I don't necessarily think this is going to be the status quo forever and ever. I could see a world in which we move back to a more social-graph-driven thing, and I could also imagine a world in which there is another, as yet unformed, structural status quo that we end up in. And I think it's important for people listening to this to be mindful that the underlying assumptions, structures and dynamics that may be informing your strategy, and the way you spend advertising budgets, are constantly changing.

Speaker 2:

Yeah, 100%. I think platforms like TikTok have proved that you can absolutely create an app that gets huge engagement and interest with very little to no network graph in it. TikTok is deliberately built around content: your For You page, where everybody lands, is entirely made up of algorithmic content. It's nothing to do with who you follow, only with what's popular on TikTok.

Speaker 2:

And I think the other thing is there's a kind of misconception that younger people are all over social media. They are, but in a very specific way, and I do think that the Gen Z demographic are much more about using social media to engage with quite specific groups of people. That's not to say they're not doing the performative stuff and all the rest of it, but it's shifting and changing. So, to your point, absolutely: if you're running social media accounts, there's a risk that you just stay on a treadmill of posting stuff, posting stuff, posting stuff, as your reach and engagement go down and down. So taking a step back and thinking about these things is really important.

Speaker 1:

And lastly, we're going to look at an article in the Verge titled 'The People Who Ruined the Internet', by Amanda Chicago Lewis. It starts: as the public begins to believe Google isn't as useful anymore, what happens to the cottage industry of search engine optimization experts who struck content oil and smeared it all over the web? Well, they find a new way to get rich and keep the party going. What's Amanda saying in this article?

Speaker 2:

It's a long read.

Speaker 2:

It's slightly snarky, quite funny.

Speaker 2:

Within all of that, I think there were some real kernels of interesting commentary about how Google in particular has changed how we all use the internet, and what that has then meant for industries like SEO.

Speaker 2:

Ironically, and she sort of references this a little bit at the beginning, the title is actually a little bit misleading, because in the end the baddie, in inverted commas, as she sort of supposes, is Google, not really the SEOs. And the analogy she makes is that there's a reason why, in most countries, public libraries are exactly that, public institutions: if the management and distribution of information is controlled by a private business, there are always going to be problems with that, for many reasons, which she goes into. As a side note, it is also quite staggering how much money a lot of these SEO people have made over the years, which just underlines how search is everything now, right? We use it for everything, and so being on page one is still really, really critical. And it's only really AI that has even come close to knocking search off its perch as a source of information; latterly TikTok, for sure, but only in some demographics.

Speaker 1:

Yeah, I think it's difficult to overstate just how massive the impact of search has been on how we all think.

Speaker 1:

You know, we used to be focused on retaining information, and now people are terrible at retaining information.

Speaker 1:

They're very good at locating it and finding it, and that's because of search. And I think as well, and this comes back to the point we were talking about earlier, the fact that we've allowed this dominant technology to be entirely driven by private interests is, on reflection, perhaps not the most brilliant societal decision we allowed to take place. It is interesting that a lot of this stuff comes out of Silicon Valley and out of America, which has a very 'stand back and let the clever technology people sort everything out, and then maybe one day we'll put some laws around it' approach to things. But it does feel that with AI, governments have recognized that that approach is maybe not the best one to take, and there are some attempts, which I think you could judge fairly harshly, to try and shut the stable door before the horse bolts on some of these technologies, which are undoubtedly going to have a transformative effect on how we find, use and share information.

Speaker 2:

Yeah, I hope so. I just don't know. The pessimist in me says that capitalism always wins out, so we'll see, I guess.

Speaker 1:

So thank you, Katie. That was another episode of Bytes. This is still a new thing, so we are very interested in hearing any feedback, comments or questions. You can find us on LinkedIn these days, because Twitter/X is just a no-go zone; it's been interesting to watch how that has devolved over the last few months. Until next time, thanks for listening to this episode of Bytes. You can find all episodes of the podcast on our website at thedigital.works, where you can also find more information about our events and sign up to the newsletter. Our theme tune is 'Vienna Beat' by Blue Dot Sessions. And, last but not least, thanks to Mark Cotton for his editing support on this episode. See you again soon.
