DataTopics Unplugged: All Things Data, AI & Tech

#76 AI at what cost? Environmental toll, Trump vs AI regulation, creative impact, & poisoned text for AI scrapers.

DataTopics

Send us a text

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.

Dive into conversations that should flow as smoothly as your morning coffee (but don’t), where industry insights meet laid-back banter. Whether you’re a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let’s get into the heart of data, unplugged style!

This week, we dive into:

  • The creative future with AI: is generative AI helping or hurting creators? 
  • Environmental concerns of AI: the hidden costs of AI’s growing capabilities—how much energy do these models actually consume, and is it worth it?
  • AI copyright controversies: Mark Zuckerberg’s LLaMA model faces criticism for using copyrighted materials like content from the notorious LibGen database.
  • Trump vs. AI regulation: The former president repeals Biden’s AI executive order, creating a Wild West approach to AI development in the U.S. How will this impact innovation and global competition?
  • Search reimagined with Perplexity AI: A new era of search blending conversational AI and personalized data unification. Could this be the future of information retrieval?
  • Apple Intelligence on pause: Apple's AI-generated news alerts face a bumpy road. For more laughs, check out the dedicated subreddit AppleIntelligenceFail.
  • Rhai scripting for Rust: Empowering Rust developers with an intuitive embedded scripting language to make extensibility a breeze.
  • Poisoned text for scrapers: Exploring creative ways to protect web content from unauthorized scraping by AI systems.
  • The rise of the AI Data Engineer: Is this a new role in data science, or are we just rebranding existing skills?
Speaker 1:

Let's do it.

Speaker 2:

You have taste in a way that's meaningful to software people. Hello, I'm Bill Gates.

Speaker 3:

I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here, rust.

Speaker 2:

Rust.

Speaker 3:

This almost makes me happy that I didn't become a supermodel cooper and netics.

Speaker 1:

Well, I'm sorry guys, I uh, I don't know what's going on I need to speak to you today about large neural networks.

Speaker 4:

It's it's really an honor to be here rust data topic welcome to the data. Welcome to the Data Topics Podcast.

Speaker 4:

Hello and welcome to Data Topics Unplugged, your casual corner of the web where we discuss what's new in data every week, from apples to poison, everything goes. Check us out on YouTube, linkedin, feel free to leave your comment or question, or reach out to us and we'll try to get back to you. Today is the January 21st of 2025. My name is Murillo. I'll be hosting you today, joined by the somewhat sleepy Bart. Hey, good morning, good morning, and do we have a guest today? Bart? We do. We do have a guest.

Speaker 3:

Very special guest I would say Very special.

Speaker 4:

Thank you, we've been trying to get her on for more than a year, Exactly Right. So after a year I think we managed.

Speaker 2:

After 75 episodes.

Speaker 4:

Yeah, exactly Just 75,. You know, special occasion? Exactly so we're joined by Alex. Hello Hi Alex. Yeah, indeed, indeed. So can we get a round of applause for yourself? Yes, yes, there we go. Yes, yes, happy to have you here?

Speaker 2:

happy to have you here. How are we all doing? How are you doing, alex? Good, it feels weird to be on the podcast, but I'm still behind the scenes a little bit yeah, it's normally, is the our behind the scenes?

Speaker 4:

uh, super support exactly, exactly, she's the one that took us to the next level that does all our other thing, exactly, exactly.

Speaker 3:

Keep us online.

Speaker 4:

Exactly so. We all need an Alex. You know, that's true. Very cool, very cool. Maybe last episode, actually, I think we were discussing a bit about Gen AI and the use of Gen AI, right, and I think so. To recap, there was a meme. Maybe actually I can pull the meme up, but uh, do you remember what it was? But while I pull this up, uh, I think so.

Speaker 3:

Yeah, something about art, right yes, that's all you remember that we should, that we should leave art to people and that we should put ai to better use something like that right, yes, so oh no, it was something with robots yeah, use this is the issue with ai to do to do a laundry, yeah yes, basically the issue with ai was that it wanted ai to do the laundry.

Speaker 4:

Actually, I just found it here, I'm just gonna put it on the screen. Share this step. The problem with ai is I want ai to do my laundry and dishes so I can do art and writing, not for ai to do my art and writing so I can do my laundry and dishes. So maybe to start today's episode, to revisit a bit the discussion yeah, I think I made a statement.

Speaker 3:

Yes, that's. Alex wholeheartedly disagreed with no, so much that she wanted to come on the pot she said like this enough is enough.

Speaker 4:

Yeah, this is it. What was your statement part?

Speaker 3:

um, I think it was something along the lines of uh like uh, that I didn't agree with the meme in a sense that ai doesn't uh stop you from making any art exactly right, right.

Speaker 4:

Just because I can do art doesn't mean you cannot do art. Exactly the same way that if you do art doesn't mean that I cannot do art.

Speaker 3:

Yeah, maybe with the nuance of that. Maybe in a professional setting you will be somewhat forced to pick up these tools to be as efficient as your colleagues, but in a personal setting, like, you're not obliged to use anything, right yeah, I think also there is also a good distinction there.

Speaker 4:

You said, like as a tool, yeah, right, so it's not going to necessarily place the people, but it will speed things up. You will get a first version, but I still expect there to be some manual editing. And what do you think, alex?

Speaker 2:

so two points there, uh, the first is that I agree it's a tool, so I use it as sort of like the starting, my base. So, for example, with video editing it can do the rough cuts. I mean I haven't used one that works well, but eventually I think it's going to work pretty well.

Speaker 3:

It takes a stronger AI to edit Marilla.

Speaker 4:

Yeah, I know, I don't know if it's a good thing or a bad thing, but, as I said, I know I was thinking like thank you no.

Speaker 2:

Yeah. So I use it as a base. I also use it, for example, with ChatGPT to kind of get some ideas, like to brainstorm. I think it works really well. But the problem for me, I think, is that if we, for example, with Suno, it's going to generate a lot of music and I don't think it's yet the best quality music, um, but once that gets better, of course people are going to be using it, and also with social media, content creators, everything like that, um, you're gonna have a lot more ai generated content and I think it's just gonna saturate everything even more. I mean, it's already very saturated for artists, for singers, um. So that was more my point, that I think it will take it away maybe the spotlight a bit for some creators

Speaker 4:

especially if it gets better, and it will get better maybe a question as well, and I don't know if I mentioned this on the the pod, but I also feel like arts and well, arts in general. So it can be music, it can be, writing, it can be. I do feel like there's a bit well, maybe not writing as much, not sure, but I think for music there is a bit of the human connection point um. So I feel like, if that gets good enough, that it fools people, that's fine. But I do feel like people want to connect through music.

Speaker 4:

Like you, the, whoever wrote, whoever made this piece of art was trying to express a feeling and you relate to that feeling. But if you know that it's computer generated, maybe you feel a bit cheated, like it's not a real feeling that you're relating to, you're just relating to an echo of something, a reflection, a poor reflection of something. So I'm not sure, like how, I don don't know. That's kind of how I feel. Like, even if you see a jni song, I think it's entertaining, but I don't think you're gonna feel really moved by it.

Speaker 4:

Even if it is something that is very profound, like on paper, let's say, I don't know if you're gonna feel very moved by it, because I think there's a bit of like the struggles that the the creator went through. You want to. You feel that and I think if it's a ai like yeah, there is no struggles, right, it's just saying stuff, but how do you know? That's the thing. If you don't know, like, it's like the brazilian saying, like, what the eyes can see, the heart can't feel, right, so I think that's today basically like if the if you look at the outcome of suno, you probably will still know, yeah, yeah.

Speaker 2:

But like after five years to this, yeah, for sure but I also see it, like I said, being used as a base. So maybe if you're like a producer or something, if you're just stuck on something, you need to get a new beat, then maybe you can use ai to just get the base and then you build off from that that is true even like uh, I mean, even if you have still a singer.

Speaker 4:

But they use ai to write the lyrics right yeah, it's, then there's no way you can tell, right like, uh, but still kind of takes a bit away that core, right. So but yeah, I mean there's an argument that today that kind of already happens with ghostwriters, right like people just write stuff to people.

Speaker 2:

So but there's also the point where, like with ai, you can kind of tell when it's written by ai. So is that going to happen with music as well?

Speaker 3:

like it's all going to sort of sound the same, yeah that's true or more similar but won't it also a bit uplift, because I understand like it's a very crowded market already right it's hard to get a career in in arts in general yeah, because music before whatever yeah, like, but isn't also gonna, at the same time, gonna elevate people that are really good, like it's gonna push out?

Speaker 2:

yeah, it can automate the tasks that, for for some people, take a long time. So let's talk about video editing. It'll make it a lot faster. So then they can focus on the creative aspect a lot more, like on the storyline or but even I mean more like the, the human aspect of it.

Speaker 3:

Let's take the example. Remember what you were saying ghostwriters. I'd like you have probably a lot of average ghostwriters that you could theoretically replace at some point. Yeah, but, maybe there are a few very good and they will become even more valuable yeah, they are able to but stand apart from the rest and but then now bring a real story and a creative story.

Speaker 4:

As you say this, I'm thinking also of the story we covered as well last week that Mark Zuckerberg wants to replace mid-level engineers with AI, Because I was also thinking like one I brought up is To me that is different, but in the sense that I do think that sometimes for you to be a good developer, you need to be a junior developer first, like you gain with experience, and maybe to be a really good ghost rider, you need to be a bad ghost rider first, like you're not just, like you don't get up in the like you don't, you're not born like knowing how to do these things. And if you're saying that jni is going to elevate you, just kind of create a bigger gap and how you're going to go from level zero to level one now Right, see what I'm saying, or no?

Speaker 3:

Yeah, I think I understand what you're saying.

Speaker 4:

It's a different set of problems, but I also. I mean, I get what you're saying, but I feel like, in essence, what's happening is that you're creating a bigger gap, right, because the mediocre ones are going to get nothing and the good ones are going to get everything. Well, yeah, that's a bit the danger, right? Indeed, yeah, yeah, true, yeah, indeed, so. So, even then, there is a good side that it like, yeah, enhances the human component to it, but there is a bad side because this increases the gap and maybe it's not sustainable because you need more people to get to that level. Right, but if they're not hireable, then how are you going to get there, right? Another question, another thing that came up after we recorded was the the um, environmental impact of these things right.

Speaker 3:

What was your question, alexander?

Speaker 2:

no, you were mentioning a bit like is it is it, should we put these yeah, it was more the fact that, like, I think these tools are really cool, like sun, suno and everything Um, but I think it's also important to put those resources into something that's that's more good for for the planet. I think the environmental aspects is not really talked about as much. Um, and I think it should be, because is it worth creating like a fun song? But then what's the impact of that like? Do we have any comparisons?

Speaker 4:

uh, I I don't have it now. I know some people do.

Speaker 3:

Um, like tim leers, we had an episode with him, yeah you have something, um, I'll look at it up a little bit after we had the discussion, because you were saying a bit like, because we had the discussion on the arts ish scene, creative scene, maybe we call it the creative scene, maybe a bit more a wholesome term for it, and I looked up a bit like what are these causes? Because you were saying like, should we put all these compute resources into this? Should this be the priority, or should we focus on something that has more of an?

Speaker 2:

impact right.

Speaker 4:

And I did look at it a little bit, but I need to so while you look at that, maybe to also give a bit of maybe not perspective, but like I think the a lot of these models, the large language models, they are technically, they they model language, right. So the idea is that you train it once and then you can use it for a whole bunch of things. So it's almost like you pay once and then you can just use it. So if you have one model, you train one model that can help you with your writing but can also write songs. Technically, the costs you pay environmental costs or energy costs most of it is in the training which you do once.

Speaker 2:

I thought it was every time you make a request.

Speaker 4:

And that's the new models. So GPT stands for Generally Pre-trained Transformers. That's what I'm describing now. And now the new OpenAI models the 01, 03, they have way higher compute requirements when you're running these things. So in that case, yeah, every time you run something, if you're going to say, hey, create this song, there's going to be a bigger impact. So there's a bit of a distinction there. Did you find the numbers?

Speaker 3:

Yeah, but take this with a very big grain of salt because there is not a lot of finite numbers on these commercial models available, right, like not a lot of formal numbers. Do you have a link? Maybe you can put on the screen? Okay, okay, um, she's just making it up. So for, for I think indeed, like marilo saying, like there is a big difference with with using a model versus training a model, um, with training, uh, a large model like gpt4 or a clot 3.5, for example, um, where typically the because there's also when you train a model like is it isn't an incremental change, like there's already trained model and you do something on top or do you train from scratch.

Speaker 3:

And what you typically see about the hypothesis is because, again, there's not a lot for not a lot of formal information, like it's such a big architectural change from gpt 3.5 to 4 that that they need to retrain from scratch. And the energy estimations they vary wildly, to be honest, and they go from, let's say, 90,000 households electricity from 9,000 households per year to 90,000. So that's a bit of range. So that's more or less the power consumption of a whole city for a year yeah, small city, small city, small city for a year.

Speaker 3:

But it's hard to, yeah, put that into. Like, what can you do with it, right? Like, what is the, what is the added value? Like, is it worth it? Because a lot of things that take a lot of energy, right, indeed yeah and then, if you look at inference, so when you generate something but it's hard to because it's very much depends on what do you generate Like? And I looked a bit because I told you like, with Bolt I'm a boltnew, I'm making like a mini SaaS. I will publish it in the coming weeks.

Speaker 4:

Maybe we can. We can, we can show here live.

Speaker 3:

We can show it here. But what I will roughly use up to publishing, I will roughly use 80,000 tokens, 80 million tokens sorry, okay, 80 million tokens, and again very rough estimates. It comes a bit down to 60 kilowatt hours and that is. That is, if I'm not mistaken, what my, my car takes for one full charge.

Speaker 4:

Okay so if we say, like, if you had a gpt model just for this, just for you, just for this application, it would be the energy of a city for a year.

Speaker 3:

A small city plus a full car battery, yeah, and of course the energy for a full city, but it's not for you right?

Speaker 3:

no, that's for a lot of people but if you look at just energy for a full charge yeah, if my estimation is correct, yeah, I hope we're not gonna get a lot of, a lot of hate mail then, um, I mean it's worth it because, like honestly, like to build what I'm doing now, it normally you would take months with a team like this team would have a much bigger carbon footprint than one charge, to be honest yeah, that's true, that's true but of course, like, what is the can you like with this approach, can you offset the training?

Speaker 4:

that is a bit of question, right like that yeah, indeed, but I think there is also and I think if you go for the o1 models, then it's a different approach as well, because then there's way more like. We saw the numbers for o3, high compute and actually I can pull these numbers. This is also from the last week's recording um, oh, three numbers. They for the high compute. It was like it was a lot, a lot of money, but there was that was like, uh, like a lot, a lot more right, a lot more I remember no, much higher complexity indeed

Speaker 3:

indeed much higher complexity, and then I agree that for those cases, uh, probably but again, even there, like if, if, if these tasks and we still have to see them because it's not published, and like if these tasks normally would take, I don't know, three months by a person which also has a carbon footprint, and you can do it now in one hour, even though it takes much more. Like a peak, yeah, yeah, power consumption at that one, like it's it's. It's a question like what is the the? The problem is, of course, like everything becomes more quote-unquote efficient and you're just, your overall consumption goes way up right yeah, because like you are gonna it's not a status quo.

Speaker 3:

Yeah, yeah, yeah, you're just gonna keep doing.

Speaker 4:

Yeah, yeah, maybe hold on. Let me just for numbers here. I'm gonna share it again, this screen. This is what I had shown a little while ago.

Speaker 3:

I don't even know if it was last week, to be honest and I think alex's question was also even maybe a bit more complex in a sense like, okay, you have this power consumption, but should you not focus on things that matter more? Indeed, which is very it's a difficult question, right? Yeah, what matters more?

Speaker 4:

because then it's also hard to quantify, because right now we're trying to quantify energy and this, but, like when you quantify the value of a task but I think it's also a bit people need to choose the right model for the right task, right? Because, like, cost per task is over one thousand dollars, right? So it's like, are you gonna use that to to write a, a funny 30 second song? Not?

Speaker 3:

sure right so maybe this will be that really emotional song that you yeah, maybe.

Speaker 4:

yeah, but that's true right, like if you're an artist and you need to constantly crying one thousand one thousand dollars, you know that tiers that's how much it costs.

Speaker 3:

Totally worth it, yeah but it's a difficult one.

Speaker 4:

It is. It is a difficult one.

Speaker 3:

But the domain that met us more is just a very difficult discussion. Yeah.

Speaker 2:

But, like we also said last week, like if AI can really replace jobs but help also and make everyone more efficient, and then we sort of live in this like utopian society. You had mentioned the Jetsons, right, then. If that's the case, what about the energy consumption then? I mean, it's also that's the trade-off, right.

Speaker 4:

What do you mean by the?

Speaker 2:

Like if everyone can have an easier life with ai. Yeah, the energy consumption is gonna go yeah, because it is more used yeah, yeah yeah, but that's true, that's true.

Speaker 4:

I think this. Yeah, I think so. I guess what I'm taking from the discussion is, if the suno and all these things they are used with gpt models, so generally pre-trained models that the inference cost is small, then maybe it's not as big of a deal today. But as soon as it starts using well, and I guess, like my also the other hypothesis I have, what I've read, that the gpt models they kind of use all the like almost all the data in the internet, or like there's not gonna, it's not gonna increase more. That's that's the impression I have like they're not gonna find a new data set or new data sets that is gonna double the volume of training. Do you agree or disagree with that?

Speaker 3:

maybe first, um, meaning that in the future you would that we have even bigger models. That's what you're saying or yeah.

Speaker 4:

So what I? What I saw in a few places that open ai basically uses the whole internet to to train their models. So if you train a new model from scratch, even if it is a new architecture, you're not going to have significantly more data now. That is going to say now it's going to take twice as long, three times as long because of the data. Maybe the architecture changes in any more training whatever. But so my hypothesis is if you still have the same or comparable training costs during their initial, the pre-training phase, like the, and then you just have inference, that is less of a problem to yeah, for the songs and all these things, because also use it for a lot of different applications. But as soon as you start to use more the 01 models and the three models that have a high cost for inference, then maybe makes a bit less sense to to. The environmental impact is considerably higher for making a funny song or something but also just more people using it.

Speaker 2:

But that was more my, my point yeah, that's true I'm using it every single day.

Speaker 4:

There's not a day I don't use it every day she has a like, a no, not, she has an opening song like her alarm. Every day is like a new suno song. You know it's like good morning, it's gonna be a great day I took this up to my home assistant just every morning. It's like a good morning song yeah, that's true, like, but actually I checked suno for an api but uh, I couldn't find anything like a simple no. I mean there are workarounds, but uh, yeah, it could be fun every day, just like.

Speaker 4:

Give me the the top news stories of the day. The weather put it in a song, wake me up with it. It started there. Uh, cool maybe. Um related to this, to this discussion on the gpt versus o3 models, one thing I read a little while ago.

Speaker 3:

Um, yeah, you have another thing to say about before I move on yeah, maybe also related to to to this, I think, what you would be hopeful for, but uh, yesterday's inauguration maybe doesn't make me hopeful is that you would also see, like the the, that you see an offset of this environmental impact, that you see some regulation around this yeah but I think, based on it doesn't look like recent news.

Speaker 3:

We don't see the world going in that direction now, yeah, yeah. So that's that. That, to me, is like the biggest, uh, the biggest challenge, like how do you, how do you, balance environmental impact with societal progress?

Speaker 4:

yeah, yeah, yeah, and there's a big yeah, I know it's a whole. I feel like music. We can move the discussion to a whole different domain, right, because there's also the some people that I've talked to feel that in europe is highly regulated and too highly and that hinders progress, and then europe may fall behind in terms of technological development, and then we're always consuming stuff from the us because of their head, because there's less regulation, and then how do you balance that off?

Speaker 3:

but this and this, but yeah, it's, uh, it's another thing, and that's maybe a segue to uh, an executive order that was signed yesterday from maybe such as this no, the other one, my bad hold on the other one.

Speaker 3:

Yes, this article that you have here, uh, by tech crunch, it says that president trump repeals biden's ai executive order. So, uh, joe biden, in uh 2023, 2023, created an executive order that basically instructed the NIST National Institute of Standards and Technology to come with guidelines on safe AI, more or less. How should we look at should we regulate or not to basically have a framework for safe AI and how to move that into regulation going forward. And we never got to that stage of regulation. But now it's also killed in action by this new executive order that basically fully refocuses.

Speaker 4:

And that's what you were saying.

Speaker 3:

It's basically a free-for-all.

Speaker 4:

And that's what you were saying, that it's kind of the US counterpart of the AI Act.

Speaker 3:

Yeah, exactly, this is exactly. And what we see is that indeed, there is much less regulation on this, on AI in general. But it extends to also what is the environmental impact on this? Like the green deal is being canceled, like that's the the pity of this, because I fully agree with like you need to think about. Like what is the energy consumption? Like is this being good to put use and but, and if you say it's being put to good use, even if you can make that argument, then ideally you put an imbalance with how do you make sure that you're not completely fuck up the environment for the generations to come?

Speaker 4:

Yeah, yeah.

Speaker 3:

And that's a very difficult balance and also something that we clearly see that on a global stage, there are very differing views.

Speaker 4:

Yeah, yeah, I'm wondering how, because it almost feels like it's, you know, when you're developing, and then it takes some shortcuts, you know, and then at some point it explodes in your face. That's kind of how I feel this is. It's like you want to say I know, I want to encourage progress and I want to do this and I want people to be worried about doing this. I don't want like, when you're developing, don't worry about the security policies and whatever right, and then at some point it just kind of explodes in your face. That's kind of how I feel. But I guess, to be seen, what's gonna happen to be seen, to be seen and maybe uh, as a free-for-all.

Speaker 3:

The other article of tech grudge was an interesting one. This one else it's about uh zuckerberg. He's caught up in an AI copyright case around Lama, having used copyrighted information.

Speaker 3:

And the article is actually about. He turns to YouTube for his defense, where he says that YouTube also sometimes has pirated content, but they throw some stuff out and you can't blame the platform for the copyrighted content, et cetera, et cetera. With that he's trying to make an argument that Lama also has the copyrighted content, et cetera, et cetera. With that he's trying to make an argument that Lama also has some copyrighted content, but it is okay, youtube also has it. But okay, it's a bit beside the point of the argument I want to make.

Speaker 3:

In the article it shows that they are using LibGen as a source for their data and, of course, I know very little about the details of LibGen, but LibGen used to be very easy to access and you could download like literally any book that you were searching for, like whether that comes from, whether it be from academia, a novel, like whatever you could download there yeah like super easily yeah and it's uh, super legal as well.

Speaker 3:

No, super legal, super legal, not. It does not have to be clear. That's also why I don't know anything about the details of it. But it's interesting that they use this as a source, because it basically means I mean, that's, that's, I don't know. It's hard for me to put a percentage on us, but it is a large, large percentage of the books that are that have, let's say, quote-unquote a significant fan base. I think everything that has somewhat of a fan base was on LibGen Fan base, let's say an audience, ah yeah yeah.

Speaker 3:

Like, maybe if you wrote a book in your basement and no one read it, only your mama, it was. Maybe not there.

Speaker 4:

Yeah, yeah, yeah, the final moment you have 10,.

Speaker 3:

Like you have 1,000 people that read it. It's going to be on LibGen, but 5,000 people that read it. It's going to be on LibGen, but I would imagine it's not legal to.

Speaker 4:

So there's a huge knowledge base, right, but I imagine it's not legal to use the LibGen stuff right, no, no, no. So it's like if you say you are using it, that's already, then it's game over for him right.

Speaker 3:

I don't know, let's see, because, yeah, not a legal expert, but imagine. But yeah, not sure, but it's interesting that this was also part of their knowledge base. To to train libgen yeah, that's to train a llama, yeah, to train llama, because it's a huge amount of of text. Yeah, yeah, it's uh, even aside from what is typically, available on the internet.

Speaker 4:

Yeah, yeah, indeed, indeed, indeed, and probably. Well, maybe I don't know, probably, but my guess is that it's better quality as well. Right, because on the internet there's already a lot of Gen AI content, and I think with books I would imagine there's more review, there's an editor, it's more curated. So, better quality as well, I would guess.

Speaker 3:

Sometimes these days, even though I use it a lot, I'm still amazed about the recall of these models when recalling certain facts about things that it's trained like. The other day I was uh, I was uh looking, I was asking claude for some information on mailgun, which is like a like a email sending api. Yeah, and it really comes up with the link to the documentation yeah, where this is explained.

Speaker 4:

Yeah, so, maybe for for people that maybe I know what you're saying, I think, but I'll try to also explain to the people that are not as aware. So the idea is, like we talked about the generally pre-trained right, so, um, the idea is that it can recall from that, uh, from the training data right on the training weights, that information and you can actually extract that so from the training data in itself. So it actually doesn't doesn't have like the index with all the information, that it just kind of looks it up on the fly. It's really knowledge that is embedded on the training weights of the network, exactly. Yeah, it's pretty crazy, it's pretty crazy. It's pretty crazy, actually it's. I think it's crazy, but I think it's crazy for us that we know how this works.

Speaker 3:

Yeah, but I think for someone it's like yeah, of course, because typically like the intuitive reasoning is okay, you have it has like a knowledge database somewhere, like you would do see with rack, and it fetches this somewhere yeah but here is like this information is really like somewhere yeah quote unquote, imprinted in these, in these notes and edges, and yeah indeed, indeed, and in a way that whenever you're trying to predict the next characters, you will find the right characters for that.

Speaker 4:

It's a bit crazy. Maybe a good segue on that um, something that does not work like this is perplexity. Do you know perplexity? Uh, I know perplexity is perplexity. Do you know perplexity? I know perplexity. Yeah, so perplexity. For the people that do not know, I'll just share here something oh, hold on. Yeah, oh, let me share this tab instead. Perplexity is kind of like a Google replacement, but it's Gen AI powered, right. So basically, you can ask, like, what is the data thought? And I'm sharing this screen for people listening podcast, let's see what comes up. So you have a question it's a bit like more conversational and then what it will do is that oh, who's this guy? And actually I think so Even goes through YouTube. It has like a transcript, right. And then, based on all the sources, it's like really right, right, right. So it takes all the sources and it kind of gives uh, technically, it gives an answer based on the sources so the like rises the source indeed.

Speaker 4:

So the likelihood of uh, hallucination is maybe a bit less, and all these things, and then it gives a. It gives a answer with the sources there, right? So that's not what you're saying. This is not on the training weights. So if you ask something that happened yesterday, it could still give an answer, whereas if you ask like a Cloth which does not have access to the internet?

Speaker 4:

Exactly because it was not trained on the data there. There's no way that it's going to say so. I think it's really cool, maybe. Another side note I really like this when I'm Googling something, when I'm searching something, I guess that I don't know the actual keywords. So I think I was looking for like star unpacking on Python or something, and I was like I just kind of gave an example like this, like yeah, when I'm trying to do this and I give an example of the code, how can I do this? Or why is this not working, this and that? Or is there a bug on Python 3.14 or something, import something? And uh, and actually came up with the terminology because he found probably one article and then, from that search, was able to give me the actual sources that I needed. So it was really cool.

Speaker 4:

The reason why I'm bringing this up um, so perplex is really cool for people. I would say check it out. Uh, perplexity is acquiring carbon and um, carbon is a data connectivity. So I think the idea that what they want to do now, and probably where they're shifting their product, is you can plug in your data sources and basically you can search in one place. So, for example, we have documentational notion, and then perplexity can be the one place you go to to search these things, and if it's something regarding your organization, it will be able to fetch these things for you.

Speaker 4:

But then you have everything unified in one place and maybe even you can have hybrid queries, right, some things that maybe some of the stuff is on Notion, but some of the stuff is actually on the internet and it merges these things for you. So this is not super new. This was from end of last year, but I thought it was a very interesting possible future. You know the future of search, right? Not true? So, and I think they do a good job with, uh, the current platform they have yeah, the current platform.

Speaker 3:

I think the argument that he, that their ceo, always try to defend in interviews is that why is it not just? Why is it more than just a wrapper around? Open ai? Because they don't have their own model right yeah but it's indeed like they have a very smart way of getting data sources into their context and coming up with relevant and timely questions. That is their value proposition today.

Speaker 4:

I think it is challenging to see like the chat gpt now also has to search search options and um like it will be interesting to see how they can defend their position going forward indeed, one thing that I saw and I mean again, sharing the the search example, you even see stuff from youtube, so I think they transcribe youtube videos as well, so they also have this embedded. It doesn't take too long. You can also rewrite it with a different model. So, like you said, even grok 2 is here. They have one. Yeah, you still need a pro account for this, but yeah, it's yeah. Indeed, I think the value proposition is not necessarily on the models, but indeed it's more the engineering behind it. Right, like how you can make these things efficient, how you can index things efficiently, how can you scale this service?

Speaker 3:

And ProPlex is also very interesting. It's interesting to watch in the market. I think their CEO is very bullish, like he does a lot of acquisitions, some that are relevant, some that are like are these like? You need to be make a bit more of a mental leap to try to give the relevant like this is data connectivity. It's easily to fit. I think two days ago they acquired readmemecv, I think, which is like professional profiles, and I never heard of it before. But it's also clear like it's just a talent acquisition, like they're gonna decommission readmecv, so it's really just a talent acquisition so you think it's more like uh just trying to push them?

Speaker 3:

out, yeah, yeah, yeah. Well, not not necessarily, but to really grow their team with the smartest people. I think that is the like for for me, but and he does a lot of these acquisitions um, and also there was there was some rumor I don't know, I didn't really uh, didn't really fact check, but there was some rumor that he was gonna do a bit on tiktok us ah, really yeah but tiktok is banned in the us now.

Speaker 3:

No, no, it came on again. Ah, really message to the, the users thanks to trump, it's back on. Oh wow, yay, but uh, but that would like it's. Uh, you need to be make a bit of a more of a mental leap like why is that interesting? But there's a huge source of data, of course yeah, that's true that's true, but like it's interesting to see, like to they're not necessarily focused purely on the project, but also how can we make our project grow with right ecosystem of players to acquire yeah, it's a meaning.

Speaker 4:

It's almost like you're playing chess, right. When you're like, this thing is like okay, it's not for the products, maybe it's the people or maybe it's the data, or maybe it's this, maybe that it was very aggressive, which, yeah, I think, uh, sometimes it works.

Speaker 4:

Maybe one thing, so a bit segwaying again um, you mentioned the value proposition for perplexity is not the models itself, but everything around it. Um, so I guess it's the engineering behind it. One thing I came across uh, this is from 15th of january, so not that old is the emerging role of ai data engineers. Have ever heard of ai data engineer term? Nobody can imagine something with it.

Speaker 3:

What would you think it is? I would assume it is, uh, very similar to what we now see to build easy mvps and like web, like web applications, that you would have something like that to make data pipelines and stuff.

Speaker 4:

So it's like you prompt to create data pipelines to.

Speaker 3:

Well, instead of writing for example, let's say you were writing in dbt or in Spark and instead of writing the actual SQL or PySpark or whatever you just prompt, I want the pipeline that takes this from my connected Shopify and I want to first load everything there, and then I want also the logic to transform it into that. And that Did you just describe it. Did it also build some tests for you? Did you explicitly say I want it to look like this write the test for me, and that you iteratively see? Okay, I'm going to run a test.

Speaker 3:

Not, not everything works and I'm going to reprompt a little bit to to shape it. This will become, I know and I honestly think that is a question of time like this will become the new ai data engineer.

Speaker 4:

So what I was thinking when I read this? The title at first, but I didn't read the article at all yeah, yeah, we'll get to the article. But what I was thinking when I read it was a bit not necessarily use ai to write the code but, like in the pipeline, there's a gen ai step. For example, you have text and you want to classify this, and now you just use a. You have a gpt step in the middle.

Speaker 3:

That's what I was thinking yeah, let's see what you mean right, uh, so it's a different way of looking at it.

Speaker 4:

So, and actually, what he's talking here? Um, well, he does talk about llms, but maybe I'll just go to. Yeah, he also talks about the today. That it's a bit different from what we're saying. Yeah, he says about the complexity of data right, he also mentions that a lot of data is not structured anymore, so text data, images and videos and audio. We also talked a bit about text data. In terms of data quality, for example, right, how, now, if you want to assess the quality of the data and have unstructured data, it becomes a bit more difficult. Like the traditional statistics mean, max standard deviation doesn't really apply. Yeah, resource intensive processing, et cetera, et cetera. So now to the core responsibilities of AI data engineers. According to him, data preparation, pre-processing To be honest, I think it's just data engineering, he says, using Python, spark, ray.

Speaker 3:

I think what the author here said the AI data engineer what he's describing is a data engineer specifically that builds data sets or data lakes for AI solutions.

Speaker 4:

Indeed, I think that is the different take. I think it's like to me a lot of the things that what he's describing now as an AI data engineer is what I would think of an MLOps person kind of you know. So, for example, enhance AI Training Datasets, he says, using Gen AI to create synthetic data I think it could be done. Yeah. Data quality and bias mitigation yes, okay, that I can kind of see as a new AI component, but I also think the data quality is already the engineering thing. Again, data scalability and optimization To me this is already just plain old data engineering. And then, well, he also talks of regulatory compliance, security, which I'm going to skip.

Speaker 4:

Integration with the IML frameworks, so integrate seamlessly to TensorFlow, pytorch, hug Hugney Phase, for example. Again, to me this feels more of an MLOps kind of thing. I do think that the space will change for Gen AI because you're going to want to add a bit more AI stuff there, ai powered stuff, but I'm not sure if I fully agree with his view on this. And also in the in data roots, we have the ai unit and the data in cloud, right, the data engineers. So I feel like this kind of sits exactly in between right just say ai data engineer.

Speaker 4:

And then boom you have a new role. So I'm not I don't fully agree with it. I do think that data engineering will change. I think even the data roots. We're also talking about how jni will change in the, all the three units right, so the, so the AI, of course, but also the data engineering and the data strategy. But I think a lot of the things that he's saying here it's like already been done by MLOps, engineers or other profiles. What do you think? Do you agree with this after having seen what he's talking about?

Speaker 4:

Whether or not this is a rule or yeah, whether this is a new thing, first of all, and whether this is a to me like this article.

Speaker 3:

Just what it says is that, when it comes to engineering ai solutions, all the preparation up to and after a model a, an ai solution is slightly different than from the moment. It's a reporting solution, right yeah, and I agree, like, like, like you can. In more traditional terms, you can spread this out over data engineering and mlops and machine learning. I don't think, indeed, there's not nothing much new this is completely different than what I was saying based on the data.

Speaker 4:

Indeed, but I think there's a lot of ways and that's why I wanted to ask you before diving in, because I feel like there's a lot of ways that the, the gen ei is gonna come in. Yeah, the, the engineering space in general, right? Um, now maybe another more timely news as well um, you know, apple intelligence, right, bart uh, I know it.

Speaker 3:

I never used it, but Well, you're eager to use it.

Speaker 4:

You're like, ah, it's going to come to Europe soon. I wanted to try it, yeah. Yeah, what I saw is that you're going to have to wait a bit longer because Apple suspends Aaron Strome, ai generated news alerts. Strome, that's a new word for me, aaron Strome, so for me, for me, okay, so I was looking at this is ai generated, um, so basically apple intelligence. They had, uh, basically something to summarize news, right. So whenever you, you have a notification on the top but apparently it was hallucinating, right, and I think for news is a bit um, I guess it's a bit more sensitive, right, um, so they decided to to stop for now until it's in a better, in a better state. Uh, have you seen anything about this part, or, alex? Actually?

Speaker 4:

I haven't known and um, what do you? You haven't used this either, but you could use this, don't you? On your new Apple.

Speaker 2:

But I can't right because I'm in the EU.

Speaker 4:

Oh, yeah, sure, but you have a new yeah yeah.

Speaker 3:

Would, you use it Like do you know a bit what the premise is? What you could do?

Speaker 2:

I'm not really sure.

Speaker 3:

I think you can make Gen AI emojis. Oh yeah, I would use that, maybe also plugging back into our earlier discussion in the Apple Intelligence the idea is that, but also like sorry, I'm interrupting, but also like you can refine text, like you're writing a message and it proves it. I think it also allows you to summarize stuff.

Speaker 4:

I think even the summary wasn't just for news. It even summarizedized like the text messages Like ah, bart said this, marilo said that and mom said this, like you can get, like the pop-up notification, you would give a little summary, but it was hallucinating so it wasn't reliable, right? Ah, yeah, one thing that also App Intelligence they said they would do is that they would try to run the compute on your phone first and only use the network if it's if it needs, right? Uh, so there's a data privacy and the scalability and all these things, right. So just to plug in a bit there. But yeah, it's been, it's been paused and apparently there's even a subreddit nowadays with apple intelligence fails.

Speaker 4:

I haven't seen this yet. So maybe, uh, maybe this won't make it the end of the pod. But here I see, give me a heart. So I'm just showing the Reddit, the subreddit of Apple intelligence fail, and then, yeah, give me a short heart attack. And then it's just a notification on Instagram, drunk and crashed. So I guess it's a message from someone. See, trump, pardon, fauci. See, just hallucinating stuff. So, yeah, I guess we won't be playing with it yet. Uh, it will take a bit, a bit longer until until we get there, until we get there, but surprised that it's not as polished as, because I feel like Apple products are usually known to be very polished right, and they took their time also releasing something. So, yeah, what else? What do we have? We have time for some Tech Corner stuff. Part One One.

Speaker 3:

One.

Speaker 4:

I'll let you pick. I think I know what you're going to pick, but I'll let you pick, in case.

Speaker 3:

No, I'm going to pick two.

Speaker 4:

It's like I make the rules here.

Speaker 3:

I had a question for you, okay, rye.

Speaker 4:

What is this? Rye Bart?

Speaker 3:

So you as a Rust aficionado.

Speaker 4:

Yeah, I feel like I need to change my titles again because I'm not sure how. Are the expectations too high now? No, I mean yes, but I also haven't touched in a while.

Speaker 3:

So Rye is an embedded scripting language for Rust, and have you ever heard about it? Never, not. How would you?

Speaker 4:

describe it. Rye is an embedded scripting language and evaluation engine for Rust that gives a safe and easy way to a scripting language and evaluation engine for Rust that gives a safe and easy way to add scripting to an application.

Speaker 3:

So I have no idea, but I guess that so you have, for example, I'm going to make a parallel um, so you have uh. What you see in combination, uh, I think, with C++ a lot, is Lua as a scripting language. They're easy bindings and it allows you to build something, let's say a game, in the engine that is very efficient let's say it's C++, but it's very hard for, let's's say, gamers to create additions to this or create their mods by actually changing the source code of the engine. Ideally, you want to give an ability to to uh, to easily allow them to script extra functionality. So then typically you have like a scripting language like uh, like lua, that easily binds with the core engine, which is much more intuitive. Lua is a little bit like Python in terms of intuitiveness and Rai is apparently I've never heard about it.

Speaker 3:

Rai is apparently this for Rust and it has become. I looked into it a little bit two years ago. It was very, very slow so there wasn't really a big uptake, but it has improved a lot in terms of speed. So now when you have like a solution like fully rust based and you want to allow users to easily add functionality through scripting, rye is a apparently a good candidate. It's still way slower than rust itself, but it's good enough.

Speaker 3:

So, for example, glico, like the programmatic music engine G-L-I-C-O-L actually uses this, so that you can write custom functions to generate music and it uses Rye to interact with Rust.

Speaker 4:

So, to repeat what I understood of what you said, basically it's like a simpler way of writing Rust that you can just kind of interpret it. You don't have to compile all these things, you just kind of run it there. But it looks a lot like Rust, huh.

Speaker 3:

Well, yeah, it's a better language for Rust. Yeah, indeed.

Speaker 4:

The only thing I see different here. Yeah, okay, that's cool. I think it's also interesting to see on Python there are also ways you can try to make your script run faster. Then it kind of goes the opposite direction. So like MyPyC or Cython, basically from your Python code which actually a lot of times is not technically Python code, because Cython has some weird keywords which is not technically python it creates like a c version, compiles it and then installs in your interpreter, right. So it's like almost the opposite. Like you, you try to compile python code to make it faster and then you can use it on your regular interpret stuff. And then here's a bit the opposite, right, like here indeed, like like.

Speaker 3:

Performance is not the focus here.

Speaker 4:

The focus is really like extensibility indeed, and I guess if you, if you have something like this and you want to convert it to rust, it's probably not going to be a big step, right, like if you say, okay, this is something I want to incorporate on the core, the core of my application, it probably is not a very, very, very large step, right, so that's cool, I like it. I also think it's everything should be a bit like this right, you have a bit of, like, the flexibility and the speed to put stuff in, and then if you want to make it scalable faster, then you have the other counter part right, oh yeah, very cool, very cool.

Speaker 4:

You haven't tried this yet.

Speaker 3:

No, no, okay, um, and the other thing I had listed in the tech corner uh was uh versaraai, which uh showed up on my radar somewhere. There's not actually not a lot of information. They're still very much in stealth mode. Uh, it's uh, if you go to about, I think it's from from MIT and Berkeley, if I'm not mistaken, let's go to team we're on the website it's MIT and Dartmouth. Well, at least two people from there, and what they're building is basically an answer to how can I safeguard my text from being scraped by an AI scraper that will use this for content generation. How can I safely publish text without it being reused by an AI engine without my explicit approval?

Speaker 4:

That was lmtxt, that they wanted to do something like this. No.

Speaker 3:

Yeah, I don't know the name, but I've seen this before.

Speaker 4:

yeah, Because I remember I saw something so you probably know more about this than I do Because a lot of times in your website you can have a text file that is robotstxt and that's for like web scraping, or no, Bart, Is this a?

Speaker 3:

Well, yeah, so you can have robotstxt on your domain name Like and correct. Scrapers will look at this to see if it's allowed to scrape. But, if you're it's not enforced, like it's more yeah it's more of a convention that like yeah, exactly Okay.

Speaker 4:

And then is this different from the?

Speaker 3:

well, so it's just like a way to well, um, the premise is a bit, and I think the the premise is is uh, it's not correct. Is that? Um, ai scrapers will look at raw HTML, will interpret the raw HTML, while humans will look at the rendered document. They will look at the DOM, the document object model. Yeah Right, and what they're saying is we're going to poison the raw HTML without interfering with how humans read. It is in the DOM.

Speaker 4:

So, for example, I mean, I guess it's not a very good example, but if you have a whole bunch of white text in the background for human and it's a white background as well, the white text blends in with the background so people don't see it. Well, I guess, maybe if people select the text or something, but like the idea is to mess with what the computer see without messing with what the human see, exactly like, like you can do this with with colors of text, but you can also do this a bit more more low level that you say.

Speaker 3:

Okay, I actually have this span tag and I hide it. So it's actually completely it's.

Speaker 3:

It's not part of the DOM, even right, um, but it's still in the raw html yes, yes yes, and you have a lot of these tricks and I think they are playing a bit with using css statements to do this. And I understand that it can trick models because, like they have this example if they summarize an article like a bit like the, the, by an ai like the, what comes out is completely not what do you must read, um, but to me it also seems very simple to circumvent from the moment that you know how they poison the text.

Speaker 3:

It's yeah, it doesn't seem hard to instruct a model to ignore the poisoning. Yeah, I mean, I understand their premise, but I think it's it's a very short-lived one If there would be a lot of uptake of this specific type of poisoning and, at the same time, there is a conscious effort to still read all this content. They will read the content and they will ignore the poisoning.

Speaker 4:

Yeah, that's true. I think it's also a bit of a….

Speaker 3:

Because it's a bit like if you ignore robotstxt on purpose. If you say, fuck it, I'm going to make sure that I can read everything.

Speaker 4:

Yeah.

Speaker 3:

You're also going to read this.

Speaker 4:

Yeah, that's true, I think. But I guess, from what I understand, the difference is that this is not just kind of saying. I guess this is going a bit beyond just trusting someone's goodwill. Yeah, this is really trying to make it difficult. Yeah, I'm really trying to make it difficult. So, even if you don't want to respect me, you will have to. I'm putting some hurdles around it. And Robot 60 is more like hey, this is what I want, Please respect me.

Speaker 3:

It's like it's more of a good faith, and this probably works as long as there are like a hundred websites in the world that use this, If there or thousands.

Speaker 4:

they will find a way around this, yeah, but I also wonder if, even if this is more of a research initiative, if people different people are going to think more about this and they're going to have different ways to poison the data and it's going to become more and more challenging and it's going to be constantly evolving and it's almost like hacking and you know like there are going to be new ways to break this, but there are going to be new ways as well to spoil poison it and it's going to be constantly.

Speaker 4:

I don't know, maybe a question you write some, some stuff sometimes, or no?

Speaker 3:

do you ever? You, alex, like, like and I'll make the maybe the parallel like, if you have a blog, like you want it to be indexed, right, like you want people to read it, but lms, well, if, before the era of lms, you would want it to be indexed by something and that people yeah, yeah, that I agree, that I agree.

Speaker 3:

That is reality, right, yeah, that I agree and now there is this thing where lms read it and, theoretically, they take some quote-unquote knowledge from your blog. I don't think that is like, but maybe it's also because I don't. Yeah, maybe that is also a big point. Like I, the blog is very small. Let's be honest, I had a bigger blog in the past, but it's very small and I don't make like it's not part of my professional career or whatever. Right, like it's not something that I need for my living. That's maybe the changes there. Like, like, if I would be a hobby, if I would be a professional photographer or a writer, and this would be used by something that would compete in one way or another, that would allow others to compete with me in one way or another, I would have more of a, let's say, a more ethical stance on this.

Speaker 4:

Like, like no not okay, like what if you use AI to generate these things, to generate what? The images, for example?

Speaker 3:

on my blog, like I'm not. So if you use AI to generate these things, To generate what?

Speaker 4:

The images, for example.

Speaker 3:

On my blog.

Speaker 4:

So if you use AI to generate the images and then AI uses the data you produced to train another version. You see what I'm saying. So, for example, you use LLMs to write, for example yeah Right, that is a productivity. So you want these tools to get better. Yeah. You have something you published, but now OpenAI is going to come and index you and take care of the content of your website to make better tools. Is this a problem to you, if you want?

Speaker 3:

better LLMs to help you write. If I use LLMs to write this and LLMs are using my text to improve. Yes, that's what I'm just saying.

Speaker 4:

Yes, no, that, yes, that's what you're saying. Yes, no, that would be a bit hypocritical, I think. Yeah, I think so too. But I also wonder, like if in the, in the future, it's gonna be the standard everyone's gonna have.

Speaker 3:

I mean, it's gonna be a writing tool, right?

Speaker 4:

like you would have a spell checker but then if everyone uses this and everyone shares the sentiment that it's a person and I, just to be clear if the open ai takes your data to create a new lm version, if everyone uses lms to write we all agree it's hypocritical then no one's gonna have a problem with yeah, because you never had a large body of something out there that was yours.

Speaker 4:

Well, like, if you you think, no, just kidding, if you would have a large body out there of pictures, or if, like, if you you think, no, just kidding.

Speaker 3:

I mean if you would have a large body out there of pictures. Or if you would be stephen king with 200 published books, and suddenly you have, uh, someone that does not have a career in writing publishing a fantasy novel that has very much your style of writing. You would probably think what the fuck is going on? Yeah, I mean that's something else right yeah, that's true you're talking from the, from the point of view that's gonna, from the person that's gonna write in the style of stephen king.

Speaker 4:

I'm thinking more like, but I'm also thinking like in 20 years. Everyone is using these tools that is probably true, yeah but that's what I was saying.

Speaker 2:

Is it also then gonna make everything more similar, like the writing styles are all gonna be?

Speaker 4:

maybe right?

Speaker 2:

I don't think so. No, there's still going to be some distinction.

Speaker 4:

I think so. Yeah, I hope that if it does get to a place that everything kind of sounds the same, then people would step further away from the tools as well. Yeah, right, they'll be like no, this is not this, let's hope. I want to be original, I want to stand out. Maybe what about you, a? You create content, whether it's images or and ducks.

Speaker 3:

She paints the duck eh, she paints it.

Speaker 2:

Well, actually I do paint. So, yeah, what you were saying about, let's say, a photographer, if AI starts using the images, I would also be hesitant to then put any of my artwork out there, which I have. In the past I created a lot of oil paintings and during COVID I wanted to put it online, but I was kind of scared that people were going to take my painting style not that it's that original, but there's a style to it, and with um gen ai, now you can definitely make that. Yeah, same style with with images. So I don't know, I was always hesitant to put my artwork out there and is it a different discussion?

Speaker 4:

is this discussion of with gen ai? Is it a different discussion? Is this discussion of Gen AI, is it a different discussion than just putting something out there and people making money from your like I don't know? You put something on Instagram and then I download and I sell it as an NFT and now I make money from it. Is this a different discussion or is it a bit the same? It's just a new wave of it, I guess, because I feel like a lot of these things we talk about Gen AI, it's the same discussion, but now it's Gen AI. Like we say, gen AI is going to replace people. No, it's a tool. It's just going to make you more productive. Like it's a bit, you know, like same thing a computer is going to replace people.

Speaker 3:

But I think that the discussion that we're having is should we uphold the copyright law or not?

Speaker 2:

right, yeah.

Speaker 3:

I mean, that is a fundamental run right, like today, where I think what the global world is saying is a fuck copyright, we just take everything, yeah, and while there are cases going on like it will take probably 10 years to have a good view on what is allowed and what is not allowed, and I think that is still a fair one, like can you put something out there without someone stealing it? Because that's what comes down to it. If you say you're not allowed to take it and someone takes it, that's stealing yeah, that's true.

Speaker 4:

Very well put, plain, plainly put. Is there anything else that we want to bring up, I think, or is anything else, uh, you want to talk on, say on this topic, anything?

Speaker 3:

maybe an optimistic quote to end with marilla please, bart, go for it. No, no I was looking at you, I was looking at you, or an optimistic tune to end with alex so I tried to create a song, but an error occurred. So I'm not sure okay but did you? Is it a song about an error?

Speaker 2:

no, uh, it's based off of this podcast episode. Okay, wow.

Speaker 4:

But there's. No, there's. You cannot play anything.

Speaker 2:

No.

Speaker 4:

Okay, we're going to edit this in. We're going to edit this in. I'll add it after Okay, all right. Okay, and I guess that's it. Can we call it a pod? It's a pod, all right. Thanks y'all. Bye.

Speaker 1:

Ciao. Thank you, dear. Data to fix unblocks the show, keeping you in the know. Perplexity in every bite Tune in. It's a delight.

People on this episode