DataTopics Unplugged

#54 Is Apple Intelligence...Intelligent? & More: $10K "ChatGPT" Error, Adobe's AI Policy & Open-Source Insights

DataTopics

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.

Dive into conversations that flow as smoothly as your morning coffee, where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!

In this episode:

Apple Intelligence is finally here: Apple's latest AI advancements, featuring GenAI images, a privacy-first approach, Math Notes, and a mention of ChatGPT. Dive into this YouTube clip and ponder: will it work as intended?

ChatGPT in a Spreadsheet: Explore the innovative recreation of an entire GPT architecture in a spreadsheet, a nanoGPT designed by @karpathy with about 85,000 parameters.

“How a single ChatGPT mistake cost $10,000” - A clickbait title that stirred controversy on Hacker News, with many arguing the error was entirely their fault, not ChatGPT's. Read more community reactions on Bear Blog and LinkedIn.

The ideal PR is 50 lines long: Discuss the perfect pull request length and its impact on code quality, as detailed by Graphite.

Any contribution too small? Delve into the debate on the value of small contributions in the open-source community with Slidev.

Adobe overhauls terms of service: Adobe's new terms ensure AI won't be trained on customers’ work, raising important questions about data usage and privacy.

Artists fleeing Instagram to protect their work: Artists are moving away from Instagram to prevent their creations from being used to train Meta's AI. Explore more: Reddit, SWGFL, and Cara.

Speaker 1:

It looks weird, eh, with the knees. I don't know what to do with the knees.

Speaker 2:

No, it's fine. Your knees are gorgeous.

Speaker 3:

You have taste. Don't spread your legs in a way that's meaningful to software people.

Speaker 2:

Hello, don't take the pillow. I'm Bill Gates. I would recommend.

Speaker 1:

TypeScript? Yeah, it writes a lot of code for me, and usually it's slightly wrong, I'm reminded.

Speaker 3:

It's slightly wrong, I'm reminded. It's Rust. Rust, Rust.

Speaker 2:

Rust.

Speaker 3:

This almost makes me happy that I didn't become a supermodel. Cooper and Nettie Well.

Speaker 1:

I'm sorry guys, I don't know what's going on.

Speaker 2:

Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here. Rust, Rust. Data Topics.

Speaker 2:

Welcome to the Data Topics Podcast. Hello and welcome to Data Topics Unplugged, your casual corner of the web where we discuss what's new in data every week, from Apple Intelligence to artists, anything. Today is the... what's the date today? 13th of June of 2024. We're also live streaming on YouTube, LinkedIn, Twitch, X, you name it, we're there, free for you to check us out. Leave a comment. Take a quick look at Bart's knees, he's been boasting about them this morning. I've hidden them. It was too much. It has to be safe for work. Yeah, um, when?

Speaker 2:

you start focusing on your knees too much, they start looking weird. But I feel like that happens when you repeat a word too much, you know? Like, sometimes... actually, that happens to me in Portuguese even, you know? Like, my partner asks, oh, is this how you write this word, how you say this word? And I look at it, yeah, yeah, like, are you sure? And I start really looking at it and I'm like, oh, this looks weird. And what's the weird Portuguese word?

Speaker 1:

for you? Anything, anything. But I think in English it's the same, you know? Like, if you say, like, bowl, bowl, it's weird.

Speaker 2:

No, it's like ball, or you know no, it's just me.

Speaker 1:

What were you doing before this?

Speaker 2:

meeting. I was thinking, I was reflecting. Um, but yeah, my name is Murilo. Uh, together with the one and only Bart. Hi. The man with the handsomest knees in the hood. Now you just made it very awkward. Let's just drop the knees. Well, I can't, it's just right there. Uh, and behind the screens, Alex is with us again. Hello, hello. Um, how y'all doing? How are we doing? I'm doing good. Good. The sun is shining, the sun is shining, that's something new. What did you do last weekend?

Speaker 1:

On Sunday it was really, really sunny. No, it's like, uh, randomly throughout the day it's raining, then sun.

Speaker 2:

Yeah, raining, sun, cold. Yeah, that's true, and sometimes it gets cold. It's a bit weird. It's a bit weird. Sunday was really nice. I know it was also election day here. I'm not going to speak too much on the elections, but, uh, anything funny you did on Sunday aside from, uh, going to vote? It sounds like a very long time ago.

Speaker 1:

That's your age showing, I guess. So is it still crystal clear in your mind what you did last weekend? Yeah?

Speaker 2:

Yeah, yeah, but I'm very, uh... I felt like a real man. I cut the grass, fixed my bike. Oh yeah, oh wow. You know what was wrong with your bike?

Speaker 2:

So there was a flat tire on the electric bike, so there were, like, a few wires I had to also navigate and stuff. But, like, I went to Action first and I wanted to buy another bike tube, right, like the inner tube, but they didn't have it.

Speaker 2:

They just had something to fix the holes, I guess like a kit, like a glue kit, yeah, and I bought it there, but I tried to use that and it didn't work. So I had to go again to the store and get an actual tube, because that just didn't work. So I actually changed the tube twice, but it was just a flat tire. But it makes a difference, because it's an electric bike, I only have one, and there's a bit of a hill going back home, and that hill makes a difference.

Speaker 2:

Electric bike boy. So I'm not wearing shorts today, unlike some other people here at the pod. But if you saw my legs, like, man, super buff now, because, you know, all the manual work on the bicycle. So it's a good exercise. But it was too much? Because you changed the tube? Because I had to cycle on the... on the... Okay, come on. Okay, what happened in data?

Speaker 1:

what?

Speaker 2:

happened. So I think the big thing that happened, uh, is with AI. And, uh, when I say AI, what am I saying here? What do you think I'm saying when I say AI, Bart?

Speaker 1:

Artificial intelligence.

Speaker 2:

Oof, that's a good guess, but no, that's not what I'm talking about. What I'm talking about is Apple Intelligence. Apple Intelligence. Can I get the money sign? The money... there we go. What is Apple Intelligence? It's basically Apple's AI, right? Right, they rebranded it a bit. Um, so I have it on the screen for people following the video. Basically, the long-awaited Apple move on AI this year. It's coming in beta this fall, so it's not available yet. Um, so basically it's a whole suite of new capabilities, new AI things, right? Um, so if you go here, we're also going to link the actual announcement page in the show notes, there are different things that were shown, right? There's a lot of, uh, text stuff, right? So even things like what Grammarly does to help you with your text is going to be offered built in on your Apple products. So MacBook, iPad, iPhone, all the things. Um, they also mentioned here...

Speaker 1:

So improving the text.

Speaker 2:

You've written, basically. Yeah, or actually generating new text, all of it, actually. And what I saw is you can actually ask it to generate text. So if it's an email or something, and you're using the Apple products, you can ask it to generate the text for the email. Or if it's proofreading what you have, it can be summarizing, it can be stuff like what Grammarly does as well, like, oh, maybe this wording needs to be more this and that. They also emphasize the personal context and the cross-app functionality, I guess. So, for example, you can, uh, generate... uh, maybe before I talk about that: they also showed quite a lot of the vision stuff, actually, which I thought was kind of nice.

Speaker 2:

I wasn't expecting that necessarily. But, uh, for example, you can say, create an emoji of Bart, and then you have three, I think, styles, I think you have, like, sketch, animation and something else, and then you create a custom emoji or a custom image of you. And what you can do with those things as well is to go, okay, copy the previous image into Notes, and then you actually open Notes and it kind of has a bit more intelligence on the context and all the things there. So there was a lot of stuff on vision, which I thought was pretty cool as well, and not something I necessarily expected. The notifications also got smarter, so they're not going to just show all the notifications, they're going to kind of assess what are the things you probably want to be notified of.

Speaker 1:

Curious how they will pull this off.

Speaker 2:

Yeah, yeah. So there's quite a lot of stuff. It looks pretty cool. There was a keynote at WWDC, the Worldwide Developers Conference, which I was also watching. A video from Fireship was, uh, just joking that it's not for developers at all, like there's nothing for developers to code, but it's called the, yeah, developer conference. Um, so quite a lot of stuff. The other thing that they mentioned: the deal with OpenAI that we also covered a while ago.

Speaker 1:

Maybe also... yeah, this clearly comes after their deal with OpenAI came to fruition. Yes. So they go one step further, since they publicly signed the deal.

Speaker 2:

Yes, yes, and, uh, actually, the way that they position this is that this is a separate thing. So Siri also got a new intelligence, let's say. It understands more of the context. It's more like, oh, can you tell me what's the weather in Leuven? And then it tells you. Okay, plan me a day trip there, and then it will know that 'there' is Leuven and all these things. But there's an integration with ChatGPT, and it's not part of Siri, right? So actually, if you're using ChatGPT, you will know that you're actually being redirected to ChatGPT, right? So if there's something that Siri doesn't know, I think today what Siri will do is say, hey, these are some answers I found online, and I think after this you will actually be prompted to go to ChatGPT.

Speaker 1:

And it will be explicit. So that means that your initial interactions with Siri are actually not via OpenAI, yeah? Or not via ChatGPT. Is there a difference there?

Speaker 2:

Yes, apparently there is. So it's Siri, it's going to be Siri, and then you can actually use ChatGPT, like, you can ask ChatGPT. So when you're asking Siri, you're asking Siri, and then you can actually redirect stuff to ChatGPT.

Speaker 1:

But Siri will not use OpenAI's APIs directly? Not from what I understood, no. Because ChatGPT is their product, right? Yes, yes, they also have APIs.

Speaker 2:

Exactly. Yeah, yeah, yeah, but, like, so ChatGPT is integrated, but it's not Siri. Okay. So you can tell Siri to ask ChatGPT, even here, right at the bottom. So they actually have the... the very last section, I think, here, uh, is about ChatGPT. So they didn't focus on this, right, they're just talking about how there's a seamless integration, right? So in the image here there's some text, and you can select the text and refine it with ChatGPT. So that's not Apple Intelligence necessarily. Um, I also thought that the fact that they name-dropped ChatGPT was already interesting, because historically I don't remember any keynote where Apple kind of refers to third parties or integrations or something, right? And they even mentioned, like, on their thing...

Speaker 2:

There is, like, um, the leading on AI, ChatGPT, industry leader, blah blah, we have seamless integration. And apparently even, like, GPT-4o, ChatGPT with GPT-4o, will be freely available for Apple users.

Speaker 1:

Oh, nice. So, but I think if they would not have explicitly mentioned ChatGPT or OpenAI, there would have been a lot of questions coming from the community. That is true. I mean, especially after the deal. The approach that they're taking, like we brainstormed a bit about, it makes sense to accelerate and to use OpenAI, at the very least as a temporary solution until they can roll their own.

Speaker 1:

Yeah. Um, but the way that they're doing it is very transparent. Yeah, yeah, yeah. Or at least it looks transparent, right? Like, it looks transparent. Uh, but it was also, um, it was in the news a few days ago that it will be opt-in by default, yeah, which is also interesting. Um, so it's not that you by default start sharing stuff, uh, with, uh, some AI, yeah, you will have to explicitly opt in. They've also gone a bit in depth on how they make sure that your data is treated...

Speaker 2:

Yeah, with privacy.

Speaker 1:

With privacy, which Apple has always taken a very strong stance on, and they make it explicit that, basically, OpenAI doesn't retain any data, that there is no IP address being stored, these types of things, yeah.

Speaker 2:

I think... more on that, the privacy thing, I thought it was also interesting. So maybe the last thing I'll mention before jumping to privacy: it's available freely for any Apple user, with an asterisk, because it's not for... like, I cannot benefit from this, because it's only for the new devices, basically.

Speaker 1:

So if you don't have... I think iPhone 15 and newer, right? 15...

Speaker 2:

Pro and 15 Pro Max, and newer Macs, M1 and later, anything with M1. And I think part of the reason is that in their privacy strategy they try to compute as much as possible on the device, right? But these are big models and stuff, so of course sometimes you need to leverage the cloud. But the idea is that the requests are going to go to Apple's private cloud, and, from what it looked like, the data will be fragmented, like it'll hit different servers, to try to protect it, while keeping as much of it as possible on your device.

Speaker 2:

So if you don't have something like this, M1 or later for the MacBooks and iPads, and iPhone 15 Pro or Pro Max, unfortunately you're out. Which iPhone do you have, Bart? I have a 15... 15 Pro. 15 Pro! So you're going to be, uh, one of the first ones to try it out and give us a first-hand review.

Speaker 1:

Well, Siri will understand my requests and it will still go: really, I did not understand what you mean, can you repeat it?

Speaker 2:

But you talk to Siri in Dutch, right? Yeah. Let's see how it is. Do you think Siri in Dutch is more direct as well, because of the data it was trained on?

Speaker 1:

It's like, it goes like, fuck you, Bart, don't mumble so much. Like, be clear on your pronunciation. I was...

Speaker 2:

I was... I was thinking, maybe Saturday would be good? What do you want? We ain't got all the time in the world here, you know. Um, they need to pay...

Speaker 1:

Uh, they need to pay tokens now to OpenAI. It's going to be, like, very straight to the point.

Speaker 2:

Yeah. Another thing that I thought was interesting, it's a bit... like, I saw from another Fireship video: Apple... no, not Apple, sorry, OpenAI and Microsoft have a deal, so part of the profits from OpenAI also goes to Microsoft, which is a bit... right, it's a bit... So now OpenAI also has its, you know, influence over Apple. Does it mean that Apple profits also go to Microsoft in one way or another?

Speaker 1:

I have no clue how the profits are shared between.

Speaker 2:

Yeah, there's an OpenAI and Microsoft... yeah, there's an article as well that talks about the relationship between OpenAI and Microsoft, but I haven't looked into it. But I was contemplating, and I also heard that...

Speaker 1:

It's a weird dynamic.

Speaker 2:

I also heard, yeah, that Apple was thinking of enabling Gemini, like, Google's AI.

Speaker 1:

Yeah, there have been rumors about it as well.

Speaker 2:

Yeah. So it will be another interesting thing, right? Like, Apple will kind of be opening up more and just kind of say, this and this and this. Um, that would be interesting if they would go that way, like, they don't build their own, but they integrate nicely with third parties. Yeah. And a bit like choosing your browser: choose your LLM. Yeah, indeed, indeed. So yeah, and also maybe, if you have an account, you use your account there.

Speaker 2:

So quite a lot of... I think we also have something from The Verge, right? Like, they're also highlighting some features about Siri.

Speaker 1:

What I find interesting to see going forward is, like, Apple has always been very big on user experience, yeah, and now they're integrating something that is a bit approximate, a bit probabilistic, which is a bit hard to navigate. To see how the user experience will or will not be impacted going forward.

Speaker 2:

Yeah, I think that's a big question I have as well, because right now, the way they're showing everything, it looks really cool and really magical, and Apple has always been... I think the image that Apple brings as a brand identity is that everything is very well polished. But with AI, LLMs, sometimes it's a bit hard to maintain that polish, right? Maybe just an example here, like, this is from The Verge: Apple's open idea is real, and I think it shows here how... I think it's in Siri, right? Actually, you can type to Siri now, you don't have to just talk, right, you can actually type stuff. So it's like an intelligent search bar, in a way, and then you see, oh, do you want me to use that? Yeah, you can do that now. Yeah, I'm not a Siri user, I guess. Uh, there's the Genmoji, like I said, the GenAI, so there's actually quite a lot of GenAI for images as well, which I thought was quite interesting here, so we're also going to link it here.

Speaker 2:

The keynote is actually pretty long. They also announced some stuff with visionOS, like, stuff related to the, uh, Vision Pro, which we're not focusing on, but there's quite a lot of stuff there. Um, let me see, what is another thing I wanted to show... What did they call it? Math Notes, uh. So for people following us on the live stream, I just put up a...

Speaker 2:

There's a YouTube link there, but basically they're showing you can have the calculator. So they're actually saying, we brought AI to the calculator, and at first it just looks like a regular calculator, but then apparently you can tap some buttons and stuff and you can use your Apple Pencil, and then you have almost like a textbook. Okay. You have handwritten numbers and text, and the idea is that Apple will intelligently understand what the numbers mean and it will actually fulfill the calculations you want. So here there's an example: they're doing 96 times 28, and then, as she puts the equals sign and waits a bit, you can see the actual answer there. And then, as you edit the equation, the numbers also update. So I thought that was pretty cool, but then again, this is...

Speaker 1:

It's a bit, like... to bring in the Apple Pencil for this, it's a bit gimmicky, yeah, yeah.

Speaker 2:

It looks cool. It looks cool, but there's more stuff, it doesn't end there. So, I don't know where this is... they also showed, yeah, it's not here, but they also showed that they do graphing stuff. So, for example, you can actually write, like, y equals x something, and then you can actually ask the iPad to compute the graph, basically. Okay, cool. Yeah, I thought it was in this video, but maybe not, but it's fine, we'll link it in the show notes. And then they even showed that you could change... so you can say x is equal to 15 degrees, whatever, and then it will plot the graph, and there are even ways that you can click and drag to change the numbers and see how the graph changes in real time. Okay, nice. So it looks really, really cool. But, again, in the demo they really showed this very nice and polished demo. It just got warm in here now.

Speaker 2:

But I wonder how much of that will translate to the real experience. You know, because, I mean, I've been a student, I've also been a grader for math, uh, papers, and, like, homework and stuff, and it's not easy to understand, like, the logic of the calculations and all the things. And I'm wondering, if I have a hard time as a human, and I know what the problem is and I know what the answer should be, how easy would it be for an iPad AI model to identify these things? And in the examples that they show, everything's super well written, super neatly packed, but I think in reality it's not going to be like this, and I think in reality most people, when they're doing calculations, they don't do it like how they were doing it. And mine wouldn't be legible, that's another problem.

Speaker 1:

So cool, cool indeed. Maybe a last question on this: is this what you were expecting for the big AI move from Apple? Um, it is, well, to some extent, yeah. I think the OpenAI deal hinted towards this, right? I think, uh, I'm curious to see what the user experience will be like for all these things. So I'm very much an Apple user. I have a MacBook, an iPhone, AirPods. Um, but for everything that has something to do with LLMs today, I use things that are outside of that ecosystem.

Speaker 1:

Yeah, so it would be interesting to see if I would scan for my typical workflows where I use LLMs, if this would allow me to just stay within the Apple ecosystem.

Speaker 2:

Do you use Keynote? Apple Keynote? No.

Speaker 1:

I've used it a few times.

Speaker 2:

Yeah, because they also showed some stuff. You can create an image here, you can just create images in Keynote, for example, right, built in, which I thought was nice, right? But, like, I don't use the email client on the MacBook, I don't use Keynote, right? So I'm wondering... these features being there is cool, but also the integration between different things, right? I think on the phone it's easier. It depends really on your workflow, right?

Speaker 2:

Yeah. But I wonder, yeah, for me like going forward, like okay, if I have a MacBook that has all these features, would I? Would I really switch?

Speaker 1:

I was trying to imagine... if they make the mail client better, I would be happy. I use it. You use it? The mail client? Not for all my email accounts, but for some. But, like, it's dumb, like, it doesn't do autocomplete. Like, yeah, if I type Bart, I would assume by now, 2024, it knows that the next thing I write is Bart Smit. Yeah. Like, even something as simple as that is today very hard to get.

Speaker 1:

So yeah, if we can even improve incrementally on all these things, it would be cool to see. The question is, of course, what it does to privacy, because there's a lot of, uh... even though they're transparent on how they're going to do it, there's a lot of discussion on OpenAI being part of this.

Speaker 2:

Yeah, now that they're working more closely there, it's a bit, yeah, need to know, need to know, right. What else? You mentioned ChatGPT. No, we mentioned ChatGPT. But you also mentioned that not everything is in the Apple flow, and here I have something from X, something that you brought, Bart.

Speaker 1:

Yeah, just cool to check out. Uh, it's, uh, it's a post on X by... let me quickly open it, you're also opening it on the screen... by, uh, Dabble, and he recreated, based on Karpathy's design, Andrej Karpathy, uh, a nanoGPT...

Speaker 2:

...in a spreadsheet. A nanoGPT, I guess, is just like a GPT, a generative pre-trained transformer.

Speaker 1:

So it's just that architecture, but scaled down. The transformer architecture in a spreadsheet, yeah. And, uh, if you're interested in what this architecture looks like and how the individual nodes are linked together, it makes it very tangible, yeah.

Speaker 2:

Makes it less scary when it's Excel Makes it less scary.

Speaker 1:

Yeah, you can't use it for training or anything, it's really for educational purposes at this stage. They actually use... because we were discussing the Mac client: they actually use Numbers, not Excel. Numbers. Oh really? It's Apple's spreadsheet app. It's a cool thing, check it out if you're interested.

Speaker 2:

Yeah. Actually, one of the very good explanations of convolutional neural networks was something similar. Also in a spreadsheet. It was also in a spreadsheet, but they were talking about convolutions. It was grayscale, like, they were trying to demonstrate... but basically, the bigger the number, the darker the color. So you kind of mapped the actual numbers to the actual image, and then, when they apply convolutions, you can see how it will look, because the numbers change and the image changes.
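That spreadsheet trick works because a convolution is just a sliding weighted sum over the grid of pixel values. Here's a minimal pure-Python sketch of the idea; the image, the kernel, and all the numbers are illustrative choices of ours, not the actual demo:

```python
# Tiny 2D convolution over a "spreadsheet" of grayscale values.
# In the spreadsheet demo, bigger numbers render as darker cells.

def convolve2d(grid, kernel):
    """Valid-mode 2D convolution (cross-correlation, as CNNs compute it)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(grid) - kh + 1
    out_w = len(grid[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Weighted sum of the kh x kw neighborhood anchored at (i, j).
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += grid[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 4x5 "image": a flat light region on the left, dark region on the right.
image = [
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
]
# A classic vertical-edge-detector kernel.
kernel = [
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
]
print(convolve2d(image, kernel))  # [[0, -27, -27], [0, -27, -27]]
```

Flat regions come out as 0 and the light-to-dark boundary as large-magnitude values, which is exactly the "numbers change and the image changes" effect described above.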

Speaker 2:

It was actually pretty, pretty interesting. And maybe more on ChatGPT: you're actually a ChatGPT user, right, Bart? I am, yes. How much do you rely on it? Uh, sometimes too much, sometimes not enough. That's a very cop-out answer, but okay. Um, why am I asking? Uh, this came, like...

Speaker 1:

Not this article directly, but it came up in my feed. So it says, you're opening it on the screen: how a single ChatGPT mistake cost us ten thousand dollars. Ten thousand plus, plus, plus dollars. Um, so basically the article... Curious. Yeah, the article here is... well, it's very... so, to burst your bubble...

Speaker 2:

It's very clickbaity, basically. So I'm going to TLDR a lot; I read this, but not in as much detail as I would like, necessarily. Basically, they used ChatGPT to refactor a code base, going from... I think now they're using Flask and Pydantic, maybe, they kind of go more into the details here, or SQLModel or SQLAlchemy or something. And basically they were heavily using ChatGPT for this, and then they deployed it at the end of the day, apparently, and then the next day 50 people sent an email saying the wheel is spinning but the subscription hamster is dead.

Speaker 2:

So this was actually something related to the payment... well, not the payment, but actually the users. So whenever people actually wanted to make a purchase, it wasn't working. It took them a long time to get to the root cause. Here they also show why this was a $10,000-plus issue. They also talk about the $10,000 hallucination, and the issue, apparently, is in this line. So when they're creating a new ID, they're using the Python uuid package, uuid4, and they're calling it, and apparently this creates the same ID always, because the call runs once when the default is defined, so you end up with a fixed, deterministic default.

Speaker 1:

They're setting a default value here. Exactly. And this line... this is a Pydantic class.

Speaker 2:

I don't think it's Pydantic, I think it's SQLAlchemy.

Speaker 1:

And they're defining an ID column, and they're saying the default value is this uuid4 string that they're actually creating. They're just setting a value. So I think they go into more depth, right? And so this basically creates, like, users with always the same ID... or it can't create them, because the ID already exists. Probably, yeah. And what happened? Like, why? So...

Speaker 2:

What they were failing to notice is that they were copying over the same issue with the way the ID is generated in all their models. So basically all the models were doing this, and I think they were creating the same ID for all the users. So whenever someone was trying to register, it couldn't complete, because the ID was the same for everyone. Okay. So that's the issue, right? They spent a lot of time fixing the bugs, and it actually took a while to realize what the issue was. So they spent five days, actually, and they lost money, because this was related to the sign-up for monetization, right? So it was a startup, they were almost going for the monetization, like, doing the transition. That's when they decided to use ChatGPT to refactor things and deploy, yeah.
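The failure mode described here is easy to reproduce with just the Python standard library. This is a sketch of the pattern, not the startup's actual code: the buggy version evaluates uuid4() once, when the class is defined, so every row shares one "default" ID; the fix is to hand over the callable so a fresh ID is generated per instance.

```python
import uuid
from dataclasses import dataclass, field

# Buggy pattern: uuid.uuid4() is evaluated ONCE, at class-definition time,
# so the "default" is a single fixed value shared by every instance.
SHARED_DEFAULT = str(uuid.uuid4())  # what `default=uuid4()` effectively does

@dataclass
class BuggyUser:
    id: str = SHARED_DEFAULT

# Fixed pattern: pass a *callable*, so a new ID is generated per instance.
@dataclass
class FixedUser:
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

a, b = BuggyUser(), BuggyUser()
c, d = FixedUser(), FixedUser()
print(a.id == b.id)  # True: every "new" user collides on the same ID
print(c.id == d.id)  # False: each user gets a unique ID, as intended
```

In SQLAlchemy terms the same distinction applies: `Column(..., default=uuid.uuid4)` (the callable) generates a value per insert, while `default=uuid.uuid4()` bakes in whatever single value was produced when the model module was imported, which is exactly the collision described above.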

Speaker 1:

This is a bit... yeah, this is a very long explanation where you would normally go, yeah, I fucked up. But here it's a startup, so they're probably invested, and they say, yeah, we can't just say we fucked up, man. Like, $10,000 is a big fuck-up. We need a very long explanation of how it's ChatGPT's problem. Exactly.

Speaker 2:

And actually, like, so even on the... I skipped this purposely, but they put an edit: I want to preface this by saying, yes, the practices here are bad and embarrassing, and we've fixed that, robust unit and integration tests, alerting, logging... so even, like, logging.

Speaker 1:

I guess they edited it after people called them out. Probably I think so.

Speaker 2:

Yeah, yeah. Could have, should have been avoided. Human errors are beyond anything, and very obvious in hindsight. So I guess what they're...

Speaker 1:

And this is basically... on Hacker News people were going, yeah, this is a bit weird, to blame ChatGPT for this. This is just you fucking up.

Speaker 2:

So here I opened the Hacker News thread, right, and a lot of people... no, a lack of monitoring cost you 10K; your app is throwing database exceptions, alert on it, blah, blah, blah. So basically a lot of people got on them, saying, this is not a ChatGPT issue, this is a you issue, right? Exactly. You didn't add tests, you didn't add logs, you deployed it during the day. It's like, if a linter, or, like, an autocomplete plugin, right, suggests code that has a bug, would you say, oh yeah, this linting tool or this autocomplete plugin from VS Code cost me $10,000? Probably not, right? But just because it's ChatGPT, and now people romanticize ChatGPT like a person, we can blame it on it.

Speaker 1:

Yeah, I think it's really somebody that realized, I fucked up, shit, $10,000, what am I going to do? I can't just say this to my manager. Like, you need to... yeah, ChatGPT, you can't trust it, it hallucinates. And then, and then, like, he goes to his manager, like, yeah, we used this, it's not perfect, it made a mistake, we need to share this with the community, to educate the community so the rest of the people don't make... don't have the same... Don't blame me, it's not me, it's ChatGPT. Yeah, yeah. It's a very, uh, very tough, yeah, very tough one to argue. But yeah, there we go.

Speaker 2:

So our jobs are still... well, no, I'm not going to say much about our jobs, but, uh, I guess it's a real-life anecdote. So what we learned from this is: don't set a static value for an ID. Yeah, that's...

Speaker 2:

That's the nitty-gritty lesson. I think a bit of a broader lesson is: for coding, use ChatGPT as, uh, autocomplete, as an advisor, or as an intelligent autocomplete, but it's not doing your job, right? Like, you still need to... you're still responsible for the code, you still need to review, and you still need to do all these things, right? Well, either that or you build very decent tests.

Speaker 2:

Yeah, true, but I still think you can build a test using ChatGPT the same way you can build it using a linting tool or an autocomplete or something, right? But you're still building it, right? It's still you.

Speaker 1:

It's not the dev and the AI that just kind of go ahead and do all the things for you? Today, it is ChatGPT.

Speaker 2:

No, no, I mean, today it's still you. Yeah, today it's still me. Today it's still us. True, true. Maybe one thing: if you were making changes, Bart, to any project in general, okay, it's a bit of a high-level question: have you ever heard that small PRs are better than big PRs?

Speaker 1:

PRs meaning pull requests? Yeah, I heard that too. Okay, so get on with it.

Speaker 2:

But I came across this article, for one reason or another, about how big a PR should be, and I thought it was interesting to share. Because the article you're sharing here says the ideal PR is 50 lines long. Yeah, which is also a bit opinionated, let's say, a bit random. But I think it adds a bit more context to the idea that smaller PRs are better than big ones. But what is a small PR?

Speaker 1:

To me it's, um, looking a bit from the point of view of the reviewer, the PR reviewer: you have a clear description, to the point, and then it's easy to browse through it and understand the logic. Like, browsing-wise, you look at the changes.

Speaker 2:

Okay, I understand what it did. Yeah, but how much time, for example, should it take you to understand what they did? Five minutes? Not 30 minutes? No, so you really go more on time: a good PR is something that takes you five minutes to review, at max.

Speaker 1:

Yeah, that's maybe a bit of an opinion, but yes. I mean, I think that is the idea; that's also what makes it feasible to have reviewers.

Speaker 2:

Yeah, that's true. But okay, I think I understand what you're saying. But again, if you tell this to a junior developer, they're going to be like: okay, but what takes you five minutes? Is this enough? Is this too small? I thought it was interesting the way they analyzed this as well. They actually have a sample set of thousands of repos, I want to say, and they review all the pull requests on time-to-review, how many pull requests were reverted, how many line comments, and the total code changes over a year. Because the other thing is, they say, if you make something that is too small, if you make a pull request for every line, it's also going to decrease your productivity, right? Because now, instead of opening one PR, you open five for the same amount of lines changed. Yeah, yeah.

Speaker 2:

So, for example, they actually bucket a lot, but basically they say that after one to two thousand lines changed, the review time actually drops. People don't really look at it anymore, exactly, right? So actually, if you just want to get stuff in... It's interesting that they have all this data on it, though.

Speaker 2:

Yeah, that's the thing. That's why I thought so. Here, let me see. Can I take a look? What about 200-line PRs? Oh, they have some numbers: 50-line code changes are reviewed and merged 40% faster than 250-line changes. They're 15% less likely to be reverted than 250-line changes, and have 40% more review comments per line changed. And they're also using medians here, to avoid the outliers. Nice, simple data set. How many repos did they look at? But does it also need to be 50? Like, can it be less? Well, in the end they're a bit more lenient.
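The point about using medians rather than means is that a handful of pathological PRs would otherwise dominate the statistics. A quick sketch, with made-up review times:

```python
from statistics import mean, median

# Hypothetical review times in minutes for a batch of small PRs;
# one pathological outlier (a PR that sat for a week) skews the mean.
review_minutes = [4, 5, 6, 5, 7, 6, 10080]

print(mean(review_minutes))    # dragged far upward by the single outlier
print(median(review_minutes))  # stays representative: 6
```

One forgotten PR that sat for a week makes the mean useless, while the median still reflects the typical review.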

Speaker 2:

They say between 25 and 100.

Speaker 1:

Okay. So I think I need to start adding some more comments to my stuff. Yeah, just like: do this, and then execute this, rename max. Um.

Speaker 2:

And also, they mention that right now the metrics they're trying to optimize are the number of PRs and the review time, and to reduce the revert rate.

Speaker 1:

But they also said that if you care about lines changed, it's actually better to have a bigger PR, because it still takes you less time overall. But I do think the topic is interesting, and especially if you're working in a team context, to have some guidelines on this.

Speaker 2:

Yeah, because this is very concrete, right? Because if you say "five minutes, easy": five minutes for you is not five minutes for me. If it's easy to read, yeah, maybe that's different.

Speaker 1:

I think 50 lines is very concrete. I think, if you have this as a guideline within your team, it's a guideline that is concrete and will be very helpful, and everybody will say: what a fucking stupid guideline. At times everyone's going to be like: fuck, man, this guy. Yeah, but it could help, right?

Speaker 2:

So, also, revert rate per PR size: they see that in the beginning, from zero to 10 lines changed, there's actually a bigger revert rate than between 10 and 20. And then, of course, if you go higher it also has a bigger revert rate. And the reason they give is that the small ones are probably config files: someone changed the config and then reverted it back, right? So they go on and on, and you always kind of see this bell shape. Same with the number of comments: once you get to 2.2 to 5K lines changed, the number of comments also decreases, because people stop reading it. So, as a thought experiment, I thought it was interesting, you know. I mean, I'm not sure how much I would follow it, but at least now, if someone tells me that a smaller PR is better than a big one, I feel like I can color this more, you know. Like: oh, 50 lines.

Speaker 1:

It's very interesting that they have all this data to build their argument. Indeed. Because I have the feeling these types of things are often just the gut feeling of the person

Speaker 2:

describing this.

Speaker 1:

Yeah, it's interesting to see that there's actually some empirical data behind it, indeed.

Speaker 2:

So they did do that. Also, I went through their reasoning and I didn't see any clear flaws in their thinking. I thought it was interesting, and I would be very curious to see this in practice, you know, if you have a team working and people reviewing and stuff. So yeah, just to close it up: I don't remember exactly, but they did mention that between 25 and 100 lines changed is pretty good, but anything more than that, maybe you should think twice. Do you actually work a lot with teams today, Bart?

Speaker 1:

Um, you mean when it comes to coding?

Speaker 2:

Yes, on an ad hoc basis. Well, when you're coding, do you usually do more solo coding?

Speaker 1:

In absolute lines of code? I think yes, and when I'm involved in a team, it's more as a reviewer.

Speaker 2:

Ah, as a reviewer, so you review quite a lot of code.

Speaker 1:

Quite a lot is a bit exaggerated Okay.

Speaker 2:

What kind of code do you review? Readme files.

Speaker 1:

Yesterday, an interface for a GenAI conversational bot. Oh okay, cool.

Speaker 2:

Do you have any quick tips for people? What are the things that, if someone does them, you're like: oh my god? Um, you're putting me a bit on the spot here. That's my job. Uh, I like it, but it's very subjective: when people form a bit of an opinion, like:

Speaker 1:

I'm doing this because I think this is the best practice, because of these and these reasons. And I think you tend to see that in the code as well. But you think that's a good thing, or a bad thing? This is a good thing: that there's a reason why you're designing your code in a certain way. So, people being opinionated about your code design. Yeah, okay. But do you think there is a

Speaker 2:

better, or do you think it's just different? I guess my question is: as long as someone has an opinion, that's good, regardless of what the opinion is? Or do you feel like some people have really bad opinions and they should stop? Um, I think there are always bad opinions, but in general that's very much the minority. But do you prefer someone that has a bad opinion over someone that doesn't have an opinion at all?

Speaker 2:

Depends what your definition of a bad opinion is, then. Yeah, I think it's a bit of a difficult situation, because, as general advice to people, I usually say: just be opinionated, right? Just say this and this and be able to back it up. But I also had moments where I was like: yeah, but this is bad. And then: yeah, but if you think of this... It's like: yeah, but no, this is bad. Okay, maybe you're too opinionated, right? I think it's good to have an opinion, but at the same time, I feel that that much resistance made everything a bit counterproductive as well.

Speaker 2:

And things took way too long because people were too opinionated, you know. So I feel like I kind of take a step back sometimes on my advice, but I'm not sure where to draw the line.

Speaker 1:

I think I can be very opinionated. I think it's good if, when you're in a team setting, a project setting, there are some existing best practices. Yeah, so you have a foundation to start from.

Speaker 2:

Yeah, I think it's a bit analogous to... I try to encourage people to challenge me on decisions. But, at the same time, am I allowed to challenge you

Speaker 1:

on things? Like, you need permission?

Speaker 2:

Okay, we need to have a discussion after this episode. But yeah, sometimes I like people to challenge, but sometimes, if they challenge too much, I'm not sure... I think I maybe need to navigate better. You know, like: this is not the moment to challenge. We decided this, we are going to do this.

Speaker 1:

Yeah, but that needs to happen as well.

Speaker 2:

Yeah, but maybe that's just me. Like, I need to navigate the different contexts, right? The different moments. Say: this is a moment to discuss, this is not a moment to discuss.

Speaker 2:

Let's just kind of move forward. Yeah. But I think, with the community and best practices and linting tools, it's a very good argument to say it's not me, you know. Like: look, PEP 8, it's there. All these things are there, yeah. And maybe, talking about contributing and PR sizes and all the things there: do you contribute to open source a lot, or no?

Speaker 1:

These days not much.

Speaker 2:

But you have contributed to open source.

Speaker 1:

Back in the days when there were still dinosaurs roaming the earth.

Speaker 2:

You know, maybe I'll confess something. Sometimes I feel like it's a bit intimidating to contribute to open source. Maybe it's my insecurities knocking on my door. But sometimes I'm like, man... Most of the times when I do contribute, it's because I'm trying to do something, there's a bug, and then I realize it's a configuration issue or something small, okay. And then I'm like: nah, it's just... I don't know.

Speaker 2:

One time I was trying to install dbt, and I realized that the requirements on the package were too loose. So, for some reason, my Poetry installed an old version of one of the sub-dependencies, and that was breaking my dbt, even though I had just dbt. And I spent a lot of time trying to figure out why this was happening. It was very easy to fix on my project, because I could just pin the dependency to a higher version, right. But then I was wondering: should I make an upstream contribution? And it was going to be a one-line change, okay. And I think in the end, because I was like: okay, now I need to write a merge request, explain why I'm doing this, explain what the issue is... in the end, I didn't do it.
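The failure mode being described, a loose version constraint letting the resolver pick an old, broken sub-dependency, can be sketched with a toy constraint checker. The package versions below are made up, and real resolvers like Poetry's are far more involved; this just shows why the one-line tightening fixes it:

```python
# Toy illustration of why loose version constraints bite.
# Suppose a hypothetical sub-dependency is broken before 1.4.0, but the
# parent package only declares ">=1.0", so a resolver may legally pick 1.0.0.

def parse(version: str) -> tuple:
    """Turn '1.4.0' into (1, 4, 0) so versions compare as tuples."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version: str, minimum: str) -> bool:
    """Check a simple '>=' constraint."""
    return parse(version) >= parse(minimum)

available = ["1.0.0", "1.2.0", "1.4.0"]

# Loose constraint: every version is allowed, including the broken 1.0.0.
loose_ok = [v for v in available if satisfies(v, "1.0")]
# Tightened constraint (the one-line upstream fix): only fixed versions pass.
tight_ok = [v for v in available if satisfies(v, "1.4")]

print(loose_ok)  # ['1.0.0', '1.2.0', '1.4.0']
print(tight_ok)  # ['1.4.0']
```

Pinning the floor in your own project works around it locally, but tightening the constraint upstream fixes it for everyone.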

Speaker 2:

But I also noticed that I have this thought: man, it's one line, you know, is this really valuable? Rationally, yes, it definitely is. But I do notice there's a bit of: ah, that's just me, no one else has this issue, that's a me problem, why am I going out of my way for this? You know, it's such a hassle. And slowly I try to change that. I also think there's a bit of the exposure to the internet, which is: some people are going to ask, why are you doing this? No, I'm not going to do it.

Speaker 2:

You're afraid everyone's going to be behind you: hey, come see this guy, so stupid. I think there's a bit of that. But I try to be strong, I try to be courageous, I try to follow your lead, Bart. I'm like: no, I'm going to do it. Do you think there is such a thing as a too-small contribution to open-source projects?

Speaker 1:

no, it's not relevant.

Speaker 2:

I don't think so either. And, to segue from that, I actually contributed to this, and it got merged. Um, maybe I'll talk about the package first. Is this to get people to follow you on GitHub? No, but feel free to follow me. No, but yeah, maybe talk a bit about the package first, because you know what they say, right? A library a week keeps the mind at peak, or something like that. Ah, yeah. Do I need to... You gotta queue it up. Wait, wait, wait.

Speaker 3:

A library a week keeps the mind at peak, with code so sleek. Functions and features we eagerly embrace. Debugging and building, our knowledge expands. With every new library, we strengthen our hands.

Speaker 2:

It's a hit. It's a hit. It's a hit.

Speaker 1:

We just created it two minutes before we started, with suno.com.

Speaker 2:

Yes, shout-out to Suno. I've been a bit starstruck now. It's to preface our, uh, "library

Speaker 1:

a week keeps the mind at peak" section, exactly. Exactly, really cool.

Speaker 2:

Uh, actually, can you make this available actually for people to listen to afterwards if they want to?

Speaker 1:

Yeah, what, Spotify? Yeah, we should do it under your name. Your artist name. My artist name, okay. So what is the library you chose? It's called Sli...

Speaker 2:

It's called Slidev. It's not a new library, but it's something that I actually use quite a bit. It's my go-to. Do you pronounce it Slee-dev or Sly-dev? Sly-dev, I think, probably, you know, because it's "slides dev".

Speaker 2:

Okay, what does it do? If you're familiar with Reveal.js, or any JavaScript framework to create slides, this is similar, but it goes from Markdown. So basically you have some Markdown text, and from that it will create a slide for you, right? Usually you present it in the browser. There's actually quite a lot of stuff; it's actually pretty cool. Can I change this now? No, I can't. It has quite a lot of features: you can draw on it, you can sync with other devices. It's really cool, and this is actually my go-to today if I need to present something quick, for reasons that I'm not necessarily going to discuss today.

Speaker 2:

But on upgrading versions, I noticed that when I was trying to put images in, overlay images, for some reason the images were being cropped. So I spent quite a lot of time trying to figure out why this was happening, and it turns out it was one line: a CSS class they had added that basically set an overflow: hidden property in CSS, right? So images that fall over the boundary they're supposed to be in get cropped out, they get hidden. And this was the only thing I needed to change, and I was debating again: it's a one-line change, right? It was there for some reason, but I removed it. I was debating a bit on whether I should contribute it or not, so I changed it on my side first, I actually had a quick fix here, right? Um, and I opened the pull request, like: okay, whatever. And it actually got merged, so I'm officially a contributor now. To Slidev. Wow, congrats. You have to...

Speaker 2:

Uh, you know. Can I get an applause, please? No, it's fine. Stop. I'll just do my job.

Speaker 1:

Really, stop, it's fine, it's fine. Were you, in this case, also hesitant a bit? Yeah.

Speaker 2:

A bit, because, like... and I also tried to follow the... It looks like...

Speaker 1:

So, for the people that don't see it: the first line is "thanks a lot for this project". Murilo says: yeah, it's really my go-to tool these days, I really, really love the project. And then he goes on about what the PR is. I really need to make sure that I get them hyped up to merge this.

Speaker 2:

I love you. Are you single?

Speaker 1:

No. But actually, I heard that... Also, I have a question. So you removed this overflow-hidden class, which probably maps to some CSS that says overflow: hidden. But there is also this existing CSS that says overflow: visible !important.

Speaker 2:

This is what I had.

Speaker 1:

Ah, you patched it yourself. I had that myself, because that was an issue. Never mind my question, I understand.

Speaker 2:

But basically it was just that one line, nothing to change really. And I was like: okay, maybe I'll just do this. And then the maintainer was like: can you elaborate more on why this is needed? I was like: okay, should I try to create screenshots, or reproducible use cases? But then I just put this, and I answered.

Speaker 1:

And then it just kind of stayed like that for a while. Then he said: hey, actually, we also did this change on our side, we don't need this anymore. So in the end it got merged. So, congrats. I know, I know. How many of these contributions did you do? Because you have a few of your own open-source projects, but to other open-source projects? A handful, a handful. Max five, it seems.

Speaker 2:

Yeah, less than five. But I think the biggest one that I've done... and maybe I can actually show it. The thing is, if you want to do more than small things, it takes time to actually collaborate on an external project, right?

Speaker 1:

right, I think it's like, I think by default, it's like you do it, like you use a tool and you notice, okay, something's wrong.

Speaker 2:

And because I need it, I'm going to quickly do this change. And that's the thing for me: a lot of times these changes are small, but what I'm trying to re-educate myself on is that it's still very useful, right? True, yeah, definitely. There are a lot of people using it, and maybe only five percent of people hit this, but it's still relevant, right? And a lot of people are not even going to raise an issue or anything, just like I almost didn't. I think this is the biggest one; this is also something I contributed to. It's called Reflex. It used to be called Pinecone. It's like a Python transpiler to Next.js for front-end stuff.

Speaker 1:

Did they change the name because of the vector database that's also called Pinecone?

Speaker 2:

Probably. But for this one I actually did a broader contribution. There was something missing, really, and I actually wrote more code. It wasn't just a one-line change. What

Speaker 3:

did you build?

Speaker 2:

It was because their Markdown wasn't formatted right. If you actually wanted to have lists, the indentation was weird, and that's because they were using a different class. Let's see if I can find myself here.

Speaker 1:

Are you a big Markdown user? This is also a Markdown thing.

Speaker 2:

Yeah, this is a Markdown thing. I don't know... you mean in the slides?

Speaker 1:

If you also write Markdown, like, are you a big Markdown user? I think so.

Speaker 2:

I mean, if we need to have something quick... This is for the Python user group, actually. This website that we built for the Python user group in Belgium is built using Reflex, and the actual text we're sourcing from the readme on GitHub, right, which is all Markdown. So that's why I was also using it. Cool. But yeah, why wouldn't you use Markdown?

Speaker 1:

No, I'm a big Markdown fan myself. Maybe a Markdown-related tool to shout out, and I might be perceived as old when I say this: Pandoc.

Speaker 2:

Ah, you actually had some stuff with.

Speaker 1:

Pandoc. No, at some point I made a plugin for it. But Pandoc is like: you have any type of file with text, and I use it very often with Markdown, and you can convert that to basically anything. HTML slides is one of them, PDFs via LaTeX is one of them. But you can also just write your book in Markdown and convert it to LaTeX, or to HTML, or an EPUB version. Is it good, the conversion?

Speaker 2:

It's very... The conversion, it's very good.

Speaker 1:

Oh, really? It's very mature. What was your Pandoc plugin? It's been a few years, actually, but it allowed you to have a code block of Python within your Markdown file that it would actually execute, and then show the results of that. So that could be a text output, but it could also be a graph or something.
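The plugin idea described here, pulling a fenced Python block out of a Markdown document, running it, and splicing the output back in underneath, can be sketched in a few lines. This is a toy version, not the actual plugin, and it uses `exec` with no sandboxing, so it should only ever be run on documents you trust:

```python
import io
import re
import contextlib

def run_python_blocks(markdown: str) -> str:
    """Execute each ```python fenced block and append its stdout below it."""
    pattern = re.compile(r"```python\n(.*?)```", re.DOTALL)

    def execute(match: re.Match) -> str:
        code = match.group(1)
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):  # capture print() output
            exec(code, {})  # WARNING: runs arbitrary code; trusted input only
        return match.group(0) + "\nOutput:\n```\n" + buffer.getvalue() + "```"

    return pattern.sub(execute, markdown)

doc = "# Demo\n\n```python\nprint(2 + 2)\n```\n"
print(run_python_blocks(doc))
```

A real Pandoc filter would operate on Pandoc's AST rather than on raw text, but the core loop is the same: find code blocks, execute, insert the results.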

Speaker 2:

I see, so it would actually show the output just underneath the code. Yeah, exactly. Oh, this is really cool. This is the project, right? Yeah, and you have a contributor here as well. Last commit: two years ago. Okay. Do you... I have this as well, that sometimes I write something and then I lose interest, and it just kind of... I always have this.

Speaker 1:

This is probably... I typically work on these open-source things, and it's not necessarily because it's open source, but I create something because I'm using it in my workflow. And then the next step is: oh, I'm going to share it with the community, maybe someone else can use it. But from the moment that your workflow changes and you're not using it anymore, it gets abandoned.

Speaker 2:

Yeah, and I think that's the... yeah. I mean, I also admire the people that are really invested in open source long-term, because it takes a lot of discipline, in my eyes. It's because of those people that we actually have the ecosystem, you know.

Speaker 2:

But I think also, like you said, contributing to a project that you don't necessarily use is not something you need, and it also takes a lot of time to get up to speed with the code base, to really understand it, make the changes, add the tests and all the things, right. But it's a soft goal of mine to contribute more to these things. I think it would be nice. Alrighty, this is too much non-AI stuff, Bart, I think. Maybe we should go back to the AI things, because Adobe overhauls its terms of service to say it won't train AI on customers' work. What is this about?

Speaker 1:

So Adobe released their new terms of service, I want to say somewhere last week, where there was a bit of vague terminology. And that was perceived, or at least interpreted, as: Adobe has access to all your content, which it still has, even with the overhauled terms of service, and it will use this content to train AI

Speaker 3:

AI models. Okay.

Speaker 1:

Which, I think, were the headlines that you saw. I actually read it myself last week, and it does say Adobe has access to all your content, which is weird, even content that is under NDA. I guess technically they always had access.

Speaker 1:

It's in their cloud, but apparently they can really view it, like open it. Oh wow. But what the terms of service actually said, and now they made it more explicit, is that they use that access for content moderation, so for images that should not be on there, these types of things. Okay, but then this "not training AI models on customers' work"? I think the fear people had was that everything that is in my Adobe cloud

Speaker 1:

Yeah. Can be used by Adobe to train their Firefly model, which is their GenAI model. Yeah. And they made it more explicit now, with their revised terms of service, that it won't be. Do you see a security risk there as well? Like, if you have something...

Speaker 2:

I mean, I'm wondering: for example, if it was text, right, and you have an API key or something that gets leaked, that is a clear privacy risk, right? Like, I don't know... we talked about Warp before, and they have Warp AI.

Speaker 1:

If they used your data to train, maybe their autocomplete suggests someone else's AWS keys. Do you see... What you would hope, I think, with any cloud service, is that it gets encrypted and only you, as the user, can open it. Which is not the case for Adobe.

Speaker 2:

Yeah, sure.

Speaker 1:

You're an Adobe AI user? I am an Adobe user, yeah. Well, sometimes an Adobe AI user.

Speaker 2:

Does it concern you at all?

Speaker 1:

Well, yes, it concerns me. I think it's weird that people have access to your local files, to the designs that you create. But you're not going to stop using Adobe. The problem with Adobe is that it's so big; you cannot work in the creative space without Adobe. Yeah, they really, really have a monopoly.

Speaker 1:

And you can say, for individual apps like Photoshop, there is an alternative; for Illustrator, there is an alternative. You have alternatives for each app, but not for that ecosystem of apps that you typically use together.

Speaker 2:

Yeah, yeah, and it's very polished, right? Adobe products are very well polished, no?

Speaker 1:

I think opinions differ on that, depending on who you talk to about this, but feature-wise they're very good. Yeah, yeah.

Speaker 2:

Cool. Okay, one to keep an eye on, then. Yes. And who else is keeping an eye out? Artists. What is this about? Artists are fleeing Instagram to keep their work out of Meta's AI.

Speaker 1:

Yeah. So Meta announced, if I'm not completely mistaken, that starting somewhere end of June or July, they can use all your public posts on Meta's platforms, which is Facebook, Instagram, maybe even WhatsApp groups, I don't know, to train their AI model. Meta said that? Meta said that. And what you see is that a lot of artists use Instagram a bit as their portfolio, which basically means that having their portfolio up on Instagram gives Meta access to their art to train Meta's model. And what you see now, since a week or two, is a huge growth of Cara. I think it's cara.app, C-A-R-A dot app, I'm not 100% sure, which is an alternative to Instagram for artists to show their portfolio. Didn't we talk about Cara, or

Speaker 1:

did I read it in an article? Maybe you read it. I think I read it, but I don't remember the context. It would be interesting to see how much they grew.

Speaker 1:

I think they grew in a week to 600,000 users, I want to say; I'm not exactly sure about the numbers. And a funny thing... no, it's actually not funny, it's a bit sad. It's unrelated to Meta, but it's related to their huge growth: the creator of Cara, I forgot her name, got a bill from Vercel of almost 100,000 euros.

Speaker 2:

That's what I saw, yeah.

Speaker 1:

Because the app grew that quickly and had to scale to huge amounts, yeah. They got a huge bill, and Vercel is now working with the creator of Cara to see how to go forward. They say that they tried to notify her, but that she was very busy making sure that the platform could handle everything. Yeah, that's true.

Speaker 2:

But yeah, in this case... there's also a TechCrunch article, maybe we can link that too. It grew very quickly. It was using serverless on Vercel, and serverless basically just scales really well, right? Which is good for you. But she didn't put caps or alerts or monitoring on the billing, and then she had a...

Speaker 1:

Her bill basically exploded. And it's easy to say: yeah, but you need to have monitoring and notifications in place. But it grew so quickly; within a week there were 600,000 new onboarders. I think your first priority is to make sure that the platform keeps working. Yeah. And then three, four days later, you see a bill of a hundred thousand, and you go: oh shit, I should have done that. It's easy to say she could have seen it coming, but yeah, you struggle with the unexpected.

Speaker 2:

Yeah, it's crazy. Yeah, what's the name of the...

Speaker 2:

...the name of the creator? Jingna Zhang. Let me note that one also. And I think, yeah, because I remember I also glanced at this story, and she's also suing Google, yeah, for allegedly using her copyrighted work. Oh wow. To train Imagen. So yeah, she's a big advocate, I guess, for art, against GenAI art and all these things. Cool. I'm also wondering... So there's this parallel: Instagram being the portfolio of artists, and now artists leaving. And GitHub is kind of the portfolio for developers, and it's well known that GitHub uses the code to train, but there's no friction at all there. No one's fleeing GitHub because they're using it to train AI. Do you think that's because the AI tools are giving back to developers? Do you think it's that you don't feel like you're being cheated as much if they're taking your code, because developers copy code, they're known to copy code from the internet and from each other? Or do you think my knowledge is just flawed?

Speaker 1:

That's a good question. Interesting parallel to make, I think, with using Copilot from GitHub, which is trained on open-source projects, right? I think, unless you're building a competing project... but that's typically not the case. You're going to see code generated in a certain style, and maybe you're going to say: yeah, I also write my code this way. But it's still very vague, right? Because a lot of people write their code like Murilo writes it. Not like me, not like me. But I think when you're an artist, with your art, you have a very specific style.

Speaker 1:

And suddenly you see GenAI-generated art in your style. That creates much more friction, or the feeling that this is unfair. Yeah. Because it's really the end product; the style in which you write code is not your end product. I see what you're saying.

Speaker 2:

So it's like you're saying that code is a means to get something else done, yeah, and art, that something else, is the end product. Do you think that's also a bit of the identity of an artist?

Speaker 1:

There's more attached to it than the developer with code, potentially. But I think the friction is, like you said, that it's the end product, okay, and it feels unfair if someone mimics it, not because of their own effort that they're putting into it, but because they used something that was trained on your stuff. Yeah, I see what you're saying.

Speaker 2:

It's like I'm basically taking what you did, yeah, what is your end product?

Speaker 1:

Yeah, and I'm mimicking it based on what you did and taking the credit and the goodies, so to speak, from you. And I think, say you're an artist, you're famous for your style. If we would not have GenAI, you could still be an inspiration for other people, and they might incorporate your style to some extent, but it requires them to practice for years to get to a level where they can be inspired by your style. Right? Yeah, yeah, it's a different feeling. In that case, you're probably going to be proud. Yeah, there's also a level, like a mastery, you need to achieve to be able to do these things.

Speaker 1:

And now someone that cannot do anything, cannot draw at all, can say: generate this in the style of Murilo. Yeah, and it doesn't feel... I'd benefit from it, if that were even an option, to design something else in my style.

Speaker 2:

I'd be pretty flattered if someone actually wants to copy my style. Yeah, but you're not living off it.

Speaker 1:

I'm not an artist. No, no, I'm not an artist.

Speaker 2:

Well, yeah, it's a.

Speaker 1:

It's a good point. And I think, so, to me the major difficulty there: I think Meta or OpenAI or other companies that are building these GenAI image generation models, they are not necessarily nefarious in this. I think they would be open to contributing something back to the original artist, going out on a limb here. But the problem is that today the method does not exist to have a hard link back: I have this new output, it is based on a style by Murilo, but it goes through this major black-box system, yeah, and you don't really know.

Speaker 2:

Like, was Murilo's original art part of it? How much of it?

Speaker 1:

If at all. Right, yeah. And I think that mechanism doesn't exist now, but we need it to some extent to be able to do attribution back to artists. Yeah, I think, from that moment onwards, you're in a completely different scenario. Yeah, yeah, that's true. And from that moment onwards, it also becomes possible for people to opt out, because you actually know what to block.

Speaker 2:

True, but do you think that if it were possible, let's imagine there is a financial contribution, or even just an acknowledgement, right, like: seven percent of this was contributed by Bart's art. Do you think artists would be less resistant?

Speaker 1:

To GenAI?

Speaker 1:

I think so, I think so. But I think there will still be people that say no. And then you can make the parallel with the music industry, yeah, where I can use a sample of something that you made and use it in my music, but it will be attributed to you, and I have a deal with you where I pay X percent to you. Sometimes it's in a contract, sometimes it's just Spotify that distributes it, for example. Um, but I also have the option to say no, I don't want that, and I'm not going to allow you to use it. So there is this transparency that you used something. Yeah, it's true. Yeah, I think that's it.

Speaker 1:

That's it. Like, the economic model for that does not exist now, because we do not have the mechanism to make it transparent.

Speaker 2:

It's all a black box, that LLM. Yeah, that's true. That's the intentionality or, like, the transparency, I guess.

Speaker 1:

I think it makes a big difference. So, if you know a way to build that... I'll think about it, give me, like, until next week.

Speaker 2:

But I also think that in terms of code there's a bit of a different approach as well. People also expect, in a way, that you're going to copy-paste code, that you're going to do this, that you're going to do that. Like, I think if you put something out there, it's kind of expected that people are going to copy it.

Speaker 1:

In a way, yeah. Maybe, like, a few weeks from now you will see somewhere that someone has removed class overflow hidden, and you're going to go: god damn it, they trained on my code. AI just does that.

Speaker 2:

Yeah, good luck with that. You know, my changes are very punctual, very slim. Alrighty, cool. This week was a big week in AI, I think mainly on the Apple stuff as well. Always, always AI, huh? Always, always AI.

Speaker 1:

Maybe worth mentioning, there's also been a big investment round for Mistral: 600 million, if I'm not mistaken.

Speaker 2:

True. So we have a lot of things moving. Let's see, like every week, every week, and we'll be here next week. Thanks to whoever's been following us. Thanks, Bart. Thank you. Thanks, Murilo. Thanks, Alex. Thank you. Can we hear that sweet, sweet outro that you had for the library week?

Speaker 1:

You want the same, or...?

Speaker 2:

The alternative version? I'm going to put the alternative version. I'll leave it up to you. Okay, okay. I really love it, and then maybe people can let us know which one they like best. Bart wants to do a video clip of that too, so pick one, and Bart will dress up and do the whole thing.

Speaker 1:

Let's do it. The library week. This is going to be the outro. All right, hit it.

Speaker 3:

Can you break dance? Funny enough, now we have the video. From my contra boogies and JavaScript breaks, new functions and features we eagerly embrace, keep debugging and building.

Speaker 1:

Our knowledge expands with every new library we've got in our hands. Ciao! Library boogie, boogie, boogie, boogie. Library.
