DataTopics Unplugged: All Things Data, AI & Tech

#82 AI Cracked a Decades-Old Science Problem & Europe’s Push for Digital Sovereignty, The Secret Behind MCP, and The Cloud Hack That Slashes Costs By 62%.

DataTopics


Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.

Dive into conversations that should flow as smoothly as your morning coffee (but don’t), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data—unplugged style!

In this episode:

And finally, a big announcement—DataTopics Unplugged is evolving! Stay tuned for an updated format and a fresh take on tech discussions. 

Speaker 1:

You have taste in a way that's meaningful to software people.

Speaker 2:

Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here Rust, rust.

Speaker 1:

This almost makes me happy that I didn't become a supermodel.

Speaker 2:

Kubernetes boy. I'm sorry guys, I don't know what's going on.

Speaker 1:

Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.

Speaker 2:

Rust. Rust. Data Topics. Welcome to the Data Topics Podcast.

Speaker 1:

Welcome to the Data Topics Podcast. Hello and welcome to Data Topics Unplugged, your casual corner of the web where we discuss what's new in data every week, from the EU to superbugs, everything goes. Feel free to check us out on YouTube. I think we're still putting stuff there, right, Alex? Yeah. Feel free to leave a comment or question, feel free to reach us. Today is the day, the 18th of March 2025. My name is Murilo and I'm joined by Bart. Good day, good day. And Alex. Hello, hello.

Speaker 1:

Um, it is a good day, isn't it? Sunny, still a bit cold, but sunny. I can live with this.

Speaker 2:

Sunny is good. Yeah, yeah, I can default to sunny.

Speaker 1:

Yeah, but I don't mind the cold as much, I find. It's the grayness that gets me, and the lack of sunlight. Ah, that gets me too, but it's hard to say that as a Brazilian, you know. Oh, I can't believe you're saying this. I think everybody likes to leave the winter gloom behind them, right? Yeah, well, in Brazil there's not a lot of winter gloom. Actually, I noticed once I left Brazil, you realize that winter exists.

Speaker 1:

I realized that there are actually four seasons. But also the "oh, it's nice out, let's do something outside", letting the weather dictate your schedule, that never happened in Brazil. It was never like, oh, it's nice, let's spend the day outside.

Speaker 2:

How constant is the temperature, like in Brazil, across the year?

Speaker 1:

I don't know, Like I don't know, it gets warmer, Like I.

Speaker 2:

I mean it can get to 40 degrees in the summer, peak summer, coldest, I'd say maybe 15. Do the trees lose their leaves?

Speaker 1:

No, you don't really see the gray, the brown leaves and all these things. But I think it's more that you get so many more warm, sunny days that it's never like, oh, we need to capture this day because we don't know when we'll next get one. It's like, oh, next week we'll have more, or if not next week, then... We got spoiled. Yeah, I think so. I think that's why you don't see the value in it anymore. There's a joke.

Speaker 2:

About the people from the north. Now at least you enjoy the sun. Yes, that's true.

Speaker 1:

That's true, that's true.

Speaker 2:

I feel like you appreciate it more, you know. Too much of a good thing is not a good thing. Yeah, maybe. Wow, that's a quote, just wow.

Speaker 1:

Okay, can we have the bell, maybe? I don't know, I feel like we needed something. It's okay, it's been a few weeks. Alex lost track of the sounds. Yeah, she's like, what is this, what are these buttons? But so, yeah, it's been a few weeks, so we have some stuff to discuss. A lot of stuff happened, actually. I think Claude was released, a lot of these things that our listeners are probably already in the loop on, because it's been three weeks since we last recorded, I want to say. Yes, we spent two weeks...

Speaker 1:

Two weeks we had a... yeah, we had, I had excuses.

Speaker 2:

Yeah, that's what it came down to.

Speaker 1:

Yeah, we had excuses. We have more updates later, but we'll leave it at that for now. But what do we have then? What has happened in the recent past?

Speaker 2:

A lot happened, right? But I think we'll focus a bit more on the end of last week and the beginning of this week. Yes, so we have here: OpenAI asks the White House for relief from state AI rules.

Speaker 1:

What is this about, Bart?

Speaker 2:

This is from March 13th, so five days ago. Yeah, I thought it was actually more recent. I haven't gone through it in a lot of detail, and I'm saying that because I don't fully understand some of the wording in it. It's an article on Yahoo Finance, and it basically says that OpenAI went to the US government to ask for a sort of dispensation, to not have to comply with all state rules, and in exchange for having this waived, they will share a lot more information and potentially also usage. And that's what I'm not exactly clear on: what is this extra information about models? What is the, quote unquote, free usage that they will open up at the federal level?

Speaker 2:

But their reasoning, I think that's very political, and we won't go into the politics of all this. The reasoning is a bit like: there is today no clear guideline on what AI policy is in the States, meaning that at state level you have a lot of policies starting to be implemented, and it's very hard for them to comply with all of these different variants.

Speaker 2:

And what they're saying is: we're in a political climate where we need to move fast, and I think they're very much jumping on the "ah, DeepSeek is Chinese, so we need to move fast". And "fast", they translate to: let's not try to comply with all regulations in all states, but let's try to have an agreement at the federal level on what the minimum requirements are we need to adhere to. And in return for simplifying this for us, we give easy access, a lot of transparency on what we're actually doing, to the federal level.

Speaker 1:

Yeah. Do you think there's like a mini crisis, I don't know how much of a crisis it is, in OpenAI? Because I feel like... they were going to be for-profit, and DeepSeek came along, and I saw a lot of statements from OpenAI as well, saying we shouldn't trust it because it's government this and government that. I feel like before they seemed to be more in the lead, like: we're just worrying about what we're doing, we are ahead of everyone, no one's going to catch up with us. Actually, there was a statement from, I think, Sam Altman, that it was hopeless to compete with them. And I feel like now they're appealing more to, yeah, this flexibility from the state. I don't know. Would you still say that OpenAI is still in the lead?

Speaker 2:

Do you think they are? That's a difficult question. I don't have that much insight. I think the DeepSeek example, and now the Manus one, it took everybody a bit by surprise, and that creates a bit of an uncertain platform to stand on. If suddenly there is someone competing that you didn't see coming at all, I think that's difficult. So that probably does create some turmoil.

Speaker 1:

The timing was also like: they also announced they're going to have so much money invested in AI, and then it's like, well, actually we don't need this much money to get good enough performance. So that's a bit... yeah.

Speaker 2:

I don't really believe in the whole "you don't need these investments" thing. You will still need significant investments to have advancements, and I think the claim that DeepSeek was trained on a very limited budget is also very much an overreach; there was a lot of budget behind DeepSeek. But I think the bigger challenge maybe for OpenAI is that it's been hyped up a lot.

Speaker 2:

It's been very much the most performant, let's say, general-purpose LLM provider out there. But I have the feeling that in the last year it has made a lot of claims about going towards AGI, superagency, whatever, and I don't really have the feeling we're seeing this now. So either they have something behind the scenes that will surprise us all, or the claims have been a bit overreaching.

Speaker 2:

If you look at the Gen AI-based coding space, none of the leaderboards today are being dominated by OpenAI, right? It's literally Claude, well, it's Anthropic, and in some cases it's DeepSeek. And OpenAI trails somewhere behind. That is the reality of today. We waited, I don't know, I want to say two years for Sora, the video generation, and it's meh, to say the least. So let's see.

Speaker 1:

So let's see I think the the feeling I have is like since 01 they made some upgrades, but it wasn't like, because I also feel like before it was like every time opening I came and released something was like wow, wow, wow exactly.

Speaker 1:

And then since 01 is like yeah, okay yeah, um, I think since last time we talked so again, it was like wow, wow, wow, exactly. And then since 2001,. It's like yeah, okay, yeah, I think, since last time we talked so again it was a bit a while ago, actually, maybe, I'm not sure. But, GPT 4.5 came out.

Speaker 2:

And Deep Research, the functionality, as well, right?

Speaker 1:

That's true, OpenAI Deep Research. I don't think we talked about it yet. I'm not sure either. But if we didn't: what is OpenAI Deep Research?

Speaker 2:

So, OpenAI Deep Research. And if you want to test this out and you don't have a paid OpenAI account, you can also check out Perplexity's Deep Research, because it comes down to a bit of the same thing. In my feeling, OpenAI's is a little bit more performant. What it basically does is: you have a research question, and then it starts querying a lot of different sources to get to a cohesive research report that answers your question. That's a bit of it.
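The loop Bart describes, fan out queries to many sources, then stitch the findings into a cited report, can be sketched in a few lines. This is a toy illustration, not OpenAI's or Perplexity's actual pipeline: `search` and `synthesize` are stand-ins (real systems use web search and an LLM for each step), and the corpus and URLs are invented.

```python
# Minimal sketch of a "deep research" loop. search() and synthesize() are
# hypothetical stand-ins for a real web-search tool and an LLM summarizer.

def search(query):
    # Stand-in for a real search step; returns (snippet, url) pairs.
    corpus = {
        "mlops platforms europe": [
            ("Hopsworks is a European MLOps platform.", "https://example.com/a"),
            ("Several EU clouds offer managed ML services.", "https://example.com/b"),
        ],
    }
    return corpus.get(query, [])

def synthesize(question, findings):
    # Stand-in for the LLM step: stitch snippets into a report with
    # per-statement references, like the citations discussed below.
    lines = [f"Question: {question}"]
    for i, (snippet, url) in enumerate(findings, 1):
        lines.append(f"{i}. {snippet} [source: {url}]")
    return "\n".join(lines)

def deep_research(question, queries):
    # Fan out over several queries, collect everything, then synthesize.
    findings = []
    for q in queries:
        findings.extend(search(q))
    return synthesize(question, findings)

report = deep_research(
    "Map the European MLOps landscape",
    ["mlops platforms europe"],
)
print(report)
```

The point of the structure is that each statement in the output carries its own source link, which is exactly the part the hosts compare between the OpenAI and Perplexity versions.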

Speaker 2:

I've tested it a few times. I have a paid ChatGPT account, and then you can...

Speaker 2:

You can do this, I don't know, 10 or 20 times, and then you're locked out for the month, something like that. Oh, really? Yeah, okay. But it is quite impressive. I've asked it a few times things like: make a competitive map of this and this industry, in that geographical region, with these and these constraints, and you get a good answer. Is it the conclusive answer?

Speaker 1:

Probably not, but an answer that would have taken me a lot of manual research to build myself. But I guess the question for me is how you compare that to Perplexity, because with Perplexity you sign up, but then you can also use the feature for free.

Speaker 2:

Yeah, but they also nudge you to go to Pro; it's not unlimited. But, for example, and these are very much details, on the OpenAI one, the ChatGPT Deep Research one, the references are typically very correct. There's a statement, part of the answer, and there's a reference; you click the reference and you read the full documentation of where it comes from.

Speaker 2:

And it can go from web pages to research papers, whatever. Perplexity is the same concept, but sometimes the references don't really match up with the statement. It's linked to the text somehow, but you need to reconstruct it a little bit; it's much less precise.

Speaker 1:

That's also something I find with Perplexity. Sometimes I ask a question and it gives me an answer, and then I click on the source and it's a blog post that doesn't really give the information that I want.

Speaker 2:

But if you then go like three sentences lower, there's no reference where there actually should be one. Yeah, that's the one. Yeah, indeed, indeed. True, true.

Speaker 1:

But I mean, I think the question that I have, at least, is: is it worth it? Because the Pro subscription of ChatGPT, of OpenAI, is not cheap, right? It's like a couple hundred. Yeah, that is very... No, I don't have that one.

Speaker 2:

You don't have that one. You mean the, I don't know, it's called Pro? I'm not sure about that, I'm not sure what the tiers are called. You mean the very expensive one, I think.

Speaker 1:

Then you have unlimited access, yeah. Because is GPT-4.5 available for the Pro one, or is it available for the not-Pro one, let's say? Because I think GPT-4.5 is not available for everyone, is it?

Speaker 1:

To be honest, I don't know exactly. Yeah, well, I don't know either. But I think this criticism comes up a lot, and this is from another article, it's a hot take. Do you have the hot, hot, hot? Oh, hot, hot, hot. A hot take from Gary Marcus: "Hot take: GPT-4.5 is a nothing burger." I'm not going to go through the whole article here, but basically a nothing burger is something that gets a lot of attention but which, upon close examination, is revealed to have little or no real significance, and he basically says, yeah, GPT-4.5 is a nothing burger. He says there are a lot of explanations, people trying to say why others aren't seeing the performance boost they expect, but he thinks it's just that they trained on more data, with more resources, and he doesn't feel like it's something we should really focus on.

Speaker 1:

And again, these are comments I've heard or seen in different places on the internet, on social media and whatnot, and I think that, coupled with the subscription and how they want people to pay for more stuff, is what puts the question mark in people's heads, you know: is this worth it? Also when you see the competitors. I mean, I agree with you that a lot of these things are a bit exaggerated, like the DeepSeek investment story. But yeah, I don't know.

Speaker 1:

I have a few question marks when it comes to OpenAI going forward, because I also see things from the other competitors, like Anthropic with the Model Context Protocol, which we'll talk about more later; I have an article there as well that got a lot of attention. And I don't know, I have a feeling that OpenAI is falling behind in a way. Of course, it's still a huge, huge company, but we're so used to them always being at the forefront of everything, always wowing us, always saying, ah yeah, we should really follow these guys, we should really do this. And I guess maybe the absence of that makes me feel like they're losing momentum.

Speaker 2:

Excuse me.

Speaker 1:

Would you say that GPT-4.5... Actually, have you tried

Speaker 2:

GPT-4.5?

Speaker 1:

No, I haven't tried it. They also say they're going to announce the...

Speaker 2:

Is it available in...? Because using it via the API is super expensive.

Speaker 1:

Yeah, I think so. But that's also why they're saying it's super expensive but not much better, and that's the biggest criticism. You can find some things here and there where it's a bit better, but that's it. He also points out in this article that Sam Altman was way more cautious when he announced this. He wasn't like, AGI and this and that. He was like, this is better, but it's a bit expensive, et cetera, et cetera. So I guess maybe they're less bold, it seems.

Speaker 2:

I see, yeah. Oh yeah, I just checked it out. So I have a paid account, and it is available there, as a research preview. "Good for writing and exploring ideas."

Speaker 1:

I think that's what he says on the tweet. Sam Altman said that it feels a bit more human, a bit more conversational. I don't know, what else do we have?

Speaker 2:

Actually, I also heard that it's very good for creative writing, like creating novels, these types of things.

Speaker 1:

Okay, so there are some good GPT-4.5 use cases. Yeah. Okay, but these things are very hard to measure.

Speaker 2:

Yeah, it's very much going by feeling. It's vibe writing, like vibe coding. You know what that is, Alex?

Speaker 1:

Have you heard of vibe coding? No? It's actually something I've started seeing everywhere on Reddit. I mean, I actually also heard it on a podcast, but I don't know where it came up. From what I understand, it's the idea that you just talk to your ChatGPT and it just does stuff. You're not really coding, you're just kind of telling it. That's vibe coding. You actually gave a talk on this at a meetup last week. Oh yeah, that's true.

Speaker 1:

How did it go? Did you call it vibe coding?

Speaker 2:

No, well, the funny thing is that I called it "Prompting is all you need". Oh yeah, which I still think is a good title. But between me submitting the talk and actually giving the talk, the term vibe coding was coined by Andrej Karpathy.

Speaker 1:

Ah, he coined it, one of the founding team of OpenAI. Oh, okay.

Speaker 2:

And it basically means what I was going to present. You have a chat interface, whatever that chat interface is, take Cursor, take whatever, and you just say: allow everything. Because typically you can say, before you do anything, I need to approve, I need to verify the answer. Instead, you allow anything, and you just say, okay, build me this app with these specs, and you see what comes out. And then something doesn't work, or you want to add a feature.

Speaker 1:

You just prompt it, and you just lie back and chat with your interface. Yeah, it does everything for you. You say, I want this page, I want this UI, and it just kind of does it.

Speaker 2:

The challenging thing, I think, about where we are today in terms of the performance of these things, is that it really gets you like 80% of the way, if you're doing something standalone. But of course, that's already very good. Yeah, it's true. But I hear vibe coding everywhere now.

Speaker 1:

That's what so you could? You could just like do this, do that. She's gonna look, it's gonna be a developer next time.

Speaker 2:

Hoodie on. No, but seriously, if you just want to check it out, go to Lovable, go to Bolt, and just try it. It's super easy to create an app with a prompt.

Speaker 1:

This is something I saw on Reddit recently. Let's see when. Is this from 19 hours ago? Oh, 19 hours ago, so very recent. So this is about Cursor; Cursor is like the VS Code fork. It says: guys, I'm under attack.

Speaker 1:

"Ever since I started to share how I build my SaaS using Cursor, random things are happening: maxed usage on API keys, people bypassing subscriptions, creating random shit on the DB. As you know, I'm not technical, so this is taking me longer than usual to figure out. For now, I will stop sharing publicly on X. There are just some weird people out there." Interesting, yeah. And that's the thing.

Speaker 1:

So I think what I understood is: this guy was vibe coding, but he doesn't understand what's happening. Maybe there were API keys that were hard-coded, a lot of stuff like that, and now people are attacking, and he's like, what should I do, because I don't know how to fix this. So I think you're a good example: you can vibe code, but you can also read the code and say, this is good, this is not good, change this, change that. I think that's a good example of the mix you need between how technical you have to be and how much you can just vibe with it.
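For the hard-coded-key problem in the story above, the standard fix is to read secrets from the environment instead of the source file, so they never land in the repo that gets shared or shipped. A minimal sketch (the variable name `MY_SAAS_API_KEY` is made up for illustration):

```python
# Read a secret from the environment instead of hard-coding it in source.
import os

def get_api_key(var_name="MY_SAAS_API_KEY"):
    # Fail loudly at startup rather than shipping a key inside the code.
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} in the environment, not in code.")
    return key
```

The design choice is that a missing key is a hard error at startup, which is much easier to debug than a leaked key being abused in production.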

Speaker 2:

That is the challenge. I'm on a few Discords where, well, the new term is indeed vibe coding, and you see a lot of these questions coming up from people using a Gen AI-based coding tool, questions which basically show they just really don't understand what's being generated. Yeah, I saw that too. And that's a bit the challenge of what I was mentioning.

Speaker 2:

It's very easy to get you to 80%, but for the rest you need a bit of an understanding of the code. I think new supporting tools will ease that transition. Also the example that you shared here, someone that has security issues, a very open system. For example, Bolt and Lovable.

Speaker 2:

They integrate with Supabase, and Supabase already does some proactive notifications saying, okay, these tables are probably too open to the public, you need row-level security. So you get this proactive monitoring, this safeguarding: you're probably a shit developer, make sure to check this out, right? And I think we will see an evolution in these supporting tools as well. Yeah, for sure.

Speaker 1:

But I think there are also some automatic tools that can check if you have exposed API keys and all these things, right? That already helps.
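Those automatic checkers boil down to pattern matching over the source. Here's a toy version; real secret scanners such as gitleaks or truffleHog use far richer rule sets and entropy checks, and the two patterns below are illustrative only.

```python
# Toy secret scanner: flag source lines that look like hard-coded credentials.
import re

SECRET_PATTERNS = [
    # name = "long literal" where the name looks credential-ish
    re.compile(r"""(?i)(api[_-]?key|secret|token)\s*=\s*["'][^"']{8,}["']"""),
    # OpenAI-style key shape (sk- followed by a long alphanumeric run)
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
]

def scan_source(text):
    """Return the 1-based line numbers that match a secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits
```

As Bart notes right after, a tool like this only helps if you understand why a flagged line is a problem in the first place.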

Speaker 2:

That already assumes that you understand why that's an issue.

Speaker 1:

That's the thing. And maybe, I don't know, maybe Bolt and tools like that take care of all of that underneath; maybe they already do some of these things and stop it. But if you're just on Cursor, you just kind of go with it. But yeah. This also makes me think of this article, different topic, but I saw it recently. It's not super recent: "AI cracks superbug problem in two days that took scientists years". This is from February 20th.

Speaker 2:

There were scientists that spent... Oh, sorry. Thanks. He's trying to share it on the screen.

Speaker 1:

Yeah, I was trying, but technology, it's difficult sometimes. So, from the 20th of Feb: basically, there were scientists that spent decades trying to understand how superbugs are created, and they did a lot of research, a lot of things. And then they gave a Google model the same question, and after a while it came up with the same conclusion, basically. I don't know exactly the parameters; they don't go into a lot of detail, it's a short article.

Speaker 2:

Yeah.

Speaker 1:

But also, the paper wasn't published, so there was no way it could have actually gotten into Google's training data.

Speaker 1:

So they were really, really perplexed by this. They even reached out to Google, saying, ah, do you have access to my data somehow? And it's like, no, we don't. Also, the model came up with other theories as to why superbugs can happen, and now they're investigating those. So in the end, they talk about the place of AI in research, how this could change science and whatnot. But I still think it's interesting because you can get to a conclusion, but you still need the years of experience to know what is a valid hypothesis and what's just hallucination. Is this something worth exploring? Does it make sense with empirical data? And I don't know exactly what the parameters of the AI tool were; it probably couldn't do experiments in the lab, right? But again, it's another example, in a different field now, where at a glance you feel like AI is replacing people, but if you look a bit closer, it's not really.

Speaker 2:

Uh, it's really not I also read the article I saw passing by in my feet. This is very interesting. Like, uh, it's um. I was wondering, like, also the, the process that they used. I'm not sure if they published something on that, like, like, what was the? What was the? How did they configure this ai that cracked the super bug? Like?

Speaker 2:

Which models were used. But it's interesting to see, and to me this is also a very good example of using AI as an acceleration tool, right? Indeed.

Speaker 1:

I think the fear is that AI is going to replace people, but more realistically, it's going to be a super efficient productivity tool.

Speaker 2:

Yeah, it will create a bit of super agency for people, the people that can use it.

Speaker 1:

Exactly. And I think there's always a bit of the dream, but at the end of the day, if you take a few steps back, it's a tool that will make you more efficient. But I thought it was really interesting as well, also because it's something outside programming, and I think we hear a lot of this on the programming side. What else do we have? So: European tech industry coalition calls for radical action on digital sovereignty. What is this about?

Speaker 2:

Yes, this was an article published on TechCrunch two days ago. It's basically a coalition of a lot of European players, more or less in the tech space. A few big ones are mentioned there: Airbus, Element, OVHcloud, Murena, Nextcloud, Proton, names that a lot of people know. And it's a bit of a letter to the European Union, or the European Commission to be more precise, to President Ursula von der Leyen and the digital chief, where they call for strong action on digital sovereignty. I think this is very much a reaction to the whole geopolitical climate currently, where Europe wants to wean itself a bit off of Silicon Valley through different measures, a very important one being: start buying locally. I think it's interesting to see.

Speaker 2:

I think it's maybe also good to have a wake-up call there, right? And that's maybe a very concrete example, and it's also a good segue: instead of your local government using AWS cloud, maybe you need to go to something like OVH or Scaleway or another European provider. Right. Indeed. Even though, and that's the question mark, are they up to par today or not?

Speaker 1:

Yeah, the first thought I have is: we're discussing tech, but tech and geopolitics are very intertwined these days, right? A lot happened in the last months, indeed. And I'll take your segue here for Hopsworks. They're an MLOps platform, right, and they actually shared their story here: migrating from AWS to a European cloud and how it cut their costs by 62%.

Speaker 2:

And just so people know, Hopsworks, the MLOps platform, is a European company.

Speaker 1:

Yes. Well, I know the founder, Jim Dowling; I met him at a conference, and he works in Sweden. Okay. He's a professor there. Okay, interesting. But yeah, from what I understood here, I just glanced at this, they weren't using the specialized AWS services very much, so I think that's why it worked for them. But I do wonder how mature the European cloud space is. I do think it will become more mature as well; there's a lot of attention, and I've already read articles saying people are mobilizing to put more money behind it.

Speaker 2:

I think it's good. It's a big statement: we cut costs by 62%.
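For scale, a 62% cut just means the new bill is 38% of the old one; the dollar figures below are invented for illustration, not from the Hopsworks post.

```python
# Back-of-the-envelope check of what "cut costs by 62%" means.

def savings_pct(before, after):
    """Percentage saved when a monthly bill drops from `before` to `after`."""
    return round(100 * (before - after) / before, 1)

# e.g. a hypothetical $10,000/month AWS bill dropping to $3,800 elsewhere:
print(savings_pct(10_000, 3_800))  # -> 62.0
```

Which is also why the hosts' skepticism below is reasonable: the number depends as much on how wasteful the "before" bill was as on the destination cloud.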

Speaker 1:

Yeah, but the thing for me: you also hear from time to time people saying they're going to cut costs by moving back to on-prem, right? So... But to me, the good thing today is that, indeed, we're both in Europe.

Speaker 2:

I think it's good that the European economy is stimulated a bit by buying these services locally, and by having that movement, we will see improvements. I've used OVH in the past, Scaleway in the past, and I don't think you can argue that they're not good. It's a bit the other extreme: if you look at especially AWS, and Azure, and GCP to some extent, from the moment you're in that ecosystem, they have a solution for almost anything and you don't need to look around. OVH probably has stuff for 90%, and for the rest you need to go a bit more low level, maybe without some syntactic sugar, or you have some very, very niche thing that you need.

Speaker 2:

So you need to think a little bit more when you limit yourself to an offering that is less extensive, but probably quality-wise just as good. And what they're saying now, how we cut costs by 62%, to me that's a little bit like: you do a huge refactor.

Speaker 1:

Yeah.

Speaker 2:

And one of the objectives is to cut costs, so you're probably going to cut costs, right? We all know: you start an architecture, you start an ecosystem, and you cut some corners because it gets you running quickly, but it's probably not as cost-efficient as it can be.

Speaker 1:

Yeah.

Speaker 2:

And from the moment it's mature and you do a big refactor, it should be more efficient. Yeah, indeed. And there are probably some underlying resource costs that are different, but if you look at the big three versus, I don't know, OVH, compute cost is more or less comparable. There has also been a bit of a recent thing where Oracle is very much undercutting the market, but I think that's just to win market share. So there's not that much difference in raw resource cost. But from the moment you start using these more syntactic-sugar type of services on top of the bare compute resource, then it starts to add up, right? True.

Speaker 1:

But for example, I know for AWS, for a lot of the services they say you only pay for the infrastructure underneath, so in theory the syntactic sugar doesn't add cost. I mean, I'm not sure, maybe there are services and services, right. But I think it's good that they mention European cloud, and that they're promoting that there is a future there.

Speaker 1:

I'm also a bit skeptical of the 62%, because 62 is a very big number, right? And I agree: if you're migrating anyway and you say, okay, now let's use Kubernetes, let's really make everything scalable and not have anything idle, you're going to cut costs. So if they said "how we migrated from AWS to GCP and cut costs by 50%", that could just as well be true. But do you see it then? If we say tomorrow we're going to work with OVH, do you change the way you tackle the problem at all? I don't have a lot of experience with OVH, but from what I hear you saying, it's mature enough that it should be considered as an option.

Speaker 2:

I think if we ignore all preferences, right, OVH for a lot of companies, especially SMEs, is by far enough. But if you're a very big, large corporate environment that has very niche requirements on separation, on networking, on security, it might take you more of an effort to set things up, because it's not there out of the box.

Speaker 1:

Yeah, I see what you're saying. I also think it will be more out of the box in the near future, with the investments that people are doing. And if we actually get a European Commission that has the power to say to our local governments, let's buy this locally by default, we will see a big stimulation.

Speaker 2:

Whether or not that's going to be OVH, or Scaleway, or something that doesn't exist yet, question mark, right?

Speaker 1:

Question mark right, yeah, yeah. And talking about government owned I don't know if owns the right word government things that are created, big segue, what is?

Speaker 2:

Docs, Bart? Yeah, there's a big segue. Docs is actually a Notion alternative, and it comes from the French government, and it looks quite cool. It's very similar to Notion: it's team-based collaboration.

Speaker 1:

It's open source.

Speaker 2:

It's fully open source. You can download it from GitHub and host it yourself. From the readme it's clear that they're now using it in France and in Germany, and they are currently onboarding the Netherlands. How big that is, I don't know: is it a small department there, or is it the full government? I have no clue. But it's cool to see these very open initiatives, let's say, lower down in the stack, really end-user tooling. I can't really name another example where you see this, something really for an end user to use. And it looks very nice.

Speaker 2:

It looks very nice yeah.

Speaker 1:

It looks very polished, yeah. Cool initiative, right? Yeah, indeed. I'm probably projecting, but I'm also thinking of this whole geopolitical thing, you know, trying to have more stuff in-house, more open source. I think it's nice to see. I don't think it's bad to have more options, especially open-source ones, because hopefully we can see more extensions and all these things. Really, really cool. Have you tried it at all, or no?

Speaker 2:

No, I haven't tried it, but you can actually log in on a demo site, so it's easy for people to try if they go to the GitHub repo, okay.

Speaker 1:

Very cool, got to give it a try. And you, I know you have some pains with Notion, I hate formatting in Notion, but you also have pains with note-taking apps in general.

Speaker 2:

No, that as well. So maybe first my frustration with Notion. Okay, so we use Notion to prepare our items to discuss during Data Topics, and this is an itemized list, and from the moment I copy something like a title from a website in there, you lose the itemization, it ends up as a header, and there is no easy way to change it back. I just want to paste without the original formatting. How difficult can it be? Notion has been around for so long. Yeah, like, what the fuck.

Speaker 1:

One thing I also did: I was trying to copy-paste Markdown, and it wouldn't understand the Markdown, and I was getting super frustrated. It was just Markdown: copy, paste it. And I really wanted it to just come out nicely formatted, and it didn't.

Speaker 2:

Yeah so, yeah, I hate Notion. I will never use it as my note-taking app. Never. Wow, that's a big statement. No, it's in the past, a closed chapter. I gave it a chance as my personal note-taking app for like three months, years ago, but fuck that.

Speaker 1:

And also, staying in the note-taking app space:

Speaker 2:

you don't like Obsidian either? Oh wow, this is putting you a bit more on the hot seat.

Speaker 1:

Really on the hot seat, yeah, because Obsidian is like the love child of everybody in the tech sector, right? Yes, yes, yes. But maybe, do you know what Obsidian is, Alex? No? So Obsidian is a Markdown-based note-taking app, but you just have your files, right? The idea is that you just have your files, and Obsidian kind of reads the files and can create mind maps.

Speaker 2:

Yeah, a bit like Apple Notes, but for people that are a bit more in the tech space, right? Yeah, that want to do Markdown, that are a bit hipster. Yeah, they wear hoodies all the time. Yeah. And now you want to know why I hate it? Hate, that's a strong word, okay. Because I used it quite a while.

Speaker 2:

Yeah, I used it quite a while, okay. So I think the UI is ugly, let me start with that. I also want something nice, right? Okay, you want some eye candy. Yeah, I want some eye candy.

Speaker 2:

I want it to look sleek. I think it's ugly; there are themes, but still ugly. Then there is this whole notion of interconnected notes and this interconnected knowledge domain which, and that's really a me problem, just doesn't work for me. I'm not going to craft my notes in such a clean way that I add tags and then, okay, this tag, and then link it to that. That's not me. I can do that for four notes, and then the fifth note is just some scribbles.

Speaker 1:

It's just like this, that, yeah.

Speaker 2:

Check previous notes, yeah. And then what I've always kind of disliked is their whole "we're the free note-taking app" notion. And sure, they're free, they're open source.

Speaker 2:

But, fundamentally, I use my note-taking app on multiple devices: I also want access to my notes on my phone. And while there are workarounds, you can do this for free, it's a lot of effort. So basically the only way to make it usable, if you don't want to spend too much time on it, is to go to the paid plan. I see. So the whole premise is "we're a free note-taking app, we don't take ownership of your notes", but yeah, if you really want to fully use it, you have to pay for it. And I understand that, right? They

Speaker 2:

need to make money to run a business, but then don't lead with the whole "free note-taking app" story, right? That rubs me a bit the wrong way, I see. But I think what they did, and there were definitely a lot of examples before already, is very much popularize Markdown as the default for note-taking.

Speaker 1:

Yeah.

Speaker 2:

And that I very much like.

Speaker 1:

Yeah, no, but I agree.

Speaker 2:

So actually, I recently switched to Bear. Bear? Bear.

Speaker 1:

Maybe I'll just share it. It wasn't planned, but given the discussion. This is Obsidian, for people that don't know.

Speaker 2:

Yeah, Bear is, well, basically Apple-ecosystem only. What I like about Bear is that you can import and export Markdown notes, it supports Markdown. Well, actually, and this is an interesting rabbit hole, you can import and export TextBundles, which is this wrapper around Markdown that also includes things like images and attachments.

Speaker 2:

Okay, this is a very, very handy thing; this ecosystem didn't exist a few years ago. And it syncs on all my devices via iCloud. So I quote-unquote own everything: there's no third party where my notes are hosted, I see. And the interface is very sleek, wow, and it just works. It's very, yeah, for me.

Speaker 1:

So I use Raycast, that's the Spotlight replacement. They actually have sticky notes, so just a scratch pad, I guess, yeah. And from there it's very easy to export to Notes or other places. So I haven't downloaded Bear, necessarily; I just take notes on the sticky notes thing, the scratch pad, and then export to Notes, and if I need to find something I'll just search or whatever. The only thing I really don't like, and that's my only complaint, is Notes itself; it's horrible.

Speaker 2:

I did the same. I had three minimum requirements. Yeah: very good support for Markdown slash TextBundles, yes; it had to be affordable, like below four or five euros a month, okay; and I don't want my data going elsewhere. Fair. And from the moment you apply those, Notes is off the table, because Markdown there is another story. It's very hard. There are some workarounds with automations and stuff, but it's super hard.

Speaker 1:

To me it's just like what is Apple doing, just add Markdown support.

Speaker 2:

You know. But I think it's a very niche audience, don't you think?

Speaker 1:

I think so, yeah, yeah, maybe we are in the bubble. We are in the bubble, I think.

Speaker 2:

To me.

Speaker 1:

I was really like, we should start a petition, because if they had Markdown I would be really happy with it. That I agree. I would just be like: it's on my phone, automatically synced, it's iCloud.

Speaker 2:

Because otherwise it ticks more or less every box. Exactly.

Speaker 1:

Maybe one thing that yeah.

Speaker 2:

I'm going to do some advertisement.

Speaker 1:

Advertisement.

Speaker 2:

Yeah, go for it. If you use Bear and you need an MCP server, because we're going to go there, right?

Speaker 1:

Yes, that's what I was going to say. Do you know this? I'm going to say no, I don't think so. Okay. So: if you have Bear and you need an MCP server for it.

Speaker 2:

Basically meaning you can, via Claude or something, query your notes. You can go to bartsspace and there's a link to a small repo I made. Hold on, there's a small Bear MCP server. Is it bartsspace? Let me share it again; it's under etc., my Bear MCP server.

Speaker 1:

My Bear MCP server. Yeah, cool. So maybe already moving into: what is an MCP? Well, what is this?

Speaker 2:

What is an MCP server? What

Speaker 1:

are you doing here? You mean in general, what is an MCP server? Yeah, if you had to explain it to our audience: what did you do here?

Speaker 2:

I think MCP got very popular in the past week, yeah. So MCP stands for Model Context Protocol, and it's a way that standardizes, or brings some standards to, what an MCP client is, what an MCP server is, and the communication between those. And an MCP client

Speaker 2:

is basically, typically, and I'm going to generalize here a bit, because there are a lot of edge cases, something you can see a bit as a chat interface. Think, for example, of Claude: if you download the Claude Desktop application, it literally is an MCP client. If you use a coding IDE like Cursor, or Cline, or Continue, there's also an MCP client in there. Yes. And what an MCP client does, aside from the simple chatting directly to an LLM that it already does: from the moment it starts up, it sees which MCP servers are registered, it connects to those servers, and basically explores what tools these servers have available. Yeah, and here it's actually good that we have a concrete example. So here I created a Bear MCP server. Yes, something very local.

Speaker 2:

So when my Claude Desktop application boots up, I register this MCP server in a config file, and it will connect to it and ask this MCP server: what tools do you have available? And this one will give back: using me as an MCP server, you can retrieve a note, or you can search for notes, or you can search for tags. I have a bunch of tools available, yeah, this one.

Speaker 1:

So open notes. Yeah, exactly, search notes, get tags, open tag yeah, so this is the most typical example mcp servers can also host prompts.

Speaker 2:

So they can have prompts ready for you, they can return prompts, yeah. And the protocol to communicate between these things is also standardized: it's JSON-RPC, basically remote procedure calls in a JSON format. And then, from the moment this client, let's say Claude, is initialized, it knows: I have these tools available. And for every query it gets, it tries to determine: is this a query I should send to a tool, and get the response of the tool back, or is it just a regular chat? Yeah, and based on that it will either return a regular chat answer, or go to the tool and return the tool's response. And I think a lot of the time it asks for permission to call the tool, or no? Depends a bit on the client, yeah. Claude Desktop asks, to me, a bit too extensively, but you can just say: yes, go ahead. So, very cool.
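The JSON-RPC exchange described here can be sketched roughly like this. A simplified Python sketch: the message shapes follow the general form of MCP's `tools/list` request and response, and the example tool mirrors the Bear server's search tool; field details are illustrative, not copied from any real server.

```python
import json

# A client asking an MCP server which tools it offers
# (JSON-RPC 2.0, the wire format MCP standardizes; simplified).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server response advertising one tool, roughly like the Bear
# server's note search; the description is what the client uses
# to match a user query to a tool.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_notes",
                "description": "Full-text search over Bear notes",
                "inputSchema": {
                    "type": "object",
                    "properties": {"term": {"type": "string"}},
                    "required": ["term"],
                },
            }
        ]
    },
}

# Everything travels as JSON text between client and server.
wire = json.dumps(list_response)
assert json.loads(wire)["result"]["tools"][0]["name"] == "search_notes"
```

The client caches this tool list at startup, which is the "metadata" step described a bit further on.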

Speaker 1:

So the example we have here is a good one. If you go to Claude Desktop and you say, can you find all my notes about project management, it will understand that it's something about your notes. It says: well, I don't have this, this is not in my knowledge base, so I should probably go to your notes, which are in Bear. And it will see: what are the tools we have available? We have a search notes, yeah. So then it says: hey, can I search notes using this tool?

Speaker 2:

And you say yes, and then it actually gets the context there and gives you an answer. Exactly, very cool. And it knows to make this tool selection because, from the moment that the client starts up, it does this initial connection to the server and gets back this metadata from the server: these are the tools available and this is what you can use them for. Yeah, so it's really based on matching the description with the query of the user.

Speaker 1:

It's very cool. And also, for the server part, you can run it locally, right? You can have a local server running next to your client application. So this one is a good example.

Speaker 2:

This is very local. Yeah, that's what I was looking at here.

Speaker 1:

The configuration, for the people following on screen, is a JSON thing, and this one looks very local indeed: you run npx, so you run this locally, but you could also have something hosted somewhere else. True, right? And I also looked at MCP servers; I haven't built one, so I think it's really cool that you did. I did look at the code: in Python they use FastMCP or whatever. The way I understand it, and correct me if I'm wrong, is: we have LLMs, so we have just models, and then they started creating these tools that you can add, like on the OpenAI API, these are the available tools. But that you have to do when you're coding it up. Whereas with, for example, Anthropic's desktop app, they already have the model, and you cannot go there and change the code for the available tools.

Speaker 1:

So then you have this Model Context Protocol, which really sounds like a plug-in system, right? You can say: okay, you didn't code this up, but these are the tools that are going to be available to you, and these are separate servers, locally or somewhere else. True. So it's a way you can basically add whatever tool you want to the clients. Maybe also to show here: it was Anthropic that created this, but they created it very much in a "let's create an open standard" kind of way, right? So there is another page here.
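The plug-in idea can be sketched as a toy tool registry. This is not the real MCP SDK or FastMCP API, just the shape of it: tools register themselves with a description, the client lists them, and the LLM-chosen tool is called by name; the note data is made up for illustration.

```python
# Toy tool registry illustrating the plug-in idea: the model never
# sees the code, only the advertised names and descriptions, and the
# client invokes whichever tool the model picks, by name.
TOOLS = {}

def tool(description):
    """Register a function as a callable tool (decorator, FastMCP-style)."""
    def wrap(fn):
        TOOLS[fn.__name__] = {"fn": fn, "description": description}
        return fn
    return wrap

@tool("Search notes by a term")
def search_notes(term: str) -> list[str]:
    # Stand-in for a real notes backend.
    notes = ["project management tips", "groceries", "management 1:1 notes"]
    return [n for n in notes if term in n]

def list_tools():
    """What the client sees on startup: names plus descriptions."""
    return {name: meta["description"] for name, meta in TOOLS.items()}

def call_tool(name, **kwargs):
    """What the client invokes once the LLM picks a tool."""
    return TOOLS[name]["fn"](**kwargs)

assert "search_notes" in list_tools()
assert call_tool("search_notes", term="management") == [
    "project management tips", "management 1:1 notes"]
```

The point of the decorator is exactly the "you didn't code this up" property: a new server can add tools without the client changing at all.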

Speaker 1:

This is from modelcontextprotocol.io, and they're showing some clients. So, as you mentioned, there are resources, prompts and tools, plus sampling and roots, though for sampling and roots there's nothing there yet. Some clients offer some things, but what almost every client offers is the tool calling. Very, very cool. And maybe, do you want to share more about how the experience of building this MCP server was? Was it straightforward?

Speaker 2:

This was very easy, yeah. Actually, this is 95% built using GenAI, vibe coding, more or less, yeah, exactly. But it does become, because I did it via Cline, it does become sometimes a bit confusing, because you're giving instructions on building an MCP server and, at the same time, my Cline had access to that MCP server.

Speaker 1:

Ah, so it was like you actually tested it okay.

Speaker 2:

So it was sometimes a bit confusing, but yeah. That's really cool, that's really cool.

Speaker 2:

So yeah, for people interested: there was a bit of space here, because Bear is very much a native OS application, in the sense that there are minimal integrations possible. They do have x-callback-url, most Apple applications allow interacting with other Apple applications via that protocol, but to call back to something like Claude, it had to open up basically a callback URL, which opens a browser.

Speaker 2:

So the Bear MCP servers that existed were, to me, a very grating user experience: you query something and then you see five browser windows popping up to feed back that information. So here I'm actually connecting directly to the SQLite database over there. Oh, okay. And that's also the reason why it's limited to read-only, actually. That's very cool. But, oh, thanks for the star.
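Connecting read-only to a SQLite file is something Python's standard library supports directly via a URI. A minimal sketch: the throwaway database here just stands in for Bear's actual notes store, whose real path and schema aren't shown in the episode.

```python
import os
import sqlite3
import tempfile

def open_read_only(db_path: str) -> sqlite3.Connection:
    """Open a SQLite file read-only via a URI; any write attempt
    raises sqlite3.OperationalError, so the server can never touch
    the notes database."""
    return sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)

# Demo with a throwaway database standing in for Bear's notes store.
path = os.path.join(tempfile.mkdtemp(), "notes.sqlite")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE notes (title TEXT)")
rw.execute("INSERT INTO notes VALUES ('project management')")
rw.commit()
rw.close()

ro = open_read_only(path)
titles = [t for (t,) in ro.execute("SELECT title FROM notes")]
assert titles == ["project management"]
try:
    ro.execute("INSERT INTO notes VALUES ('oops')")
    raise AssertionError("read-only connection allowed a write")
except sqlite3.OperationalError:
    pass  # expected: attempt to write a readonly database
```

Opening the file in `mode=ro` enforces the read-only guarantee at the database layer, rather than trusting the server code to behave.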

Speaker 1:

Yeah, yeah, yeah, so there's two stars. Yeah, actually, I'm the second one.

Speaker 2:

It's probably mine, right, let's see.

Speaker 1:

Yeah, I did. But really cool, and I think very helpful as well. And maybe also, what's the name? Smithery, yeah, smithery.ai. The other cool thing is that, because this is a standard, people can actually create their servers and you can just kind of plug them into your model, right? So Smithery is a hub, I guess, of these MCP servers. Really, really cool.

Speaker 2:

So yeah, there's stuff for Neon; they have stuff for almost anything. And I think smithery.ai, if you want to get started with MCP servers, is a very good starting point, because there is a lot of quote-unquote syntactic sugar to get your clients configured. I wouldn't use it myself, though, because of that syntactic sugar: you should look into it a little bit more, because there's typically quite some sensitive data involved. You want to see a little bit:

Speaker 2:

what am I connecting to? Is it local, is it remote? All that information you can find on Smithery, but it takes a bit away if it abstracts too much for me, not being too transparent on what it is. But it's a very good way to get started with MCP servers, indeed.

Speaker 1:

So a lot of these things run on servers, like actual servers, but you can run it locally, and locally you can guarantee the privacy of your data, right? And we talked about Claude Desktop, but I used it with Cursor, for example.

Speaker 2:

Maybe just something very concrete: I use this in Cline as well. Well, not Smithery, but MCP servers directly. Two that I found very useful: one is Puppeteer.

Speaker 2:

Puppeteer is just a web browser, right? But from the moment that I use a very new library, I can just say: make sure to use the most recent docs on this URL, to really have a view on the most recent documentation. Yeah, so that's really helpful. I've also, so the thing I'm now recently making is something with a Postgres backend, I have a Postgres read-only MCP server that connects to my database, and I have Cline also writing migrations for me, so database schema changes. And whenever I propose, I need this and this change, I say: make sure to first inspect the schema that is already out there, to make it as precise as possible. So it already reads the structure of my Postgres database.
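The read-only property of such a Postgres server can be sketched as a crude allow-list check. This is purely illustrative, not any specific MCP server's API; a real setup would also rely on a read-only database role rather than string inspection alone.

```python
def is_read_only(sql: str) -> bool:
    """Crude guard: only allow statements that start with SELECT or
    EXPLAIN and contain no obvious write keywords. String checks like
    this are a belt-and-braces layer, not a substitute for a read-only
    database role."""
    stripped = sql.strip()
    if not stripped:
        return False
    first = stripped.split(None, 1)[0].upper()
    banned = {"INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "TRUNCATE"}
    words = set(sql.upper().split())
    return first in {"SELECT", "EXPLAIN"} and not (words & banned)

assert is_read_only("SELECT * FROM users")
assert not is_read_only("DROP TABLE users")
assert not is_read_only("SELECT 1; DELETE FROM users")
```

The same pattern applies to schema inspection: queries against `information_schema` are plain SELECTs, so a guard like this still lets the assistant read table structure before proposing a migration.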

Speaker 1:

It makes it much more robust, much more precise, indeed. No, but I agree: it won't guess what schemas you have or how many tables you have; it can actually read them and say, this actually needs to go there. So it's really cool, and it feels very seamless, at least in Cursor. Actually, in Cursor you can set a YOLO feature and it just goes through.

Speaker 2:

I think in Cline it's the same.

Speaker 1:

I think in Claude it feels a bit more, yeah, because you need to very explicitly say yes. Well, maybe it's also more user-oriented, it's not just for developers. But it's very cool. One thing, maybe also linking to the thing I had: OpenAI doesn't have this.

Speaker 1:

They don't support this on the chat, right? So if you wanted to integrate this with OpenAI models, like, you're developing something yourself, you want to use GPT-4o and you want to have this: I found a blog from the Azure AI services. They actually use Chainlit which, you know, is very much like Streamlit, but just chat.

Speaker 2:

Yeah, I think I saw a demo of it.

Speaker 1:

Yeah, indeed. So they have a lot of decorators, and you can actually use these things. But in the end, and I think this is also interesting, because they kind of show how you would build an MCP client from scratch, here they say: on the connection to the MCP server you're going to get the tools, and you need to add these programmatically to your OpenAI client, because the tools parameter expects a certain JSON, and you need to add these things there in the way it expects. So it needs to be flattened and so on. Another interesting read for people that are curious about this. What else? While we're on the LLM topic, maybe OpenRouter?
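That flattening step, an MCP tool description in, an OpenAI `tools` entry out, roughly looks like this. The MCP-side field names follow its `tools/list` result; the output follows OpenAI's documented function-calling `tools` shape; the example tool itself is made up.

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Convert one MCP tools/list entry into the entry shape OpenAI's
    chat API expects in its `tools` parameter. This is the manual
    bridging a from-scratch MCP client has to do."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP's inputSchema is already JSON Schema, which is what
            # OpenAI's `parameters` field expects.
            "parameters": tool.get("inputSchema", {"type": "object"}),
        },
    }

mcp_tool = {
    "name": "search_notes",
    "description": "Search notes by term",
    "inputSchema": {"type": "object",
                    "properties": {"term": {"type": "string"}}},
}
flat = mcp_tool_to_openai(mcp_tool)
assert flat["type"] == "function"
assert flat["function"]["name"] == "search_notes"
```

Because both sides use JSON Schema for arguments, the conversion is mostly renaming and nesting, which is why the blog can show it in a few lines.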

Speaker 2:

OpenRouter, let's do it, let's see. So openrouter.ai is, for all my, let's say, personal coding stuff, I've defaulted to OpenRouter. OpenRouter is a router that gives you basically a single endpoint, and on that endpoint you can say: I want to talk to this version of an LLM. So it becomes super easy to switch Claude 3.7 for 3.5, or GPT-4o, or Llama, or whatever. It's just a router to models hosted wherever, and even to specific instances or specific versions of models. Say you take Claude 3.7: they claim they have better uptime than the original provider, because when they route to Sonnet 3.7, they will route to Anthropic, but they will also route to Anthropic on Bedrock, and they will also route to

Speaker 2:

Anthropic on whatever. And I'm not sure what they base it on; I think it's on latency, on uptime, but maybe also a bit of round robin. Yeah, but that also means that if Anthropic goes down, which has actually happened fairly recently, not down-down, but a few minutes,

Speaker 2:

you won't notice: it will just use one of the other providers. Okay, so you have a much more seamless experience. And it also bypasses existing tiers, because they are a very large user of these things. Anthropic, for example, has tier one, two, three, four, which defines how often you can run new queries and things like that. But they bypass that, because they're a huge user of Anthropic.

Speaker 1:

I see. But then how does it work in practice? Because you said you use this for coding, that's your default.

Speaker 2:

I use this for coding. So, for example, in Cline or in Cursor or wherever: normally you have your OpenAI or your Anthropic token, but instead of that you use your OpenRouter token. Ah, okay. And OpenRouter is, I think, well supported in more or less all of these clients, and you just go via OpenRouter.
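Because OpenRouter exposes an OpenAI-compatible chat API behind one endpoint, switching models really is just a string change. A sketch building request payloads, no network calls here; the model slugs follow OpenRouter's provider/model naming convention:

```python
def chat_payload(model: str, prompt: str) -> dict:
    """Build the request body for an OpenAI-compatible chat endpoint.
    The only thing that changes when you swap models is the slug."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

# The same payload shape works for any model behind the router.
claude = chat_payload("anthropic/claude-3.7-sonnet", "Refactor this function")
cheap = chat_payload("openai/gpt-4o-mini", "Summarise this diff")
assert claude["model"].startswith("anthropic/")
assert cheap["messages"][0]["role"] == "user"
```

With the official `openai` Python package you would point `base_url` at `https://openrouter.ai/api/v1` and pass your OpenRouter key instead of an OpenAI one, which is exactly the "swap the token" setup described above.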

Speaker 1:

Okay, I see.

Speaker 2:

And it also doesn't force you to say: okay, I've used this for a while, now I want to try that out, okay, but then you need to put credits into yet another account. That's interesting.

Speaker 1:

But so you have to put credits there?

Speaker 2:

So it's a proactive thing: you put credits there, yeah. And I'm actually now also using it for a small application where there is actual tool usage, and that also allows me, in a very unified way, without adding another provider, to say: for this specific step in my LLM flow, use this less performant but very cheap model, because it's good enough.

Speaker 1:

And for this step.

Speaker 2:

Let's use that. And you can always do that, but normally it's a bit more convoluted, because maybe it's another provider with a different endpoint, different authentication, whatever. So OpenRouter eases a lot of this.

Speaker 1:

Do you know the pricing for these things? Oh no, you actually put credits.

Speaker 2:

There's no separate pricing. You just add credits, and it's more or less the same pricing as you would pay the traditional providers. Maybe a fraction more, but not noticeably more.

Speaker 1:

I mean, I'm just thinking of your AI setup, your AI subscriptions, because this would meet all the developer needs.

Speaker 2:

Let's say this would meet all the developer needs. And I'm ignoring a lot on enterprise usage, what about data privacy,

Speaker 1:

who is this?

Speaker 2:

organization. I'm ignoring all this yeah.

Speaker 1:

Yeah, I'm now talking about personal projects. Yeah, I see. And it's a prepaid kind of thing? So if you run out of credits, you stop?

Speaker 2:

It's prepaid, but you can do auto top ups okay, okay, okay, cool.

Speaker 1:

In Cursor, actually, they have a subscription, and you can also change models, but you change it manually. So I don't know how OpenRouter works there, but you have a dropdown of models and you can say: now I want to use this, now I want to use that.

Speaker 2:

Actually it's the same. But with Cursor you have a subscription, yeah. What does that mean for which models you can use?

Speaker 1:

People basically request models to become available. They actually call some of them premium models, so Claude, 4o maybe, I'm not sure. There are a few models that are more expensive, and you have, I think, 5,000 premium requests per month, but I never passed that. And even if you pass it, it's not that you're cut off; you just enter a queue, so they give priority to the people that still have requests left. And you can always request more.

Speaker 2:

I'm not sure how that translates to my workflow.

Speaker 1:

So far I haven't hit the limit. They also have the autocompletes, and they always have, like, 4o-mini or o3-mini, which is as much as you want, okay.

Speaker 2:

I would have to look into how many tokens that adds up to. Did I say 5,000? I'd like to check how many tokens that adds up to, because they are probably also expressed in tokens, I would assume. But the nice thing about OpenRouter is that I now actually have statistics on which tool used how many tokens.

Speaker 2:

So I can actually be a bit more conscious about it. I blew 100 euros on tokens this weekend. Ah, really? Yeah. Okay, how come?

Speaker 1:

Wait, I like how your show notes say "Claude 3.7" and "blowing 100 euros in two days".

Speaker 2:

We lost the camera for a second. There, it's back. So yeah, I was saying I blew 100 euros on tokens. What were

Speaker 2:

you doing? So, I was coding, vibe coding, yeah, I was very much vibe coding. I still think it's worth it. And why did I blow that much? By far the biggest reason is that the pricing of Claude 3.7 is super expensive. And I don't even use extended thinking very much, that's more or less the reasoning model, which is even way more expensive. The reasoning model I sometimes use in Cline, you call it plan mode there, or architect mode elsewhere: like, make a plan.

Speaker 2:

And I always use 3.7 without the thinking to execute, yeah, and then maybe planning is the one exception. But I'm working on a fairly large code base, and while I now, with OpenRouter, sometimes try: okay, let's try Gemini, it seems good enough, or let's try DeepSeek, let's try whatever, from the moment it's complex enough, you always go back to Sonnet. Yeah, and I'm not the only one: even though it's priced way higher than the others, at least in the coding and programming domain, it clearly outperforms everyone else.

Speaker 2:

Yeah, indeed. And when you look at the leaderboards there are some exceptions, like: okay, it's even better if you use DeepSeek R1 for the planning, but for execution, please use 3.7. You get these combinations, but in the end you're still using Sonnet. And it's crazy that the gap is that big; it's been a while now that they're so much better, right? Yeah, I feel like all the kerfuffle about o1, o3, DeepSeek R1 beating everyone.

Speaker 1:

Yeah, even with 3.5 — because 3.7 is more recent, but even with 3.5 it was the same. Yeah, indeed.

Speaker 2:

It was like, whoa, this is cool — but then people went back to Claude, you know, to just get stuff done. And 3.7 gets a bit of a bad rap with extended thinking, and I think there's a fair amount to say there: extended thinking very much leads to overcomplicating things.

Speaker 1:

Like, what the fuck are you doing?

Speaker 2:

That's like normal, right? Yeah. But cool — and a bit linked to that, maybe the last point in the LLM domain, is Not Diamond. Yes, what is

Speaker 2:

this? So Not Diamond — I haven't used it, but I found it because it's actually an option in OpenRouter for model selection. So notdiamond.ai is a model that you can basically talk to as you would talk to an LLM provider, but it will receive your query and, from what I understand, it will try to find the best model for this specific query — to improve performance, reduce cost, or a combination of both.
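For what it's worth, OpenRouter exposes an OpenAI-style chat-completions API, so routing a query through a meta-model like Not Diamond would just be a normal request with a different model slug. A minimal Python sketch — the endpoint is OpenRouter's documented one, but the `notdiamond/notdiamond` slug and the `ask` helper are assumptions for illustration; check OpenRouter's model list for the real identifier:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str) -> dict:
    # OpenAI-style chat payload; OpenRouter accepts the same schema
    # and forwards it to whichever provider serves the model slug.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str, model: str = "notdiamond/notdiamond") -> str:
    # NOTE: the default slug above is a placeholder -- look up the exact
    # Not Diamond identifier in OpenRouter's model list before using it.
    payload = build_request(prompt, model)
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Keeping the pure `build_request` helper separate from the network call means you can swap slugs (say, a Claude or DeepSeek model) without touching the HTTP plumbing.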

Speaker 1:

Hmm, that's also what OpenAI was going to do. No? Well, yeah, but then it's within the OpenAI ecosystem.

Speaker 2:

Here it's a very open ecosystem, integrated with OpenRouter, which basically gives you access to everything. I see. That's an interesting thing, but it also gives me the feeling that every layer is an LLM doing something. Yeah, it's like LLMs all the way down.

Speaker 1:

Yeah, but it's true. I mean, I do feel like that sometimes, especially when people are like, oh yeah, and then, to test this, we're going to have another LLM looking at the code to see if there's any hallucination.

Speaker 1:

We use another LLM — and it's like, if an LLM was the problem, I'm not sure another LLM is the solution. But I don't know, I feel like that's kind of where people are at today, maybe. I want to ask one open question before we go to the closing segment. When you interact with ChatGPT, with Claude, how do you — what's the — how can I say it without giving it away — how do you ask the questions? Do you just say, do this, do that?

Speaker 2:

Yeah, I would say it like that: do this, do that. Do you say please?

Speaker 1:

No. How about you, Alex? Yeah, I do say please and thank you. Oh wow, oh really? That's interesting. Maybe — why do you say please and thank you, Alex? I have no idea.

Speaker 1:

I just feel like — sometimes it depends. If I really need something fast, I'll just do it without saying it. But you know, my people — like my wife, when she first got Copilot, she would say please and thank you. And the first few messages were like: hi, how are you? I'm good, how are you. Do you have time? Can I ask you a question? Sure, you know. But why I'm asking — I just thought it would be fun, something that flew by. Let me share this instead. They did a survey, I guess: "Are you polite to ChatGPT? Here's how you rank among AI chatbot users." And actually, most people are nice, but only some do it out of fear. I actually looked at the numbers — let me see if there was an infographic or something here.

Speaker 2:

How fearful are you, alex? It's definitely not out of fear, I think it's just habit.

Speaker 1:

Yeah — "when interacting with AI text assistants like ChatGPT or smart speakers, are you polite?" 55% said yes, it's just a nice thing to do. 12% said yes — when the robot uprising happens, I don't want to be first. And then the other ones said no: 20% said no, it wastes my time, and 13% said no, it's a machine, why would I be polite?

Speaker 2:

But yeah, I think there's a difference between being polite and being disrespectful as well. Yeah, that's true, that's true. But I think I'm a bit neutral there.

Speaker 1:

But I feel like for you it's more — why would you? I think you do it more out of efficiency, you know, like it's a Google search. Yeah, exactly. Yeah, I think that's a good way to express it: do this or that. The only argument I would think of — I think sometimes I sneak in a please, or a "could you?"

Speaker 2:

Yeah, there are some resources that say that improves performance, right? Indeed.

Speaker 1:

So that's the thing. Because they said it's all training data, and apparently, in conversations, people are more helpful if you're polite to them, so the model also learns this behavior. So if you're very rude to the model — because it's seen in the training data that people reacted less helpfully to rudeness — then the model maybe is not as helpful either. But let me see if I can find this.

Speaker 1:

So I think that's the only reason why it makes sense. But also, in the article they say it's refreshing to see — and I think it's a bit of a stretch — but they say if people normalize being impolite to these things, then maybe that bleeds into social relationships. So to see that people still feel they should be polite, because it's the right thing to do regardless of whether it's AI or not — they took that as a nice thing. But then, on the other hand, would you be polite to Google Search? Yeah, yeah. So it's a bit — I don't know. On the opposite end: would you get mad at the AI? I don't think I would. I don't think I would.

Speaker 2:

I've done it a few times. Why? I have too. Oh, really? But like — after a major edit to the code base, when nothing was solved, I'd say: fuck this, just solve it already. But it never really adds value. It's better to look at what went wrong and give a precise hint — fix this specific thing, because that's what's going on. But it wasn't that it wasn't helpful...

Speaker 1:

He was just asking for something he didn't do.

Speaker 2:

Well, yeah, that's not being helpful.

Speaker 1:

What was your view, Alex? Was ChatGPT a bit rude to you?

Speaker 2:

Yeah, it just wasn't getting what I was trying to explain, and I just put it in all caps.

Speaker 1:

Oh really? Wow — all caps, huh.

Speaker 1:

That reminds me of a video I saw. There was a girl saying, hey, generate this icon for me. But the way she talks to ChatGPT is like: girl, be critical of what you did. And then ChatGPT starts to mimic her — like, girl, my bad, I swear to God I'm going to do better. And then it generates another image that is just as bad, and she's scrolling through, and it's like: girl, my bad, I swear to God I'm going to let you down again. And they keep going on like this, you know. So it's a bit of a rant.

Speaker 1:

But one thing I also thought — and I wasn't going to show this, but since you talked about getting mad at AI — someone had an interesting exchange with Cursor. I think this is it, yeah: "Cursor AI tells user: I cannot generate code for you, as that would be completing your work." Oh, I saw this. Yeah, I saw this on Reddit as well. So basically, on the forum — they actually have an 800-lines-of-code limit, so they also talk about that — he mentioned the model kind of refused to generate code for him, because it said it would be completing his work and would take away an opportunity to learn, and all these things. I thought it was funny. I never got to that level — I think that would tick me off a bit. Yeah, I'm wondering: is this real?

Speaker 1:

Or is it a bit staged, right? Maybe, maybe. But I also think that in Cursor you have a .cursorrules file, so you can prompt the LLM to behave a certain way. So maybe he put something like: I'm a student, let's learn.
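To make the point concrete: `.cursorrules` is just a plain-text file in the project root whose contents get prepended to Cursor's prompts, so a rule along these lines — wording invented purely for illustration — could plausibly produce that kind of refusal:

```text
# .cursorrules (hypothetical example)
I am a student learning to program.
Do not write complete solutions for me; explain the approach
and let me implement it myself so I can learn.
```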

Speaker 2:

Maybe that's a good point.

Speaker 1:

Yeah, maybe — that's a good point. That reminds me of one .cursorrules people posted. I think it was when something in ChatGPT came out, and they asked how many R's there are in strawberry, and the answer was: there are three, you stupid bitch. But I think they prompted it to finish every sentence with "you stupid bitch". I'll add that to my Cline. Indeed, indeed. But maybe one more thing — as I was talking, you mentioned Cline; there's also Continue.dev.

Speaker 1:

You also mentioned OpenRouter — and Aider, can't forget that one either. And I also feel like — because I'm using Cursor now and I'm still happy with it — I'm also wondering if you could recreate the same experience with different components. Like, you can pick Cline, you can pick OpenRouter, and you can kind of have the same experience in the end. Because with Cursor, what I liked was, one, the tab completion — it wasn't just an autocomplete, you could actually edit stuff in the middle of the sentence. And the other thing I like is that I had access to a lot of models.

Speaker 2:

But I feel like, from what you're saying, you can do the same things with — yeah, OpenRouter. I also wonder how integrated the client is — Cline, Cursor, whatever. I think Aider has it to some extent, but in some tools you need to be very explicit: I want to build this feature, but you need to select these and these files to include in the query. So you need to think about it much more, whereas Cline is going to inspect the code and attach the right files automagically. Yeah, but also — now that I'm saying it, maybe that's the vibe coding. It's weird.

Speaker 1:

It's weird, huh. 2025, vibe coding. Alright, I think we have a lot more topics, but we'll leave it at this. Yes, because we have an announcement. Yeah. Do we have a drum roll?

Speaker 2:

We don't have a drum roll, do we? No, okay, no drum roll. Do we have something relevant?

Speaker 1:

Do we have something relevant? Like, do your magic. No? Okay — we imagine Alex is going to add in a drum roll. Sure. Yeah — wow, wow.

Speaker 2:

Oh, there it is, guys. Yeah, wow, that was a nice one, Alex. Thanks, Alex. Cool.

Speaker 1:

Um, so we have a podcast update. I don't know if you want to do the honors. No? Maybe I'll pass the ball, then. So, Data Topics will still live on, but it will live on under a different format.

Speaker 2:

Yes.

Speaker 1:

It will maybe not be weekly, and it'll have a bit more variety of angles — something more business-oriented, maybe. There will be more topics regarding a project, or it'll be more topic-oriented, let's say — less news-driven. Um, why is that, Bart? Maybe, why

Speaker 2:

is Data Topics? Data Topics is a company podcast. It's part of Data Roots as a company, even though we don't talk about it a lot. Right, I don't think we let it influence a lot of our banter on data and AI, but so that means that Marilo and myself are colleagues at Data Roots.

Speaker 1:

And friends, and friends. Yeah, that's okay. Whatever, it's fine and friends.

Speaker 2:

I'm just explaining how this came about and I will soon leave Data Roots. Yeah.

Speaker 1:

It comes down to yeah.

Speaker 2:

Comes down to, yeah. So I, together with Jonas, founded Data Roots — I want to say a good eight years ago now — and we will start a new chapter going forward, which brings a lot of changes. The handover has been going on for a while now, and I will no longer be part of the Data Topics podcast, basically. Yes. Well, I think it goes without saying: we'll definitely miss you.

Speaker 1:

We'll definitely miss Jonas. Thank you. We'll still stay in touch, because, as I emphasized, we're still friends.

Speaker 2:

I hope we're friends.

Speaker 1:

The day after he leaves, he blocks me everywhere, you know. Yeah, he's like, hey, nice to meet you.

Speaker 2:

Will you see it when I throw you out of WhatsApp groups?

Speaker 1:

Yeah, but I think you'll definitely be missed. In any case, rather than be sad that it's over, I think it's better to be happy that it all happened. I think we'll still have a big party coming up. But does that mean we're not going to see you anymore — we're not going to see us in this format, Bart?

Speaker 2:

Well, maybe we first need to discuss what the new Data Topics format is going to be, right? Because it will keep existing. Yes — there are some preliminary ideas, not final yet, but you have some ideas, right?

Speaker 1:

Yes, so I think.

Speaker 1:

So enlighten us. The idea currently is bi-weekly, alternating between, on the one hand, something surrounding a topic — so something more of a deep dive, let's say, with guests from Data Roots or maybe even externally, who knows. And on the other hand, something a bit more — I don't want to say business, because I don't know if that sends the right message — but something more about how this is applied in the industry, how it affects people. Something closer to the audience, let's say: how can these things be applied out there? So we'll see. That's the idea.

Speaker 2:

I'll subscribe and listen. Yes, leave a comment. And I'll give stars. But you and me, being friends — we do have some plans, right? Yes, we do. We do want to give this weekly banter on what's new in AI and data a new place in podcast land. Yes, right, that is true. We're not exactly sure when it will start, and we're not exactly sure what the name will be.

Speaker 1:

Yes — I mean, no, we're not sure, but it will happen. And personally, I quite enjoy doing this; discussing what's new is something we would do organically anyway. Well, that's a bit the origin story of the podcast anyway, right? Exactly — we were discussing these things anyway, and then we had the idea: let's maybe just hit record.

Speaker 1:

Let's do it — and we did. So, more news to follow. Where can people stay posted?

Speaker 2:

It's a good question, because I would normally say the Data Topics podcast — but maybe we can do an update there, right, from the moment there is something new. But maybe the easiest is to follow us on LinkedIn. And I will definitely update my personal website (bartsspace) when it's there. Yeah, we'll try to make a bit of noise.

Speaker 1:

Indeed, for sure, for sure. It's been a lot of fun, it's been — oh yeah, maybe a call to the public at large:

Speaker 2:

If anyone knows of a good, cool podcast studio in the greater Leuven area, hit us up.

Speaker 1:

Hit us up, Indeed, and we'll see.

Speaker 2:

So with that — a big thank you to all the listeners of the last... how many episodes do we have?

Speaker 1:

We have a lot of episodes, I think, so Let me check.

Speaker 2:

I think the last one was 82. Wow. Or 81 — yeah, the last episode was 81, AI Code Assistants. Wow, that's crazy, with the listening numbers — a lot of people. So a lot of episodes; very grateful to our listeners. And very grateful for the interactions we had, for the guests. Yes. And very grateful to you all as well.

Speaker 1:

Grateful to Alex as well, for joining us and for the art piece that Alex made.

Speaker 2:

I think that will not be forgotten. Will not be forgotten.

Speaker 1:

It will still be part of the Data Topics podcast, right? Yes, yes, yes. But we need to order a new one.

Speaker 2:

Yeah, we need to commission a new one, but we can't spill the name just yet, right?

Speaker 1:

Yeah, exactly, exactly. But alright — oh yeah, and it's not GenAI. No, it's not. This is actual artistry, actual skill. Yes. Alright, and with that, one last time: thank you all.

Speaker 2:

Bye. You have taste in a way that's meaningful to software people. Hello, I'm Bill

Speaker 1:

Gates.

Speaker 2:

I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here, rust.

Speaker 1:

This almost makes me happy that I didn't become a supermodel.

Speaker 2:

Cooper and Netties Boy. I'm sorry guys, I don't know what's going on.

Speaker 1:

Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.

Speaker 2:

Rust. Data Topics. Welcome to the Data — welcome to the Data Topics podcast.
