DataTopics Unplugged: All Things Data, AI & Tech

#58 Maximizing Productivity: Bookmarklets, Q Command-Line, RouteLLM, and DuckDB Extensions

DataTopics

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.

Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!

Bookmarklet Maker: Discover how to automate tasks with the Bookmarklet Maker, a tool for turning scripts into handy browser bookmarks.

RouteLLM Framework: Explore the RouteLLM framework by LMSys and Anyscale, designed to optimize the cost-performance ratio of LLM routers. Learn more about this collaboration at LMSys and Anyscale.

Q for SQL on CSV/TSV: Meet Q, a command-line tool that lets you run SQL queries directly on CSV or TSV files, simplifying data exploration from your terminal.

DuckDB Community Extensions: Check out the latest updates in DuckDB's community extensions and see how this database system is evolving.

Apple Intelligence and AI Maximalism: Explore Apple's AI strategy, their avoidance of chat UIs, risk management with OpenAI, and the shift of compute costs to users.

Being Glue: Delve into the challenges of being "Glue" at work. Explore why women are more likely to take on non-promotable work and how this affects career progression and workplace dynamics.

Speaker 1:

You have taste in a way that's meaningful to software people. Hello, I'm Bill Gates. I would.

Speaker 3:

I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust. Rust.

Speaker 1:

This almost makes me happy that I didn't become a supermodel.

Speaker 3:

Kubernetes.

Speaker 1:

Well, I'm sorry guys, I don't know what's going on.

Speaker 2:

Thank you for the opportunity to speak to you today about large neural networks.

Speaker 1:

It's really an honor to be here. Rust Rust Data Topics. Welcome to the Data Topics. Welcome to the Data Topics Podcast.

Speaker 2:

Hello and welcome to Data Topics Unplugged, your casual corner of the web where we discuss what's new in data every week, from SQL to Glue, anything goes. We're not live streaming today, but we'll make sure to have the video on YouTube. Actually, can we put stuff on LinkedIn? Probably not, right? Only if we fake a live stream, right? We can just put a video on. Can you just do it there? We

Speaker 3:

can do that, or we put the links to the video on LinkedIn. Let's do that. We'll do that.

Speaker 2:

We'll do that, so you don't miss out. Feel free to check us out there if you want to follow us on the videos and stuff, but we're not live streaming today. I guess we're already going into a bit of a summer mode, in a way. We'll touch more on that later.

Speaker 2:

If you leave a comment or question, we'll try to get back to you. Today is the 11th of July 2024. My name is Murilo. I'll be hosting you today, joined as always by my sidekick. Is it okay to call you sidekick, Bart? I am, right? I don't know. Is it still on the podcast? Maybe. Outside the podcast, not sure. Always with us, making sure we are on our toes and on track, is Alex. She's saying hi, just trust me. And that's us. That's just us today. And Bart.

Speaker 3:

That's just us three today. Should we just jump in.

Speaker 2:

Let's jump in. There are some things that did happen. Well, I think Chrome has a small AI model, and some other things, but we haven't had the time yet to cover them very thoroughly, so we'll maybe check that out first before covering it here on the pod. But there are a few things that happened. Already going on to the tech corner, sharing some cool stuff that we've seen, some things that we've touched upon. A wise man once said, a library a week keeps the mind at peak, right, Bart? Yes, so what do we have? Maybe I'll show this one. I put it here, but actually you are the one that shared this one with me.

Speaker 3:

Bart, I'm curious to see what you're going to share now. Bookmarklet Maker? Oh yeah.

Speaker 2:

Right, so maybe this is actually the Git repo. For people following the video, you can actually see the Git repo here: Bookmarklet Maker. I think there are bookmarks and there are bookmarklets.

Speaker 3:

No, there are bookmarks and there are applets. So a bookmarklet is like the combination of those. It's a combination of those.

Speaker 2:

Okay, so maybe a bookmark.

Speaker 3:

I imagine that most people here are already familiar with bookmarks. A bookmark is, if you're in your browser and you say bookmark this page, you get a button that is basically a hyperlink to that page that you bookmarked. Exactly. And what is a bookmarklet? What a lot of people don't know is that the hyperlink to a page in a bookmark can also be a small piece of JavaScript code. Yes, and that can.

Speaker 3:

Then, if you click that bookmark, it can actually do something on the page that you're on. Yes. So let's say you're on a web page and you write a minimal piece of JavaScript code that just fetches the title of that page and gives it back to you in a pop-up or something. That type of logic you can save in a bookmark, where the content of the bookmark is basically the JavaScript code. Yes, and you call it a bookmarklet.

Speaker 2:

Indeed. The way I think of it, and actually I don't know if this goes on the screen sharing, no, it does not, but if you open and inspect the webpage with the Chrome dev tools or something, there's also a console where you can write JavaScript inline. The way I understood it, it's kind of like that: everything that you could run in that console is what you could also run in the bookmarklet.
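
For illustration, here is a minimal bookmarklet of the kind described above: plain JavaScript saved as the bookmark's URL, prefixed with javascript:, that pops up the current page's title and address when clicked. This is just a sketch, not code from the Bookmarklet Maker repo.

    javascript:(function () {
      // Runs on whatever page is open when the bookmark is clicked
      alert(document.title + "\n" + location.href);
    })();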

Speaker 3:

I think that's a good interpretation.

Speaker 2:

Actually, that's the way I've also tried to debug some things, you know. So indeed, as Bart mentioned, really cool. On the Git repo there's also a YouTube video if you're more curious. So here, this is basically a quick way to create these things, right? The idea is that you can generate the bookmarklet: you have some JavaScript code that you can just put here.

Speaker 3:

Yeah, what Murilo was showing is a small app with a text input where you can put in the JavaScript code.

Speaker 2:

Yes, very minimal, but yeah, like a web app, and then you have some output here, which is just a string and all these things. And then the idea is that you can bookmark the... well, they also give a link, right, and you can bookmark these things and then you can interact with it. And actually...

Speaker 2:

It's a bit of a convenience tool to make this bookmark for you, right? Exactly. And I think, yeah, when I click generate or click the bookmark, there's an alert. It doesn't show because I'm just sharing the tab in Chrome, but there are all sorts of cool things, right? So indeed, this is just JavaScript, that's what I saw first, but, as you mentioned, you actually have access to what's on the page you're currently at, right? So if you go down below here, there are some examples, like, I guess, submit the current URL to Reddit: you can get what URL you're on, you can get the title, and then you can actually write posts and all these things, right? So there's quite a lot of cool stuff you can do with this, and this is how I would interact with things, right? For example, and then I'm also thinking a bit ahead, imagine you have an LLM, you have an API key for ChatGPT, right? One thing you could potentially do is, every time you click the bookmarklet in your browser, you take the context of the text. So let's imagine it's a blog post, and you send it to the LLM and say, summarize this for me. So then you can interact with other things that are not just JavaScript and not just on the page. So I thought it was pretty cool. Or, I don't know, maybe, Alex, when you're adding chapters to the YouTube video, we also have chapters from Buzzsprout, right? Today you have to select them manually on the page, copy-paste and maybe trim the whitespace at the end. This is also something you could automate with these things.

Speaker 2:

The other thing I use this for is, whenever I'm on a page, I wanted to take some information and post it somewhere else. So I used ChatGPT to help me write the code, because I'm not as proficient in JavaScript, and the output was already copied into my clipboard with JavaScript. So you just click the button, I also have a pop-up that says, oh yeah, we copied this to your clipboard, and then I just Command-V, because I'm on a Mac, or Ctrl-V if you're on Windows, and you have the output there. So really, really cool. Maybe I'm going too far with this, but I think it kind of opens up a lot of possibilities and I thought it was pretty cool. And actually you shared this with me, Bart. Have you used this for something more meaningful than I have?
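
A rough sketch of the kind of clipboard bookmarklet described here, assuming you only want the page title and URL; a real one would grab whatever information the page exposes:

    javascript:(function () {
      // Collect whatever you need from the current page
      var text = document.title + " - " + location.href;
      // Write it to the clipboard, then confirm with a pop-up
      navigator.clipboard.writeText(text).then(function () {
        alert("Copied to your clipboard:\n" + text);
      });
    })();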

Speaker 3:

For very ad hoc stuff. For example, I use this for...

Speaker 3:

I sometimes run my workout schedule that I have in TrainingPeaks through ChatGPT to generate new workouts. TrainingPeaks is a bit of a data collection thing where all your workouts sync to, but it doesn't have an open API and you need to become an approved developer and stuff like that. So I just have a small bookmarklet that fetches everything that is on the screen, turns it into JSON and gives me the JSON in a pop-up, and I can very easily copy-paste the JSON to ChatGPT, where I have a custom GPT that does something with it. Nice, nice.
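
For illustration, a sketch of that pattern: scrape what is visible on the page into JSON and show it in a prompt so it is easy to copy-paste into ChatGPT. The .workout selector is made up; a real version would use whatever the page actually renders.

    javascript:(function () {
      // Hypothetical selector: one element per workout shown on the page
      var items = Array.from(document.querySelectorAll(".workout")).map(function (el) {
        return { text: el.innerText.trim() };
      });
      // Show the JSON in a prompt so it can be copied elsewhere
      window.prompt("Copy this JSON:", JSON.stringify(items, null, 2));
    })();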

Speaker 2:

Yeah, I was even thinking, I mean again, it's all small quality of life improvements, right?

Speaker 2:

But I was even thinking, for the LinkedIn stats there's no API. So I was wondering, today, if you want to actually do it, you have to go to each post, scroll down to where the analytics are, copy-paste, go to the next one, copy the next one, go back. Maybe these things can also help automate some of that, especially if you have the clipboard, right? Maybe you can just append stuff to your clipboard so you can kind of...

Speaker 3:

That's a good question. I don't know if the available API allows for that, but it probably does.

Speaker 2:

I think you can definitely put stuff on the clipboard, that I've done, but then I guess you could probably take the text, append more to it and push it back. So I thought it was pretty cool. Yeah, I think it's also definitely a lower barrier than asking for an API key for something.

Speaker 1:

True.

Speaker 2:

From a bigger corporation. So very cool, thanks for sharing, Bart. What else do we have here? Harelba, or maybe RouteLLM first. Since we're talking, we did touch on LLMs, which is a bit hard not to do these days. What is RouteLLM, Bart?

Speaker 3:

It's just a quick library that I found. I haven't looked into it in detail, but I think the premise is interesting. You're going to put it on the screen, yeah? So the tagline is, if you go up a little bit, a framework for serving and evaluating LLM routers: save LLM costs without compromising quality. So a router typically is a component in an LLM setup where you say, okay, I have a certain query, a certain question to my conversational tool, and based on the topic or the intent, you're routing it to some logic. I see.

Speaker 2:

So it's like, if I have a chatbot for a financial institution, maybe I have a question about my credit card, or maybe I have a question about how to open a savings account, and then, based on the initial question, the router will say this is something about opening an account, so I'm going to hand this information over to a more specialized... Exactly. Or something.

Speaker 3:

Okay, but a piece of logic, another LLM, these types of things. I guess it could even be a department, right? It doesn't need to be a bot; redirect to the right person. Cool, okay. So what you typically see when this is used at very large scale, let's say an internal chatbot, or like the example that you mentioned, a bot for a bank that customers chat with and that needs to take a lot of actions: if you always use the best model out there, which more or less is GPT-4o, it becomes very costly. And what does RouteLLM allow you to do?

Speaker 3:

It allows you to evaluate for which types of routes you can use, I think they call it, weaker slash cheaper models, to optimize the cost of your setup without decreasing performance. I see. They have this evaluation framework, I don't know exactly how they built it, to evaluate what the threshold is where you can go to a weaker model while still maintaining the same performance, which is interesting. Yeah. So I guess it's...

Speaker 2:

But like is it all data-driven? Based on the questions and the quality of the answers, they can say, okay, this is a question that a cheaper LLM can take care of. Or is it something hard-coded?

Speaker 3:

I think they have an optimization framework. I haven't tried it out myself, but that is a good question. So they have an evaluation framework. So I assume that they have a way to test that.

Speaker 2:

Very cool. So I guess there's still like a fine-tuning kind of phase.

Speaker 3:

Yeah.

Speaker 2:

And then after that you still have a deployment, and I guess it still uses... well, you need an OpenAI API key or an Anyscale API key. So I guess it still uses LLMs.

Speaker 3:

It still uses LLMs, but it allows you to go to quote-unquote weaker LLMs where it's relevant. And I think we do this when we build these things at large scale; we do it manually by building tests for every router, for every piece of logic, and then you say, okay, let's switch this one to a weaker model to see if we still have the same performance.

Speaker 3:

But it's a very manual process, yeah, this running of tests and seeing what the threshold is: where can we move to a weaker model without decreasing performance?

Speaker 2:

I think if you have a way to automate that, that's very valuable. I think it eliminates a bit of the guesswork, right? Because you'd probably say, I think the Llama with however many billion parameters is good for this type of question, but if you have a framework that objectively assesses these things, it's a stronger argument. Pretty, pretty cool.
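
To make the routing idea concrete, here is a toy JavaScript sketch of the concept discussed above, not RouteLLM's actual API: score how hard a query looks and send easy ones to a cheaper model. The heuristic, threshold and model names are all made up; RouteLLM instead learns this kind of threshold from evaluation data.

    // Toy LLM router: cheap model for easy queries, strong model otherwise.
    // callModel is a placeholder for however you would actually call an LLM API.
    function estimateDifficulty(query) {
      // Made-up heuristic: long or open-ended questions count as harder
      var score = query.length / 200;
      if (/why|how|explain|compare/i.test(query)) score += 0.5;
      return Math.min(score, 1);
    }

    function routeQuery(query, threshold, callModel) {
      var model = estimateDifficulty(query) < threshold ? "cheap-model" : "strong-model";
      return callModel(model, query);
    }

    // Example: anything scoring under 0.4 goes to the cheap model
    routeQuery("What are your opening hours?", 0.4, function (model, query) {
      console.log("Routing to " + model + ": " + query);
    });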

Speaker 2:

And what else do we have while we're on the tech corner here? We have Harelba, or what is this? Q. Yeah, but what is Harelba? That's the user, this guy: harelba/q. Q: run SQL directly on CSV or TSV files. So CSV and TSV are comma-separated values and tab-separated values, right? Yeah, for people that are not familiar with these, it's basically just a way to describe a table. So you have A,B,C, meaning A, B and C are different columns on the same row. A very simple way of expressing this.

Speaker 3:

What is this? Q is a library, it's actually a command line tool, that I came across. It's already a few years old, but I like what it does. It allows you to basically run a SQL query from your command line against a CSV file. So you can just say q, my SQL statement, and then you point to the CSV file. It really easily allows you to explore these CSV files. And actually, how does it...

Speaker 3:

Because I guess it's a Python package that they built, but you use it as a command line tool. I didn't use the...

Speaker 2:

You specify the file name or the file path, yeah, as the table name. So, like the example here: q -d -H, I don't know what the flags mean, but SELECT my_field FROM my_file.csv. Yeah, exactly. Yeah, you can build a query and then...

Speaker 3:

So typically how I do this now, if I don't have something like this, is that I just open the CSV file in something like VS Code and then I Command-F for what I'm looking for. But I think this is a cool setup. I think it would be even better if we had something like this but with a DuckDB engine behind it, so that it's not just CSV files, but you have broader support and you can maybe do joins, these types of things.

Speaker 2:

I was going to ask exactly that. Actually, what's the difference between this and DuckDB? Because for DuckDB there is a Python package, but you can also install DuckDB as a standalone thing, I think written in C or C++, and you can use it in your CLI as well.

Speaker 3:

No, but I don't think it's as simple. Unless... I don't have a full view, but I don't think it's as simple as this, with just: this is my query, this is the file I want to query. I think that simplicity is where the value of this thing is.

Speaker 1:

Okay.

Speaker 2:

Maybe I could be wrong or maybe you can check real quick, but I thought that you could do this on the CLI.

Speaker 3:

Yeah, that could be On the yeah.

Speaker 2:

But still cool. And I think you said that this is not very new. Well, you can also check: how old is this? Yeah, these commits are from two, three years ago. Any reason

Speaker 3:

why this came up now, I don't know. It was somewhere in my newsfeed.

Speaker 2:

Cool, cool. Maybe related news, actually, something that I did see but forgot to put in the notes: DuckDB has community extensions. This is from the fifth of July. TL;DR: DuckDB extensions can now be published via the DuckDB community extensions repo. That's nice. So this repo makes it easier for you to install extensions using the INSTALL extension_name FROM community syntax. They also talk a bit more about the design philosophy of the extensions. So, for example, spatial data is something that you could do. But I also think it's a good move, quote-unquote.

Speaker 3:

So it's basically their own PyPI but for DuckDB, yeah, for DuckDB. Making it easy for the community to share stuff with others. Exactly that's nice.

Speaker 2:

So I think there are some published extensions up here. Crypto adds cryptographic hash functions and HMAC. H3, lindel, PRQL, which I didn't... for running PRQL commands, I don't know what that is, scrooge and shellfs. Cool stuff, huh? I also think it's a good move because it kind of frees them from having to do all these things. Is there any validation of new stuff that is published, or is it just a free-for-all?

Speaker 2:

so if you go here, this is their community extensions repo, so I guess you need to make a merge request. Right, there is a pull request here.

Speaker 3:

Oh wow, it all ends up in one single repo. Yeah, I think so.

Speaker 2:

We have all the extensions there so people can install stuff. So there's probably some quality assurance. Yeah, probably, yeah. But at the same time it's maybe a bit dual, because if there's an issue, they can say, oh yeah, but this is open source, it's not on our roadmap. So there's a bit of an assurance.

Speaker 3:

But maybe they can still delegate a bit. At the same time, it's a big bottleneck if someone needs to approve all these merge requests.

Speaker 2:

That's true, but don't you think it would be a bigger bottleneck if they're the only ones that can implement these things?

Speaker 3:

No, no, but I mean, the alternative is that you build something like PyPI, where basically everybody and their cousins can send something to it. Yeah, yeah, I see what you're saying.

Speaker 2:

True, true, true, true. Do you think? Actually? I'm just wondering about the how do you say, the security concerns there for what?

Speaker 2:

For this setup, you mean? With the... No, again putting myself in their shoes: you want to expand it to the community. If you just open it up, I guess the quality of the extensions may not be great, or maybe this or maybe that, but at the same time, does it reflect on your product? Is it a DuckDB problem? The same way that, if you have shitty PyPI packages, it's not a Python problem. No, because PyPI is very much like...

Speaker 3:

Everybody can send something. So the expectation is also not that Python is in any way responsible for it.

Speaker 2:

Indeed, and that's what I'm thinking here, right? If you want to create community extensions, like you mentioned, it's not PyPI. I'm assuming there is a reason why they went with this community extensions setup. I don't know if it's because it's easier to install or something, but I'm wondering, what reasons would you have against having a PyPI for DuckDB?

Speaker 3:

Yeah, maybe the worry that it's a free-for-all, right, if you have a PyPI for DuckDB. At the same time, without having looked into the detail at all, this gives a bit of the view, okay, these are community extensions, but they have been vetted.

Speaker 2:

Yeah yeah, yeah.

Speaker 3:

That's the vibe it gives.

Speaker 2:

Yeah, I think it gives that vibe. But you cannot really say it's vetted, and at the same time, if there's an issue, they can say, oh, but I didn't do it. Yeah, true. Right, I think you kind of either say I vetted this, or it's a community extension, I'm not going to maintain it. So it's a bit dual, but still, I think having these plugins is a good move. I saw that Polars also had some extension stuff.

Speaker 3:

Okay, that is also like.

Speaker 2:

Similar to this setup? No, I think... well, I don't know exactly the whole setup, maybe I need to look into it more. I know you write the extensions in Rust, actually, and then it has a certain contract, and then I think you can install it like you're installing a PyPI project, so it can be a local thing. And actually, after that, a lot of different extensions appeared, like for data science, for business data.

Speaker 2:

But I think it was a good move in the sense that it opens things up for people's needs. You don't have to guess what the community wants; they can kind of do it themselves, and it frees you from the work, right?

Speaker 2:

So I thought it was cool, looking forward to seeing what else pops up there. What else do we have? Maybe semi-related to DuckDB. Not DuckDB, but semi-related to this strategy of not taking accountability for things. I read this article, Apple Intelligence and AI Maximalism. It's a fairly long article; basically it's a reflection on Apple Intelligence. I did touch upon this last week, but I wanted to zoom in a bit. Reading this put a lot of stuff in perspective. Right, it's how, for example, Apple...

Speaker 2:

I think we can agree that Apple has a reputation for releasing very polished products. Okay. I also think we can agree that Gen AI hasn't always delivered on that polish, let's say, and Apple took a long time. So in the article they mention Apple Intelligence. Apple also has a lot of large foundational models on Hugging Face, and they're not bad at all; it's not like these models perform way worse than the other models there. Okay, but you still can't, you don't interact with them directly, in the sense that you don't chat with these things.

Speaker 2:

Even in the Apple Intelligence keynote, they show that the AI part is actually about having context being sent from one app to another. Right? Like, you create an emoji of your mom and then you send it to her, right? They're two different apps, but you have the context and you can share things. And then he kind of reflects on that. Well, two things.

Speaker 2:

One, if you do need to chat with a chatbot, they're actually using ChatGPT today, which frees them from the accountability of hallucinations. Okay. And I remember we also mentioned about Apple Intelligence that it was a bit strange that Apple was naming competitors or other companies within their keynote, which I thought was a bit of admitting defeat. But at the same time, by announcing it, if you ask Siri something, Siri is not going to say, oh, this is what I found on the web; they're going to say, oh, this is what ChatGPT tells you.

Speaker 1:

Yeah.

Speaker 2:

So then, if ChatGPT tells them to put glue on pizza, it's not Apple's reputation on the line, it's ChatGPT's. And everything that Apple does for Gen AI is about services, APIs, so it's more constrained.

Speaker 3:

So the hallucinations, like, what Apple does themselves...

Speaker 2:

You mean? Yes. For example, if you say, Siri, send a message to this person, it will just send a message, right? So there won't be a hallucination; it's not like it will do something completely different. All in all, what they're saying is, because the actions from the Apple Intelligence AI, I guess the Apple LLMs, are more constrained, it won't hallucinate as much, still creating that very polished experience. Okay, and they're still allowing the power of LLMs, the very unstructured stuff, but they're delegating the potential consequences to the actual ChatGPT, and they may have Gemini later, right? So in that way, they still have...

Speaker 2:

They can still maintain that very polished look and still give the users the ChatGPT capabilities, right? Which is a different take, because when I first saw the keynote, my first impression was that Apple was kind of admitting that it's behind on AI. But I think when you look at it this way, it also makes a lot of sense to make sure that where they themselves build something on AI...

Speaker 3:

That is really polished, exactly.

Speaker 2:

Right, it won't. They will never do these goofy mistakes of, say, putting glue on your pizza.

Speaker 3:

Right, it will never do these things. Because what I do have the feeling, that you see popping up more and more in the newsfeeds, is a bit of a disillusion with Gen AI. Like, oh yeah, so much promise, but where is the delivery? And I think it has definitely changed, but at the same time... a year ago, well, how long ago did we get ChatGPT?

Speaker 2:

a year and a half ago yeah, a year and a half ago, I think it was like wow, is this possible, right?

Speaker 3:

Yeah, and today the wow factor is a bit gone. Yeah, yeah. And then you ask ChatGPT, where can I get fries on Sunday morning at eight o'clock in Leuven?

Speaker 3:

And then it will give an answer and it's like 90% wrong. Yeah, yeah. I think that bothers us more now. And that bothers us more, yeah, exactly. And I think that's a smart take of Apple: those types of things where there's a big chance of hallucination, and where you have this feeling, yeah, this is not creating value for me, to offload that to someone else, exactly. But at the same time, where it does create value, I think, for example, speech-to-text, where they can very much improve on what they had with Siri, for example.

Speaker 2:

Exactly. No, I agree, but I also think, well, there's another reason why you wouldn't just use ChatGPT behind the scenes and put a layer on top and say it's Apple, right, instead of actually exposing ChatGPT all the way.

Speaker 3:

Yeah, yeah, yeah Right.

Speaker 2:

Which I guess is a bit of a bad deal, right? The good things are ChatGPT, but the bad things, which I think, like you said, bother us more today, get linked to the ChatGPT image, not the Apple image. Yeah, that's true. Right. The other thing that they mentioned that I thought was pretty interesting is that they're offloading the cost of compute to users. Because... in what way?

Speaker 2:

Because they frame it as privacy. But today, Apple Intelligence is only for the iPhone 15 Pro and later devices, and they say it's because of the hardware. Right. Which I think is good for different reasons: privacy, also efficiency. But that also means that if people start using the AI stuff a lot, it first runs on your device.

Speaker 3:

Like the energy, you're paying yourself.

Speaker 2:

Basically the energy, but also the hardware, right? Like, if they push up the price, they say, yeah, but this has a very expensive chip to run your AI models. You're paying for it.

Speaker 3:

Yeah, that's right.

Speaker 2:

You're not sending it to a different server. So also the cost of scaling these things up: if everyone starts using AI, the ones paying are the users. They're shifting the cost of running these models to the user, which I also thought was an interesting insight. I didn't think of it like that.

Speaker 3:

Yeah, I see what you mean.

Speaker 2:

It's not as costly to run these models, because they're running on the device, from Apple's point of view.

Speaker 3:

But then we would see more significant price increases.

Speaker 1:

If they, yeah, I mean we see a price increase with every iPhone that comes out right.

Speaker 2:

Indeed. We've never really complained, or it's like, well, I always say, well, this is expensive, but it's really, really good, you know. And Apple won't see spikes in the server costs, or maybe they will, because I know they say that if it's not enough on the device, they use the cloud, but there's another layer to it, right. And now they frame it as privacy, but there's a lot in it for them to do it this way as well.

Speaker 2:

I see. Yeah, which is another thing that, when I was reading the article, I was like, well, that's actually true. So yeah, I thought it was very insightful, a lot of different points of view that I didn't really think of. Let's see how hot our iPhones will get in the future. Sure, it's like the Apple cases will have a fan on them. Yeah.

Speaker 3:

Some cooling fluid around them, yeah. When it's winter and you put them outside on your windowsill, the performance gets better. Like, when it's winter you turn off the AC in your house.

Speaker 2:

You just leave your phone there doing some computations. Yeah. Well, yeah, I thought it was a cool, interesting read. I also wonder if more people are going to take this approach. I'm going to let you introduce the next one.

Speaker 3:

I'm gonna take one minute.

Speaker 2:

One minute, okay. So the next topic is about glue. Huh, glue? Yes, it's about glue. I'm going to talk to you, Alex, I know you don't have a mic on. Yeah, okay, cool, you're going to join us in Bart's absence. This is an article that is not necessarily new, but it's something that someone shared semi-recently.

Speaker 2:

I thought it was pretty interesting. The blog is called No Idea Blog. I'll go over the slides here to set the scene a bit. This is also a talk, so the slides are here, but you can also see the link to the actual talk. It's called Technical Leadership and Glue, I don't know... so it's from a LeadDev conference, I guess. So I'll go over it quickly. To set the stage: are we ready, Alex? Yes. All right, so let's talk about being glue. They also talk about... this is a software engineering leadership talk. So they're talking a bit about the different skills and profiles you need on an engineering team, right? And then she also introduces glue work. I'm going very quickly, so for people that are actually interested, I also advise you to watch the talk.

Speaker 2:

Glue work, basically, is something expected when you're senior, but it's also risky if you're not senior and you start doing glue work. What is glue work? Maybe I'll jump ahead a bit. Glue work is a bit of everything that needs to happen for a software project, a software team, to be successful that is not necessarily engineering work; it's not programming, it's not all these things, right? So there's a cautionary tale here. She does say that this is not a true story; it's more of a composite of true stories. So the idea is that there was a software engineer. She just joined the team, the team is very friendly but busy, so she feels a bit bad because she's also trying to contribute, but because she's new, there's a lot of stuff that is not documented. And then she has a win.

Speaker 2:

She realizes, when she's talking to the customer who's going to use that piece of code, that maybe they're not actually looking at the right problem. So they set up a meeting with the customer and the team and they actually get on the right track. She also remembers how she had a hard time being onboarded because everyone was really busy, so she takes up the onboarding role. She also notices that there are a lot of crashes, so she pushes for more standardization, more unit tests. She also checks in, because some people are complaining about deadlines and whatnot. And she also unblocks some people to deploy some work.

Speaker 2:

Right, basically the awesome coder, she calls it here. She helps that person have the information and set everything up and not be bothered with other things. Anyway, this is a bit of a cautionary tale. The idea is that there's a promotion coming soon and she's hopeful that she will have a shot at it; she's doing a lot of work that is considered senior work. But it turns out that they're actually promoting the people that wrote the code, the awesome coder, or the person that designed the systems, the systems engineer, right? And then it comes as a bit of a surprise. So this is a bit of a cautionary tale. Glue work here is basically everything that needs to happen: setting those meetings, finding out what the actual problem is, and all these things. So glue work is non-coding tasks that are crucial for the success. Exactly.

Speaker 2:

Right, but even things like onboarding, right? You have a new person on the team, and that person also needs to be brought up to speed. So there's a question of whether this work, if someone that is not as senior does it, should qualify that person to be a senior engineer, right? And that's when glue work for someone that is not senior can actually hurt their career. So the argument here, that she's making, not me, just to be clear, is that when you're a more senior engineer, people expect you to set the direction of the project, to take care of these things, to make sure everyone else is efficient. But if you're not senior and you take up this role, because you're not coding, you're not contributing, you're not doing the value-adding tasks, they don't see that as actual productive work. So it's a matter of what people expect you to do versus what you're doing.

Speaker 3:

I think this also very much depends on the environment you're in. I think you're talking about very stereotypical, purely engineering teams.

Speaker 2:

Yes, and I think this talk is also from a while ago. Let's check, do we have a date here? Oh, we can actually... yeah, five years ago. So the YouTube video is from five years ago. I also feel like... the reason why I was drawn a bit to this is because I feel like I do a lot of glue work today. Also, this is the stereotypical calendar of someone that does glue work.

Speaker 3:

It's very hard to actually... oh wow, when I see this, maybe I also do glue work. Yes, I probably do.

Speaker 2:

I have a lot of meetings, right, and the time in between is the time that you would code. But, uh, realistically, if you have one hour, two hours, per day in between meetings, it's not really enough time to really get in the code and start being productive, right?

Speaker 3:

So, yeah, other things that I thought were interesting... But isn't it, if this is the case, just a wrong definition of the role, if you say that you're only adding value when you're coding?

Speaker 2:

yes, so that's the thing. So actually, she talks about this later.

Speaker 2:

So one piece of advice is that... wow, I don't think I can find it, but in any case: sometimes this happens, people say, oh, you're so good at communicating, you're so good at soft skills, you should be a manager. That's usually people's reflex, because then, if you change your job title, you match the job title to the work you're currently doing, instead of matching the work you're doing to the work you want to do with your job title. And then again there's a bit of a decision for people, right? Do you actually want to... what do you want to do? Where do you want to focus your time?

Speaker 2:

Right. Yeah, I think for me, personally, I did say I'm doing a lot of glue work, but I do think today this is expected of my role, right? I do feel like I'm someone that has a technical background-ish, I guess, sometimes, most of the time, hopefully, but I'm not doing that at all today. But I do feel like this is expected of me and I do think that this reflects on the evaluations and stuff that I have, because it's an expectation. I do think that it would be a problem if you have a team, say five people in a room, and one person takes this role, but there's no clear distinction of why this person is doing it and that person is not. No, I agree. Right. And then she's also saying... well, maybe before that...

Speaker 2:

One other thing that I thought was interesting and worth mentioning, and I know, Bart, this is a bit the focus point here: there were studies done showing that women volunteer more for doing this glue work.

Speaker 2:

If you have a group of people... and again, actually the study says women volunteer more for tasks that don't lead to promotions. Exactly, because that's the thing, yeah: if you do glue work and you're an engineer and it's not expected of you to do this glue work, you're not going to be promoted. That's the point of the talk, but I don't agree with the original premise that this is the case in every engineering setup. No, no.

Speaker 3:

I think we're probably quite different from a standard engineering setup, in that we are a consultancy company. I think glue work here is recognized more than coding, because coding is the default. I think it draws people's attention if someone does something outside of that coding.

Speaker 2:

Yeah, I also think that today, again, this talk was from five years ago, I haven't experienced much of that, like, you're in a group with four or five people, everyone has the same role, and one person is actually taking up all these things. But I thought it was... well, I thought it was interesting. The person who did this talk is also a woman.

Speaker 2:

She shared this study. She also mentioned an interesting part of that study: if it's, say, five engineers and they're all male, they have no problem finding a male volunteer to do this work. But if there is a woman, they observed that men were less likely to volunteer, because, in the back of their minds, rationally or not, they expected or waited for the woman to volunteer. Wow. So there's a bit of a social dynamic there as well, which can also hurt the careers of women in these technical roles, right? So yeah, and that's the other thing. What she proposes here is that the glue work should actually be split across the whole team, which I think... and that's a bit of the question that I wanted to ask you: I think this is a nice idea, but I don't know how realistic it is. I think it really depends on the tasks.

Speaker 3:

If it's about, let's say... I'm having a hard time coming up with examples, but let's say coaching more junior people on how to code, within the context of your project, I think that's probably something everybody is expected to take up if you're more senior. I think so, but at the same time, how do you split that evenly, right?

Speaker 3:

Sure, but take another example. From the moment that you say, okay, for the project that we're doing, stakeholder management with the counterparts from the business, that is super valuable, that is a specific task, right? You need to... yeah, I see what you're saying... you need to give clear ownership. But it's essential to the project. That's to me the biggest thing.

Speaker 2:

Yeah, I think if you try to spread the things evenly, the ownership aspect of it is less clear. Like, if you say everyone's responsible for coaching and then no one coaches that person, who's responsible?

Speaker 2:

Yeah, indeed, that's a fair point, and responsibility, exactly. And I think when there's stakeholder management, usually you do want someone to take ownership and you do have someone that you go to to ask questions, right? So I think in those cases, yeah. And that's why, to me, you can have different tasks and different responsibilities, right? Like, okay, you're the main responsible for coaching, you're the main responsible for making sure we're going in the right direction. So I think it's more of a vertical thing, not to say that all these things take the same time. But I also wonder how successful a team would be if you say, okay, we're going to split all the glue work evenly across all the team members.

Speaker 3:

Then you need to be very explicit on expectations.

Speaker 2:

Yeah, indeed, and I think again, it's easier to be explicit if you say you're the one that needs to make sure this person is onboarded or coached, right, than to say all of you have to do one-fifth of the onboarding.

Speaker 3:

Yeah, and I think in a lot of realities this is also not the default, that all these tasks that you mentioned here are seen as non-value-adding. Yeah, I think it depends. I think it depends very much on... this sounds to me like someone that is working in a tech company, in one of the fifty coding teams, engineering teams, of the tech company, where it's... yeah. But I think that is often...

Speaker 3:

You're often also in a bit more of a holistic environment, right, where tech is in a supporting mode, where you need to divide... yeah, where there is a clear governance around coaching. Yeah, no, but I agree.

Speaker 2:

And I also agree that I would be very surprised if I heard from a colleague that they said, yeah, I'm doing all this glue work, or however they want to call it, but they don't see me as a leader. Yeah. Which is something that kind of came across in this presentation, but I don't know if that's the reality today.

Speaker 3:

I mean, not saying that it doesn't happen, right, maybe someone can contradict me, but I'd be very surprised if I heard that from someone. But I do think, for a tech company where there's this very strong engineering focus, the best coder is the most visible. Yeah, but still... it's maybe a bit of a black-and-white take, but I think, because that's a bit what she's saying, coding is valued more than other things.

Speaker 2:

I think that's very dependent on the culture of the company that you're in. Yeah, but I think, especially if you're an engineer... I do think if you're leading the project, like, you need to do this, you need to do that...

Speaker 2:

This is what the customers want: to me, that is leadership. Yeah, but I agree, I agree. Maybe you're not the most... I think, maybe in my head, you may not be the most valuable person, but you are the leader. Maybe the most valuable person is the coder guru that can get everything done, but he's not the lead. I still don't see that as the leader. But if it is not valued...

Speaker 3:

That is a problem, right, within the company. Yeah, for sure. If they're looking at the end of the month at how many story points this person solved, and this glue work is not translated into story points, then that is a problem. Yeah. Right, no, no, I completely agree.

Speaker 2:

I completely agree, but I also think it's arguably harder; it's not as common. I think it's harder to know, hey, we're focusing on the wrong stuff here, let's take a step back, than to say, how do I write this application? You know, like the thinking, the questioning yourself, when there's no moment that people are asking you for it, but just thinking, oh, maybe we shouldn't be doing this, or maybe we should be focusing on that. I think that's way harder to acquire or to have than a coder guru, right? So, all right, and I think that's it for today. Maybe we can talk a bit about the next weeks. Yes, summer is here in Belgium, finally, after some delays, let's say. I saw Alex...

Speaker 3:

She was looking... so, super quick: during the summer months we're going to try to be there. Yes, it might take a bit of a different shape, might be less regular. Indeed, it may not be live streamed.

Speaker 2:

May not be live streamed, may not be as timely, like a newsy thing, but we still... we don't want you to forget about us.

Speaker 3:

We might play around a bit with different concepts, see what sticks, see what doesn't. We will be here, maybe not all three of us for every episode, but you can expect us to be back regularly, weekly, as of September, right, I think?

Speaker 2:

that's a fair second half of September.

Speaker 3:

Second half of September.

Speaker 2:

What is in the first half of September? I have some commitments, really.

Speaker 1:

Yeah, can we go into this?

Speaker 2:

Can I get an applause there again? He's getting married. I wasn't actually sure if I should put the applause or the money one, anything, you know. But yeah, so I'm not going to be...

Speaker 3:

Yeah, but we'll be back. So good things are coming in the coming months, right? Definitely, definitely. And also, enjoy the nice weather.

Speaker 2:

Hope everyone is also enjoying it. Enjoy the summer, everybody. Yes, all right, thank you. See you next time, ciao.

Speaker 1:

You have taste in a way that's meaningful to software people.

Speaker 3:

Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong.

Speaker 1:

I'm reminded, incidentally of Rust here Rust. This almost makes me happy that I didn't become a supermodel.

Speaker 3:

Kubernetes.

Speaker 1:

Well, I'm sorry guys, I don't know what's going on.

Speaker 2:

Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here Rust Rust Data topics.

Speaker 1:

Welcome to the data. Welcome to the data topics podcast.
