What's new in Cloud FinOps?

WNiCF - March 2024 - News

The FinOps Guys - Stephen Old and Frank Contrepois


In this episode, Frank and Steve discuss various news and updates in the cloud industry. 

They cover topics such as 

  • New AMD instances in Azure
  • Holographic stickers <- most important news :)
  • Azure Cache for Redis
  • BigQuery cost savings
  • Amazon RDS Custom for SQL Server
  • Azure SQL Managed Instance improvements
  • Changes in AWS cost management
  • AWS Cost Categories
  • Retroactive application of cost allocation tags
  • AWS Billing and Cost Management Data Exports with CloudFormation
  • Reserved instances for Amazon Aurora PostgreSQL Optimized Reads
  • Dataflow streaming committed use discounts
  • Azure updates for enterprise customers
  • Azure AI Speech
  • Amazon SageMaker Canvas pricing
  • AWS Compute Optimizer supporting more EC2 instance types
  • Free data transfer out of AWS and Azure
  • Tagging when registering or copying AMIs in Amazon EC2
  • Updates to the FinOps Framework
  • Azure carbon optimization tool for tracking emissions
  • Increased default quota for CloudWatch Logs APIs
  • AWS extending the lifespan of servers

They also briefly discuss Intel's losses and changes in the chip industry.


Frank (00:15.893)
Hello everyone, welcome to the March news for What's new in Cloud FinOps, the podcast for FinOps that you are all following, obviously, because you're listening to this one, with myself, Frank Contrepois, and my great friend. There we go, hello.

SteveO (00:31.758)
Stephen Old. Hi Frank. It's interesting, isn't it, we say hi but we've always been chatting for a bit beforehand to work out what on earth we're going to do. Not for very long, listeners, as you probably know, but we talk a bit about the news. We've probably done more independent research this month than I can remember ever having done. We've done at least 10 minutes. We're not reading the things. Yeah.

Frank (00:38.453)
Yes.

Frank (00:51.157)
Yes, yes, you've done research on AMD pricing. I looked at Aurora pricing pages and discovered a new type of RDS for me, at least new for me.

SteveO (01:01.838)
Yeah.

Yeah, yeah, some of my guys knew about it. I didn't know about it either. Me and you thought it was brand new, but it sounds like, yeah, we'll get into that. Also exciting, Frank, I've not shown you these. I think I sent you a picture on WhatsApp yesterday, but I accidentally ordered holographic stickers. So we've got some stickers for people we meet. And I clicked a button to see how much more expensive it would be with holographic. And it didn't seem to go up. And I thought I went back, but it has stayed this. So we've got.

Frank (01:11.637)
We'll get there.

Frank (01:19.989)
Oh

SteveO (01:33.774)
small amount of holographic stickers so if you see us at the events coming up let us know and we'll share those with you. I don't really know where to put it on mine because I don't put it on my laptop anymore I've got to work that out. But shall we do the first bit of news which means you need my noise?

SteveO (01:52.11)
Instances and compute, it's me. I'm not used to being first. It's because I for once put mine in before you put yours in. So we're starting with Microsoft and we've just briefly touched on this one. And I've got the wrong one open, but this is some new instances that are available. These are the new-generation AMD VMs: Dasv6, Easv6 and Fasv6. Oh my goodness. If we edited, that would have been taken out.

Frank (01:53.813)
Yes.

Frank (02:18.581)
Ha ha!

SteveO (02:20.75)
So these are released on the 4th of March. They are the latest ones based on the fourth-generation EPYC 9004 Genoa CPUs. What's most interesting to us is when we looked previously at the pricing changes for moving to these new AMD Gen 4s.

Frank (02:42.805)
Yeah, indeed. Yeah.

SteveO (02:46.446)
AWS had gone up in price. So the M6a to the M7a had gone up by 34%. They, yeah, they stated it's 50% more performant if you look on the website, but it went up significantly. So I did the same thing here and I looked at the DAS and the EAS. The FAS isn't available in version five, so I can't compare. And they've gone down around 4%.

Frank (02:51.445)
Yeah, it was massive.

SteveO (03:15.694)
It's a little bit.

Frank (03:16.725)
So 4% down on instance types using AMD chips in Azure and 30% up on AWS. Wow.

SteveO (03:23.054)
Yeah.

Yeah, that's exactly how this is reading. Now it's not exactly 4%. It goes from anything from a 4.42% to a 3.92% reduction, but they've all gone down. Yeah, exactly. But that'll be down to the rounding, I imagine, because it goes, you know, it's all done by the second, isn't it? So the other thing that's interesting: no savings plan pricing for the new instances. So we can't compare that.

Frank (03:36.053)
Yeah, someone played with Excel, yes.

Frank (03:46.709)
but yeah.

SteveO (03:55.694)
And the spot pricing is significantly cheaper on the old one, which makes sense if we think about why they use spot, but you're getting about a 90% saving on the V5 spot instances and only a 75% saving on the V6 spot instances.

Frank (04:10.837)
But what is the difference between price between the two vendors? If you take

SteveO (04:17.262)
Oh, OK. Well, unfortunately, one's in hours and one's in months. So let me just quickly do this: times, what are we going to say, 730?

Frank (04:23.061)
Ah okay, because it's interesting. Yes, 730 for months.

SteveO (04:31.31)
Oh, which machine am I? Ah, I haven't looked at what machine I'm comparing. That's a mini. Oh, which is a one four. So, no, no, no. I think that's gonna be maybe comparable. So it looks to me.

Frank (04:37.173)
Right.

Frank (04:45.045)
Yes.

SteveO (04:46.734)
that, oh, is that based on a large? Hang on, oh, do you know what? I actually kept this open. The seven large is the price, which is two eight, and I'm comparing that to a two eight as well. This is Linux for Linux as well. Let me just make sure I've done that correctly. Yes. Ah, so the Azure price is lower. However, the...

SteveO (05:15.118)
Hmm, I'm just trying to see if there's any storage included. Nope, neither of them have storage included.

Frank (05:18.517)
And is it lower by 30% or is it...

SteveO (05:22.382)
It is $84 versus $60.

Frank (05:26.485)
Ah yeah, it is massive. Yeah, okay. Yeah, yeah, yeah. Okay, now I was just trying to understand if AMD was heavily discounted in the previous AWS generation so that the increase would be...

SteveO (05:30.51)
It's

SteveO (05:35.63)
Well, let's compare. So the previous generations were very similarly priced, 63.01 and 62.78. Now, I think I'm including, I think I'm looking at general purpose. Apologies if I'm comparing cross-purposes, but they're both two CPU and eight RAM. But yeah, the AWS one has just jumped to 84 and the other one's gone down to.

Frank (05:41.909)
Okay.

SteveO (06:04.462)
So I think this is the biggest price gap change I've ever recalled. But anyway, there we go. So that was the first piece of news. And that's all of the research that we talked about done. I'm exhausted. Yeah, that's as far as we go.
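For anyone following along at home, here is a minimal sketch of the arithmetic behind that comparison, using the approximate monthly figures quoted in the episode; the hourly rate used for the conversion is illustrative, not an official price-list number.

```python
# A rough re-run of the on-air arithmetic, using the approximate monthly
# figures quoted in the episode (not authoritative price-list numbers).

HOURS_PER_MONTH = 730  # the usual 730-hour month used in cloud pricing maths

def monthly(hourly_rate: float) -> float:
    """Convert an hourly on-demand rate into a monthly figure."""
    return hourly_rate * HOURS_PER_MONTH

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# AWS shows hourly prices, Azure shows monthly, hence the 730 conversion:
print(f"$0.115/hour is about ${monthly(0.115):.0f}/month")

# Previous-generation 2 vCPU / 8 GiB AMD instances were almost level;
# the new generations moved in opposite directions.
aws_prev, aws_new = 63.01, 84.0      # AWS AMD gen to gen, $/month (approx.)
azure_prev, azure_new = 62.78, 60.0  # Azure AMD v5 to v6, $/month (approx.)

print(f"AWS change:   {pct_change(aws_prev, aws_new):+.1f}%")     # roughly +33%
print(f"Azure change: {pct_change(azure_prev, azure_new):+.1f}%")  # roughly -4%
```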

Frank (06:10.965)
Yes. Yep.

Yeah, that's done. That was the first news. That's why we dedicated lots of time to it and then now it's over. So the next one is: I have lots of instances on AWS to announce. And they are all metal instances. So this is the C7gd, the M7gd, the R7gd. They're all being released on the 6th of March, but also, on the 26th of March, the C7gn,

SteveO (06:32.366)
Hmm.

Frank (06:44.949)
so they finally installed the network cards into those instances, because that's probably what happened. So the C7gd, M7gd, R7gd and C7gn metal instances are available, which means that you really have the dedicated hardware for you. You can go, if you write kernel kind of stuff or you really go deep into using the hardware,

then that's what you need, probably also for licenses. Yeah. So that's the two news for me and I think we're done.

SteveO (07:16.078)
Also kind of licensing benefits. Yeah, licensing benefits. Yeah.

SteveO (07:23.662)
Yeah?

Data and databases. So my first one is the generally available release of an additional cache size for Azure Cache for Redis Enterprise. Actually, I lied, I've done a little bit of research on this. So they've reduced, basically they've created a lower size. So it starts at a four gig cache size now. Previously, the smallest was a 12. Only on Enterprise, right? If there is, it goes from kind of Basic, let's see if I can.

I don't know, I've got the pricing open still, but it goes through, I'll have it open. Basic, then standard, then premium, and then enterprise. I got to premium and thought that's where it probably stops. And I was like, there is no four gig cache. What I've done here is I've had a quick look. And so it goes from E5 to E10, and the E10 is 12 gig cache. But.

In terms of price performance, you're far better off with the 10. So this four gig makes sense if you're not using the 10 and you need something smaller, but they've made it still be that the bigger you go, the more cost efficient it is. That remains slightly the case going from an E10 to an E20, which is 12 to 25. They are pretty much exactly one is double the other. You're getting one extra gig capacity, but.

Frank (08:49.589)
Good.

SteveO (08:51.534)
The smaller one is very much there for if you just need something smaller, but you still need that level of service around the enterprise piece, but your price performance is going to be less.

Frank (09:02.261)
So if you're not buying Enterprise, you get only the 12GB.

SteveO (09:06.894)
And note, so for Premium, you've got 6, 13, 26, 53, and 120. For Standard, you've got 250 meg, one gig, 2.5 gig, six, 13, 26, and 53. And it's the same for the Basic version. So this is more, I think they had only done the Enterprise for bigger stuff. And some people asked for, you know, I want that level of service, but I don't need that big thing. Yeah.

Frank (09:11.509)
Okay.

Frank (09:16.053)
Okay.

Okay.

Frank (09:30.357)
Yeah I want smaller, I don't need 12 gig or I need a cluster, I prefer clusters smaller than big.

SteveO (09:36.526)
Yeah, and if you want to go mad, you can do the enterprise flash ones, which start at 384 gig.

Frank (09:43.253)
I'm out for gushing, that's kind of...

SteveO (09:45.166)
Yeah, that's where it starts. The biggest is $24 an hour.

Frank (09:51.669)
Yeah, so 1 terabyte.

Frank (09:59.349)
expensive. Next is still yours but it's a Google.

SteveO (10:00.494)
Yeah.

SteveO (10:03.918)
It is. Oh, yeah, right. So BigQuery customers save up to 54% in TCO compared to alternative cloud data platforms. This is a data analytics blog released by the product manager at Google. I think I've met them. Very nice. But ignoring how nice the person is, this is based on a study by Enterprise Strategy Group. But unfortunately, this study is behind a paywall.

So we'll have to believe what it says. But across three categories, which are predictive AI, machine learning and generative AI projects, ESG found that BigQuery eliminated upfront investment and planning requirements, reduced operational costs, and improved business agility. In fact, BigQuery customers saved up to 54% in total cost of ownership compared to alternative cloud EDW, enterprise data warehouse, offerings. But.

And it does show some others on here that I'm not going to name. But what they've got included in those other pieces is quite hard to say without going into that. It might not be behind a paywall. It might be behind a register wall. But I've just got so many things emailing me already. I decided not to register to this one.

Frank (11:21.205)
Cool, next is mine. And it is something which was completely new to me. So it's Amazon RDS Custom for SQL Server supports X2iedn and R5b instances. I've never heard of the R5b instances, but that's probably me. And I've never heard about RDS Custom for SQL Server. So I did my little investigation, which is following one link, okay, that's the extent.

SteveO (11:49.71)
You went one link further than I did.

Frank (11:51.189)
And you see this, so it seems that Amazon RDS Custom is for Oracle, it is for Oracle and SQL Server. And they are supposedly made so that you can bring your own media to AWS, so that if you already have an Oracle or SQL Server license, you probably have a better, you have a...

One more way to bring that license without going into all the crazy things you usually need to do to bring those kinds of licenses. The price is per instance. You even have the T3; it goes to M5 and M6i. For example, I was quite interested that for Oracle there is nothing AMD, there is nothing else. And same for SQL Server, by the way. You have some

SteveO (12:43.086)
Interesting.

Frank (12:47.701)
RIs, which are the non-standard ones, they're one-year. So it's a completely new thing for me. Stephen was saying that, yeah, people in his company have used it. I've never heard of that.

SteveO (13:00.398)
Yeah, so one of the things that our business does is OLAs, Optimization and Licensing Assessments, I guess. It's not my part of the world. But I spoke to those guys because they basically look and say, hey, we've got this workload. It's in Oracle, SQL, IBM, whatever. And you want to move it to somewhere like AWS. Where would you put it? And.

Even got this battle of, you know, EC2 versus RDS, etc. So they've used it, the two guys I spoke to, because I was like, oh, look, it's a brand new thing I'm showing you, and they were like, yeah, we've done multiple engagements using that. So I felt a little bit silly. But then they were saying it really was aimed more around legacy things. But.

As we got more into the conversation, which hopefully people didn't hear the beeps on the call, they said that one of the customers used Oracle 19c on RDS Custom, which is a, you know, a newer version of Oracle. So it does look like you can bring newer stuff as well. I don't know if we've done any SQL stuff on there, but for the Oracle bits where you can, you know, bring that license, you've probably got a good level of discount versus paying it on demand on AWS, and they, and.

I was kind of like, well, you know, what's the point in this? And, uh, you know, arguably AWS should take care of the DB management still. That's why it's still labeled as RDS. It's just, you have a bit more access and you can bring your own media. Yeah.

Frank (14:21.941)
Yeah, and how AWS specialists can assist with cost optimization opportunities and eligibility for additional promotional credits. Anyway, it's a thing. It's new. Even a new instance type, I don't know what the B means at the end. And so that's going to be my next research: what does R5b mean? But hey.

SteveO (14:41.39)
Yeah I don't remember that being on our list.

Frank (14:43.349)
No, me neither. So, you see, we're discovering every day. That's why we do the podcast also. That's one of the reasons.

SteveO (14:48.398)
Exactly. Yeah, we've I think we've learned more today than we've learned in a while as well. Right, I think I need to make a noise again.

Frank (14:52.085)
Oh yes.

Frank (14:57.045)
and the news are yours, two of them.

SteveO (14:58.414)
Storage, just me. I'm beginning to wonder whether they were worth doing. I could have avoided one, but no, they are. So the first one is optimized costs for Windows workloads using persistent disk async replication. This is a blog around the fact that you can use the asynchronous replication of persistent disks to pilot light a DR scenario in Google.

So rather than holding two sets of machines cold running in a different region, you can, with a relatively low RPO, recovery point objective, of like less than a minute, is it? Let me just check what that says. Under one minute, you can have all of the persistent disks, both boot and data disks, available in another region

and then automatically rebuild into the region using, interestingly enough, it talks about Terraform before it talks about Google SDK. But yeah, you'd still therefore have to do that, but you could do it programmatically and your data's there. So you'd have low data loss and you'd do that and you'd basically just be paying for the cost of the replication and the disks. So that's quite cool.

I mean, it says optimize costs in the title, so we kind of had to include it, but I will let people decide whether that's going to optimize their costs. It depends on what they're using. The next one. Is.

Frank (16:28.565)
You had a SQL one in here somewhere, which was a public preview, next generation of...

SteveO (16:31.886)
Yeah, I've jumped it. Yeah, you're right. I've skipped one. Yeah, next generation of general purpose tier for Azure SQL managed instance. I'm just going to read this part of this out because I think they say it better than I can. I'm just going to skip a bit. The next generation of general purpose service tiers for Azure SQL managed instances is a major upgrade that will considerably improve the storage performance of your instances while keeping the same price as a general purpose tier.

This will greatly improve your price performance for existing Azure SQL managed instance workloads and allows you to migrate more of your SQL workloads to Azure SQL managed instance. I mean, does it allow you to do that? It just makes it cheaper if you do. And it can include support for 32 terabytes of storage, which is significant. But yeah, so you will get more for the same price. So it should be more cost.

Price performance, we'll say, rather than cost efficient. And that is that one. You haven't got any storage ones, so let's press that magic sound again.

Frank (17:37.013)
Here we go. Visibility.

SteveO (17:38.958)
Visibility, yeah, which includes things like billing connectors, tags, cost categories. Oh my goodness, I've got a load again, haven't I? So, this one was super interesting, I thought, and I put this on LinkedIn, a few people commented actually: support for the Connector for AWS in Cost Management is ending on the 31st of March, 2025. So the Connector for AWS, which was built to consolidate Microsoft Azure and AWS cloud cost data in Microsoft Cost Management, will be retired

on the 31st March 2025, and we encourage you to consider an alternative solution prior to the retirement date so that you can complete your transition on time. It doesn't tell you what that would be. I'm assuming this will be based on FOCUS and there'll be a FOCUS ingestion or something. Yeah, that's what I thought. But the interesting thing is the ability to add a new connector for AWS in Cost Management will be disabled for all customers on the 31st March 2024. So you can't make a new one now.

Frank (18:23.797)
That was the same thing in my head, but yes.

SteveO (18:38.158)
and it will be gone in a year. On 31st March 2025, the connector and cost reports containing AWS data will be lost. In addition, all AWS cost data stored in Microsoft Cost Management will be deleted. Please note, we won't be deleting the cost and usage reports files stored in your S3 buckets in the AWS console because they couldn't. But yeah, like...

Frank (18:38.421)
over.

Frank (19:00.277)
Yeah.

SteveO (19:06.222)
It's not even going to remain, it's all going to be gone. That's interesting, isn't it?

Frank (19:06.357)
Yeah.

It's interesting that hopefully, as you said, it's going to be FOCUS, but they're not saying that thanks to FOCUS we will be able to import AWS stuff, so there will be a transition to a new format and this will continue. It's: we kill it, your problem, good luck and buy something new.

SteveO (19:17.07)
Yeah.

SteveO (19:24.654)
Yeah, see you later guys. It's gonna be gone in a year. Yeah. The next one is the generally available cost analysis add-on for AKS. We talked about this when it was in preview, so I'm not gonna go into too much detail. But this Azure-native experience provides visibility into underlying cluster infrastructure costs associated with AKS workloads. Costs are broken down by Kubernetes constructs, such as clusters and namespaces, in addition to the Azure asset categories.

You can view cost allocation data directly in the cost management blade of the Azure portal. So that's cool.

Frank (19:58.613)
I didn't understand half of it, but that's me.

SteveO (20:01.518)
Um, so basically directly in Azure cost management, you can start viewing AKS costs by Kubernetes concepts. So by the workloads and namespaces, sorry, the nodes in there. Yeah. Rather than just being able to see AKS as one homogenous block or tag blocks, you can actually see it based on namespaces. So you don't have to double, double do things.

Frank (20:13.461)
Yeah, the node. Yeah, okay. Cool.

Frank (20:27.157)
Cool. And on a completely different topic that has nothing to do with what we say, I'm noticing that all my tabs just opening the news, which is only text, is 52 meg each. And some of them is 100 meg. How can a page be 50 meg? Or the browser. Anyway.

SteveO (20:37.23)
Really?

SteveO (20:43.822)
Well, so here's an interesting one. So, so this is an interesting one, right? Well, it's not interesting to anyone other than probably me and you, Frank, but unfortunately listeners, you're gonna have to listen as well. So one of the things I do is I go and find all the news and I put them, and not just FinOps ones, anything I find interesting, and I put it on, oh, the name escapes me, Frank. You use the same one now, don't you? What's it called? Buffer, sorry.

Frank (20:53.941)
fine.

Frank (21:08.725)
Maybe, ah, Buffer, yes, yep.

SteveO (21:12.558)
And then it puts out LinkedIn, right? And when I do that, when I do the Microsoft pages, it takes ages. There must be something in the coding of Microsoft pages, how they build it, which means they take ages. The AWS and the Google and anywhere else is really quick. So I do notice that my Azure pages seem to eat more. Eat more Amazon. Anyway, it's your one now.

Frank (21:33.205)
Okay.

Anyway, yeah, it's mine. It's AWS Cost Categories launches a revamped user interface. So I'm not going to say much more than that, but there is a new split view panel and it shows the costs that are not captured by the cost categories as uncategorized, which I'm surprised was not there before, anyway. But yeah, so that was on the 8th of March. If you use Cost Categories, have a look at it, it might be new and you might have to go through where the

heck is that button again. The other one is still on cost allocation: cost allocation tags now support retroactive application. So that's really cool: if you'd applied the tag sometime in the last 12 months and you've not activated it as a cost allocation tag, which needs to be done as a second step, but you want that, you can ask

SteveO (22:20.174)
Hmm.

Frank (22:33.109)
for it to be applied retroactively. So you need to file a backfill request, which seems to look very much like a support ticket, and they will be able to do it. If you look a little more at the documentation you will also discover that you cannot request a backfill if there is already a backfill happening and you cannot...

So it seems to be on a tag-by-tag basis, it seems to be you need to do one and then wait for it to happen, and there is a maximum number of requests you can ask for per 24 hours. So overall it seems to be a reasonably semi-manual process, but the feature is really cool. As far as I understand it is not going to change the previous CUR files, adding for example a column with the tag, but it will mostly impact Cost Explorer.

So if I said something wrong, please come back, correct me, very happy, as usual.
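For the curious, a minimal sketch of what that backfill request can look like from code, assuming the Cost Explorer backfill operations that shipped alongside this feature; check the current boto3/Cost Explorer documentation for the exact names, limits and response shapes.

```python
# Sketch only: assumes the Cost Explorer (ce) backfill operations released
# alongside this feature; verify names and limits against current AWS docs.
import boto3

ce = boto3.client("ce")  # Cost Explorer owns cost allocation tags

# Ask AWS to retroactively apply currently activated cost allocation tags
# from a past date (up to roughly 12 months back, per the announcement).
response = ce.start_cost_allocation_tag_backfill(
    BackfillFrom="2024-01-01T00:00:00Z"
)
print(response)

# Only one backfill can run at a time and there are per-24-hour limits,
# so check the history before submitting another request.
print(ce.list_cost_allocation_tag_backfill_history())
```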

SteveO (23:40.366)
Have you gone to the door?

Frank (23:40.725)
And I think, well yes, I do have another one. Sorry, I went back, I was on the wrong page and my mouse is all over the place. So thank you, listener, for the patience. AWS Billing and Cost Management Data Exports now supports AWS CloudFormation. So I was unclear what this title was about, but my understanding is that now, when you use CloudFormation, you can set up data exports. So what is Data Exports? It's the CUR 2.0, so it's the new version.

So now you can say in a CloudFormation script, I want a new data export. So that's quite cool. That's going to help again the deployment of everything that uses Focus in the future. That's my guess, that's the reason.
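As a rough illustration of what Frank describes, here is a sketch that creates a Data Exports (CUR 2.0) export through a CloudFormation template deployed from Python. The AWS::BCMDataExports::Export resource is the one this announcement is about, but the property layout and the SQL-like query below mirror the CreateExport API shape from memory, so treat the names, the bucket and the column list as assumptions to check against the Data Exports and CloudFormation documentation.

```python
# Sketch: deploying a Data Exports (CUR 2.0) export via CloudFormation.
# Property names follow the BCM Data Exports CreateExport API shape as best
# recalled; bucket name and query columns are placeholders to adapt.
import boto3

TEMPLATE = """
Resources:
  Cur2Export:
    Type: AWS::BCMDataExports::Export
    Properties:
      Export:
        Name: cur2-daily-export
        DataQuery:
          QueryStatement: >-
            SELECT line_item_usage_account_id, line_item_usage_amount,
            line_item_unblended_cost FROM COST_AND_USAGE_REPORT
        DestinationConfigurations:
          S3Destination:
            S3Bucket: my-billing-exports-bucket
            S3Prefix: cur2
            S3Region: us-east-1
            S3OutputConfigurations:
              Format: PARQUET
              Compression: PARQUET
              OutputType: CUSTOM
              Overwrite: OVERWRITE_REPORT
        RefreshCadence:
          Frequency: SYNCHRONOUS
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="billing-data-export", TemplateBody=TEMPLATE)
```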

SteveO (24:16.942)
Right that is cool yeah.

SteveO (24:24.014)
Yeah.

Well, if we think about what we used to have to do when we worked with Strategic Blue, we used to say to people, right, the first thing you need to do is create a CUR. Here's a manual way of doing it. And then here are some either Terraform or CloudFormation stacks to then set up the IAM permissions I need. If you can give someone both as a CloudFormation stack, that'd be really useful.

Frank (24:42.069)
Thank you, actually.

Yeah.

I think it is now doable with one CloudFormation template to do the current CUR generation, but it was missing from this new one. There is another news item, which was in big orange, but I'll just say it: a tax dashboard is now generally available for AWS Marketplace sellers. That's it.

SteveO (24:50.318)
Yeah, I think so. Okay. It was the new one. Yeah.

SteveO (25:01.582)
I think we should talk about it.

SteveO (25:06.798)
Yeah, yeah, that's fine. Yeah, we've talked about it in preview anyway, so that's fine. Oh, Frank, it's on with you. I've moved some, mate, because I think they were in the wrong place. So now it's on you.

Frank (25:15.669)
So you wanted to do me on me, huh?

Frank (25:22.261)
commitments. I didn't hear the sound very much, so...

SteveO (25:25.038)
Didn't it? Oh I did. It was very loud for me for a change actually.

Frank (25:27.573)
Interesting. So it's a really, really big thing. AWS announces a seven-day window to return savings plans. So that's really cool. It means that if you've made a mistake with savings plans, which is quite easy because there are lots of little things: you need to be in the right region, you need to buy them correctly, as we said, and also the price that you need to set is a discounted price that you cannot know in advance. So it's quite easy to make some mistakes in savings plans and until now...

SteveO (25:32.046)
Massive.

Frank (25:56.629)
you could ask, you could beg, but now you don't have to do that anymore. For seven days you can check it and say, no, that doesn't work. Now there are some caveats: the seven days need to be in the same month, and any discount that the savings plan applied for those seven days, or however many days before seven you've been using it, is going to be completely removed, and so all the effect of the savings plan that is cancelled

SteveO (26:01.71)
seven days.

SteveO (26:08.974)
Yes.

Frank (26:26.101)
disappears, which is a fair point. So...

SteveO (26:28.398)
Yeah, very fair. Yeah, cause it will just remove those lines from the CUR and then the cost will move back up to the...

Frank (26:34.101)
Yes, so someone was telling us you still have the line and you have a cancellation line, so you should, as far as I understand, from the CUR file you should be able to understand that there was a savings plan that was reversed, but it's not going to be applied. So you'll probably have a line that says this is a savings plan, this is the cost, and then another line that says this is a savings plan, this is the removed cost, but then you don't see any application of the savings plan anywhere on the rest of your CUR.
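A minimal sketch of what the return itself looks like with boto3, assuming the ReturnSavingsPlan operation introduced with this seven-day window; the plan ID is hypothetical and the operation and field names should be checked against the current Savings Plans API reference.

```python
# Sketch: returning a Savings Plan inside the seven-day window.
# Assumes the ReturnSavingsPlan operation added with this change; the plan
# ID below is hypothetical.
import boto3

sp = boto3.client("savingsplans")

# List active plans to find the one bought by mistake.
plans = sp.describe_savings_plans(states=["active"])["savingsPlans"]
for plan in plans:
    print(plan["savingsPlanId"], plan.get("start"), plan.get("commitment"))

# Return it: the benefit already applied is unwound, and offsetting
# line items show up in the CUR, as discussed above.
sp.return_savings_plan(savingsPlanId="sp-0123456789abcdef0")
```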

SteveO (26:40.206)
Uh oh.

SteveO (26:46.51)
Okay.

SteveO (27:00.91)
Wow. It's going to be interesting to see how that's going to impact tools taking this data in.

Frank (27:07.573)
Yes, well, they'll definitely, so everyone will have to do some sort of a patch to manage those line items, but normally if it's one that cancels the other, if you just sum them it cancels out and it works. But anyway, another line item type.

SteveO (27:26.926)
Cool, you again.

Frank (27:27.317)
And mine again, yes, I'm checking. Ah, so you have commitments. So AWS announces reserved instances for Amazon Aurora PostgreSQL Optimized Reads. So it's very specific. It's quite interesting, after talking about all those seventh-generation instances, that this is available for R6gd and R6id. Happy days. But there are new instances.

There are new RIs for these ones. I went to the Aurora pricing page. I got overwhelmed, let me be clear. It's, wow, it goes all over the place. But overall, there are new RIs for Aurora PostgreSQL Optimized Reads. So it's very specific. And that's another one. I think I have... Do I have another one? No, it's yours now.

SteveO (28:03.534)
Yeah, you fool.

SteveO (28:23.47)
No, yeah, yeah, we had a duplication which I've just removed. So my newest one is: save up to 40% with Dataflow streaming committed use discounts. Dataflow is an industry-leading data processing platform that provides unified batch and streaming capabilities for a wide range of analytics and machine learning use cases. I'm purposely speaking quite fast sometimes because Rob told us on our recent guest podcast, that'll be out next week,

that he listens at increased speed. So I'm going to see if he can understand any of this. And it basically goes into how this works. You get 20% off for a one-year, 40% off for a three-year. Um, it's quite, I've said this before, I quite like how Google do the blogs. It actually explains how these things work. I don't know if these CUDs are new. I don't remember talking about them. And I helped write the playbook.

Frank (28:54.773)
So you wanna force him to slow down?

SteveO (29:15.886)
So for the FinOps Foundation, I don't remember talking about Dataflow CUDs there. So I think these are new, but you can never tell with how Google put things out. Right. I'm going to blast through these next ones on this next topic.

Pricing. So first, Microsoft are excited to announce that enterprise customers with MCAs or EAs are now able to view discounts on top of their regular Azure savings plan discounts in their calculator estimates. So you can stack them, which you couldn't before. They've also made some apparently fantastic updates to Azure AI Speech, including additional commitment tiers with more pricing options, a new high-definition offer,

well, HD for, er, real-time high definition, and a new pricing setup for text-to-speech avatar and personal voice. And...

Frank (30:05.429)
You know what we should try to do? We should try to record one podcast, get the transcript and put it into one of these machines and say, now generate it back, please.

SteveO (30:13.23)
Well, the transcripts we're getting now are so much better, aren't they? So.

Frank (30:17.493)
Yes, and so if we put that into text it will be the new podcast of Robot Frank and Robot Steve.

SteveO (30:23.886)
Well, we should certainly try a translation one. I know you've put it into French a few times, but fantastic. Oh, you've got one as well.

Frank (30:33.749)
I've got one as well, which I do not understand. So Amazon SageMaker Canvas announces new pricing for training tabular models. So I don't know SageMaker Canvas, but it seems to be... So it's a no-code tool that enables customers to easily create highly accurate ML models without writing code, which is super cool if you know what you're doing. It was really cool until then. Then it says SageMaker Canvas supports numeric prediction, regression, two-

SteveO (30:37.87)
Let me open it.

SteveO (30:51.15)
Yeah, yes, sounds like magic.

Frank (31:01.077)
category prediction, binary classification, and three-plus category prediction, multi-class classification, and time series forecasting, that's the only one I understand, for tabular models. So anyway, the idea is that

SteveO (31:12.046)
Yeah, it's a bit it's a visual point and click interface where you can create flow charts, I think of what you want doing.

Frank (31:17.652)
Yes, but I was not understanding how you would use it. But the interesting bit is they changed the pricing model. So in the past you had a minimum of $30. Now they're making it by the number of... So this is tabular data, by the number of cells. And so, for example, if you have a quick-build model with 16 megs of data and 3 million cells, it can be less than $2. So compared to what you had before, $30 minimum, that's an improvement.

SteveO (31:30.03)
Oh.

Frank (31:44.533)
And as usual, you discover that all of that stuff, you see five instances. So yeah, I have the optimizer twice, yes. Then, what one do we have? Where are we? I'm lost. That's it. Savings! And they're both mine. So the first one is AWS Compute Optimizer now supports 51 EC2 instance types.

SteveO (31:55.054)
Right.

Frank (32:14.261)
So that includes the M6id, the C7i, the R7i, X2idn, X2iedn, Hpc7a and others. You can see that, but yeah, so overall that's positive because Compute Optimizer is helping you save money, making sure you are choosing the right thing.

SteveO (32:28.75)
load.

Frank (32:42.197)
So the more instances it supports, the better I'm... yeah.

SteveO (32:44.718)
Yeah, it has a great feature that allows you to choose to stay in the instance family as well. So that's quite a nice piece where I think you could stay in, like if you're on a 6i, it would only recommend a 7i. No, it wouldn't go outside of that. It might only be within 6i. So yeah, that was quite a nice one. And they talked about the deduplication in the Cost Optimization Hub recently as well, which I think was very cool.
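If you'd rather pull those recommendations programmatically than click through the console, a minimal boto3 sketch looks something like this; the stay-in-family behaviour Steve mentions is a preference you configure separately, this just lists the findings.

```python
# Minimal sketch: list Compute Optimizer's EC2 rightsizing recommendations.
import boto3

co = boto3.client("compute-optimizer")

resp = co.get_ec2_instance_recommendations()
for rec in resp["instanceRecommendations"]:
    current = rec["currentInstanceType"]
    finding = rec["finding"]  # e.g. over-provisioned / under-provisioned / optimized
    options = [o["instanceType"] for o in rec["recommendationOptions"]]
    print(f"{rec['instanceArn']}: {current} is {finding}, consider {options}")
```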

Frank (33:00.981)
Okay, nice, thanks.

Frank (33:10.837)
Next one is still mine and is AWS following everyone else on this one, and strangely after an EU regulation, which is free data transfer out to the internet when moving out of AWS. So...

SteveO (33:23.342)
Well, do you know what? I've got it in a separate section, but you're quite right, it should go here. I would like to also announce that Azure, as of the 30th of March, has made available free data transfer out to the internet when you're leaving Azure. Yeah, it's all because of this thing. Google got to it first. That's the, they...

Frank (33:39.349)
Yes, Google did it three days before the legislation passed. It was quite fun. And everyone was there, oh, they are removing egress fees. Oh no!

SteveO (33:48.014)
And then some people in the comments, you kept reading, "only when you leave Google". Like, I think Anthony was fantastic with that actually, our friend. All right.

Frank (33:54.485)
Yes, but yeah, remember also that most of the time, that's why they say 90% of customers on AWS are not consuming the 100 gig they have available by default as free tier. So yes, and you need to contact AWS support to ask for that transfer to be reduced. Exactly. And it follows the direction set by the European Data Act.

SteveO (34:05.934)
Mmm. Over you go.

SteveO (34:15.118)
You have to contact Azure support for details on how to start the data transfer out process. Yeah.

SteveO (34:23.63)
Yeah, and it's actually a credit. They credit it back. Certainly in Azure, anyway. Right, we need to blast on.

Frank (34:27.221)
Interesting.

Frank (34:32.341)
Yes, there we go. FinOps General, that's new. So Amazon EC2 now supports tagging when registering or copying AMIs. So it seems that now you can tag your AMIs at creation, so that then, when you create something from them, they will be tagged already, which is way better than before. So that's an improvement for tagging, so for automation.

That's it.
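A minimal sketch of what that looks like in boto3 when copying an AMI, assuming the TagSpecifications support this announcement adds to CopyImage and RegisterImage; the AMI ID, regions and tag values are hypothetical.

```python
# Sketch: tag an AMI at copy time instead of tagging it after the fact.
# Assumes the TagSpecifications parameter this announcement adds to
# CopyImage/RegisterImage; AMI ID, regions and tag values are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

ec2.copy_image(
    Name="webapp-base-2024-03",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
    TagSpecifications=[
        {
            "ResourceType": "image",
            "Tags": [
                {"Key": "CostCenter", "Value": "platform"},
                {"Key": "Environment", "Value": "prod"},
            ],
        }
    ],
)
```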

SteveO (35:03.758)
Nice. Um, the FinOps Framework has been released with the updates for 2024. We're not going to talk about it too much now because our friend Rob has come on to do a podcast and that will be getting released next week. Um, right. Sustainability. We've actually got sustainability news. Um, the Azure carbon optimization tool, now available through the Azure portal, gives IT admins and engineers emissions data at the resource level. They can monitor and track their emissions

Frank (35:07.893)
Yes.

Frank (35:16.309)
Yes.

SteveO (35:33.55)
data and analyze trends. The tool also provides recommendations to reduce the emissions and the associated costs. And the resource-level emissions data is now made available through the Microsoft Azure emissions insights preview and sustainability data solutions in Microsoft Fabric as well. So two lovely updates. Only Azure, to my knowledge, are doing it to that level at the moment. Yeah.

Frank (35:54.293)
Yeah.

SteveO (36:00.494)
Interesting. It's one of the challenges we have in sustainability space. So that is great.

Frank (36:07.957)
That was it. Misc, really. So one is: AWS announced an increased default quota for CloudWatch Logs APIs. So in the past there was a default quota, which was 1500; now it's 5000 transactions per second in selected regions. And so no change is required. As far as I know you can just, as far as I understand, consume more. That's why I've put it in. But I am not even really sure if you're going to pay less. I'm just noticing, you see that.

SteveO (36:10.958)
Oh yay, perfume.

Frank (36:37.173)
Last news, a brief review. Other things that we had: there was a news item on The Register, which was coming from AWS presenting to their shareholders, where they returned to the usual classic thing, they've done a useful-life study of their servers and they're gonna use them for an extra year, in

SteveO (36:45.87)
Interesting, this.

Frank (37:01.237)
technical terms, that's sweating the assets. What is quite interesting is they already increased it from four to five years in 2022, less than two years ago. So it's quite interesting. Let's put it this way: if you look at it slightly cynically, you say yes, they are under financial pressure, especially as the growth of cloud is not as fast as it was. And so they are using the same assets, the same servers, for longer, which is not bad for the environment, by the way.

SteveO (37:03.502)
Yeah.

SteveO (37:11.182)
Bye bye.

SteveO (37:29.87)
That is good, and also it could be a partial response to the loss of margin due to actual electricity prices increasing. This is another way of reducing cost.

Frank (37:37.621)
Yes.

Frank (37:41.077)
Yes, so, and they say that it's gonna change that. With the news, it's, yeah, anyway, it's gonna be way better: it's gonna be 900 million of net income increase in Q1.

SteveO (37:57.902)
Well, for AWS.

Frank (37:58.965)
That's huge for AWS, in Q1 of 2024 alone, just because you've extended things by one year. And I do believe that AWS has other ways, like Lambda, like lots of services where you do not see the CPU that's behind, that they can use to continue with those servers and run the servers for longer.

SteveO (38:11.726)
where they can use the old one, yeah.

SteveO (38:17.518)
Yeah, we've always said that, as they've had those assets. The piece of random news I had was the Intel losses of $7 billion from the chip side of the business. And someone, was it you that told me that they don't expect to hit profit until 2027, was it? Wow.

Frank (38:29.845)
Yeah.

Frank (38:37.141)
Yes, yes, I was reading some analysis on that. For the last 10 years they've just decided to do their own stuff, building on what they had, and they say the problem is it has now reached its limit. And they made some bad decisions, the previous CEOs obviously, because there is a pattern, but he is changing things, and the idea is that currently they are definitely underperforming.

SteveO (38:48.942)
Yeah, let's go back to scratch.

SteveO (39:05.934)
Hmm.

Frank (39:06.197)
but they consider that with the next generation they are again in the leadership position. We'll see.

SteveO (39:11.374)
This is the cycle, isn't it? We said this when the AMD 6 and 7 started outperforming the Intel chips. We said they will be back with a vengeance. It's what happens every 20 years.

Frank (39:19.605)
Yes, and they've done it well. Intel too has started to outsource the production of some of their chips, which they've never done before, so there is really a change in there. Anyway, good news.

SteveO (39:31.214)
Right.

It's interesting because if you look at the AMD results in comparison, in terms of how the stock, I'm not telling people how to put money into stocks or anything, but you know, it was quite significant. But this seven billion loss, I think it was expected, only created a six percent drop that I saw when I got my kind of notification on the day the podcast was recorded.

Frank (39:51.349)
Yeah, but it is expected he's changing everything. So yes.

SteveO (39:56.43)
Yeah, yeah, well, there we go. Hard to do in a big company, but fair play to them. And Frank, I was really worried when after 17 minutes we'd only done like two pieces of news, but we've somehow brought it back in a reasonable amount of time. Listeners, thanks for being with us. There was loads of news this month. It's a nice change because we've had a few quite dry months, and hopefully you didn't mind the analysis. But yeah, Frank, great to speak to you. Have a great weekend.

Frank (40:06.165)
Me too.

Frank (40:13.621)
Yes.

Frank (40:22.197)
weekend. Yes.

SteveO (40:23.822)
And I hope listeners, you have a great weekend too. Hopefully we will see some of you at the events that are coming up as well.

Frank (40:29.557)
Yep, bye bye.

SteveO (40:31.31)
Bye bye.

