Tuesday, August 29, 2023

How to save a billion dollars on the AWS cloud?

good podcast info!

Who Wants to Save a Billion Dollars? | CloudFix

the principal areas for optimization: compute, storage, networking, and RDS


COMPUTE

* What do you think is the resource utilization for compute across all AWS customers?

It’s only 6%: 94% of compute capacity is simply wasted. And when analyzing enterprise software companies specifically, that number drops to about 2%.

if you are starting from scratch, as you are, I would say start with Graviton. Don’t try to start with the Intel machines and then migrate over later.

for folks who are already on AWS with compute running on Intel-based machines, the first and simplest thing to do is switch over to the latest generation of AMD processors: they provide a 20 to 30% price-performance improvement over the Intel ones, and they are cheaper.

you get better performance with the latest machines, and they’re a lot more cost-effective
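
A minimal sketch of what switching instance families looks like, assuming boto3 with AWS credentials configured; the AMI ID is a placeholder, and m6i/m6a/m7g are the Intel/AMD/Graviton variants of the same general-purpose family (Graviton needs an arm64 AMI):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Same workload, three instance families:
    #   m6i.large - Intel (baseline)
    #   m6a.large - AMD, cheaper per hour than the Intel equivalent
    #   m7g.large - Graviton (ARM), needs an arm64-compatible AMI
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="m7g.large",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])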


STORAGE

* the vast majority of spend happens to be in storage, specifically in S3.

storage is a very, very common area for over-provisioning and inefficiency

AWS not too long ago published a stat saying that over 90% of objects stored in S3 are accessed exactly once; then they just sit there forever. And we are talking about many tens of trillions of objects currently stored in S3.

S3 Intelligent-Tiering is a really, really awesome feature where AWS does all the analysis. They figure out how often you access objects or don’t access them, and then they can basically move those objects into other storage tiers that can be as much as 90% cheaper.
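
A small sketch of opting a whole bucket into Intelligent-Tiering with a lifecycle rule, assuming boto3; the bucket name is hypothetical:

    import boto3

    s3 = boto3.client("s3")

    # Transition every object to S3 Intelligent-Tiering, which then moves
    # objects between access tiers automatically based on usage patterns.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-bucket",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "move-to-intelligent-tiering",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # whole bucket
                    "Transitions": [
                        {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                    ],
                }
            ]
        },
    )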


NETWORKING

AWS infrastructure is split up into 20-plus AWS regions all around the world. Each region is further broken up into anywhere from three to six or seven availability zones, and each availability zone is made up of one or more discrete data centers (the podcast claims at least three).

When data comes into AWS, AWS says, “Hey, that’s awesome for us. It’s completely free.” But when data leaves AWS, there’s a hefty egress fee you have to pay.

within a single virtual private cloud (VPC) in a single availability zone, you basically have zero data transfer charges. But if your VPCs span multiple availability zones and data transfer happens across them, then cross-availability-zone transfer charges apply.
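
A quick boto3 sketch for inspecting the AZ layout of a region, which is the unit that matters for these transfer charges (the region name is just an example); to avoid cross-AZ fees for chatty services, pin them to the same AZ, e.g. via the Placement argument of run_instances:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # List the availability zones of the region; each one maps to
    # one or more physical data centers.
    zones = ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )
    for az in zones["AvailabilityZones"]:
        print(az["ZoneName"], az["ZoneId"])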

if you don’t really need that kind of reliability, you don’t need to set up multi-region redundancy. Never in the history of AWS has an entire region gone down for an extended period.


RDS (Relational Databases)

Relational databases have been around for about 45 to 50 years, and very little has changed in that world.

Amazon Aurora is a database built from scratch that claims 10x the performance of any other relational database out there, at one-tenth the cost of a commercial database. Yet over 80% of RDS instances are actually running non-Aurora databases.

You could be so much more cost-efficient on Aurora. And not just cost-efficient: you also get so much more performance by using Aurora.
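
A minimal sketch of creating an Aurora MySQL-compatible cluster with boto3; every identifier and credential here is a placeholder (in practice the password would come from Secrets Manager):

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # The cluster holds the shared Aurora storage volume...
    rds.create_db_cluster(
        DBClusterIdentifier="example-aurora-cluster",
        Engine="aurora-mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",  # placeholder credential
    )

    # ...and needs at least one instance to actually serve queries.
    rds.create_db_instance(
        DBInstanceIdentifier="example-aurora-instance-1",
        DBClusterIdentifier="example-aurora-cluster",
        Engine="aurora-mysql",
        DBInstanceClass="db.r6g.large",  # Graviton-based instance class
    )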

Twitter API v2 => mostly not free

in the effort to "clean up" bots and make Twitter profitable, the classic Twitter API v1.1, available since 2012, seems to be deprecated and no longer available

the v2 API pricing starts at $100/month, clearly not intended for casual usage and testing
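
For reference, a minimal sketch of what a paid v2 call looks like with Python requests, hitting the recent-search endpoint; the environment variable name is hypothetical, and you need a bearer token from a paid developer plan:

    import os
    import requests

    # Bearer token from a (now paid) Twitter developer account.
    bearer_token = os.environ["TWITTER_BEARER_TOKEN"]  # hypothetical env var

    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/recent",
        headers={"Authorization": f"Bearer {bearer_token}"},
        params={"query": "from:twitterdev", "max_results": 10},
    )
    resp.raise_for_status()
    for tweet in resp.json().get("data", []):
        print(tweet["id"], tweet["text"])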

Twitter to remove free API access in latest money making quest - The Verge

also a good example that building training material that depends on external APIs (and libraries, too) is not reliable


Getting Started with the Twitter API | Docs | Twitter Developer Platform


Twitter API v2 tools & libraries | Docs | Twitter Developer Platform


twitterdev/Twitter-API-v2-sample-code: Sample code for the Twitter API v2 endpoints @GitHub