
7 Ways to Save Money in the Cloud

Oct 02, 2021 · 4 mins read

Cost optimization is just one of the five pillars of the Well-Architected Framework, which is similar across the top three public cloud providers. While deploying a billing alarm is the first step after creating a cloud account or subscription, I'll share a few other tips to keep in mind before provisioning compute resources.
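As a starting point, here is a minimal sketch of the parameters such a billing alarm might take, shaped like the keyword arguments you could pass to boto3's `cloudwatch.put_metric_alarm()`. The alarm name, threshold, and SNS topic ARN are illustrative assumptions, not a definitive setup.

```python
# Hypothetical billing-alarm parameters (AWS-flavored sketch).
# AWS/Billing + EstimatedCharges is the standard billing metric;
# the threshold, alarm name, and topic ARN below are made up.

def build_billing_alarm_params(threshold_usd, sns_topic_arn):
    """Build keyword arguments for a monthly estimated-spend alarm."""
    return {
        "AlarmName": f"billing-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # billing metrics refresh roughly every 6 hours
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

params = build_billing_alarm_params(
    50, "arn:aws:sns:us-east-1:123456789012:billing-alerts"
)
```

You would then pass `**params` to a real `put_metric_alarm()` call from an authenticated boto3 CloudWatch client.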

Prerequisites

  • Cloud Computing account/subscription

Instance type

First and foremost, you need to choose the right instance type before spinning up any instances. To do so, identifying the workload type is the first step: is it a database, a web app, batch processing, machine learning, serverless functions, Docker containers, and so on? This is important, since each type of workload requires specific resource optimization. The most common virtual machine types are:

  • General purpose
  • Compute optimized
  • Memory optimized
  • Storage optimized
  • GPU

..and a few more. Each of these VM types comes in different sizes and at different price points.
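The rough mapping from workload to VM family can be sketched as a lookup table. The family names below use AWS EC2 prefixes purely as examples (Azure and GCP have equivalent families); the workload labels are assumptions for illustration.

```python
# Illustrative workload-to-family mapping, using AWS EC2 family
# prefixes as examples. Not an official recommendation table.

WORKLOAD_TO_FAMILY = {
    "web app":          "general purpose (e.g. t3, m5)",
    "batch processing": "compute optimized (e.g. c5)",
    "in-memory cache":  "memory optimized (e.g. r5)",
    "database":         "storage optimized (e.g. i3)",
    "machine learning": "GPU (e.g. p3, g4)",
}

def suggest_family(workload):
    """Fall back to general purpose when the workload is unknown."""
    return WORKLOAD_TO_FAMILY.get(workload, "general purpose (default)")
```

For example, `suggest_family("in-memory cache")` points at the memory-optimized family, while anything unrecognized defaults to general purpose.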

Right-sizing

Speaking of VM sizes, right-sizing is one of the most underrated best practices, and it applies mostly to underutilized VMs. Continuous monitoring, one of the DevOps lifecycle phases, can help us understand workload performance better, both before and after each deployment.
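A minimal right-sizing check could look like the sketch below: given average CPU utilization samples pulled from your monitoring tool, flag a VM as a downsizing candidate. The 20% threshold is an assumption; pick one that matches your workload.

```python
# Minimal right-sizing sketch. The threshold is an assumption, and in
# practice you'd also look at memory, disk, and network utilization.

def downsize_candidate(cpu_samples, threshold=20.0):
    """Return True if average CPU utilization stays under `threshold` %."""
    if not cpu_samples:
        return False  # no data, no recommendation
    return sum(cpu_samples) / len(cpu_samples) < threshold
```

A VM averaging single-digit CPU percentages over a representative window is a good candidate for a smaller size.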

Reserved Instances

Reserved Instances are another cost optimization best practice, though one I honestly haven't seen much of in my career yet. I guess RIs are not so important in the startup world, but I can see them applied in medium-sized and enterprise companies. RIs can save you up to 70-80% of costs at most, and they usually come with one-year or three-year terms. The upfront payment is the key point here, as it leads to the biggest savings. That said, the major public cloud providers also offer a no-upfront-payment option, which results in smaller discounts and savings.
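The savings math is simple enough to sketch. The discount rates below are rough illustrative assumptions, not actual provider pricing; real discounts vary by term, payment option, instance family, and region.

```python
# Back-of-the-envelope RI savings. Discount rates are assumptions
# for illustration only; check your provider's pricing pages.

ASSUMED_DISCOUNT = {
    ("1yr", "no_upfront"):  0.30,
    ("1yr", "all_upfront"): 0.40,
    ("3yr", "no_upfront"):  0.50,
    ("3yr", "all_upfront"): 0.70,
}

def monthly_ri_cost(on_demand_monthly, term, payment):
    """Effective monthly cost after the assumed RI discount."""
    discount = ASSUMED_DISCOUNT[(term, payment)]
    return round(on_demand_monthly * (1 - discount), 2)

# A $100/month on-demand VM under a 3-year all-upfront RI:
monthly_ri_cost(100, "3yr", "all_upfront")  # → 30.0
```

The pattern to notice: longer terms and more money upfront buy deeper discounts.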

Start / Stop Instances

The keyword for scheduling VM start and stop operations is automation. I've written a couple of blog posts in the past that are worth checking out, including:
Azure VMs Auto-shutdown
Schedule Automated Start and Stop for Azure VMs
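To see why scheduling pays off, here is a rough estimate of the hours saved by keeping a dev VM running only during business hours. The 10-hour weekday schedule and ~30-day month are assumptions; adjust to your own calendar.

```python
# Rough monthly savings estimate for a stop/start schedule.
# Assumes a 10-hour weekday schedule (e.g. 08:00-18:00) and ~22
# working days in a ~30-day month -- both are illustrative numbers.

def monthly_hours_saved(weekday_hours_on=10, weekdays=22):
    """Hours the VM is stopped (and not billed for compute) per month."""
    total_hours = 24 * 30
    running_hours = weekday_hours_on * weekdays
    return total_hours - running_hours

monthly_hours_saved()  # → 500
```

That is roughly 500 out of 720 hours a month, or about 70% of the VM's compute bill, just from a schedule.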

Autoscaling

Autoscaling dynamically adds or removes VMs to match the performance requirements. There are two types of scaling:

  • Vertical scaling, also known as scaling up and down. For example, upgrading or downgrading a VM to a larger or smaller size.
  • Horizontal scaling, also known as scaling out and in, which means changing the number of running instances by adding or removing instances from a pool. Horizontal scaling is the preferred approach when applying autoscaling.
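A toy horizontal-scaling decision, similar in spirit to the target-tracking policies that managed autoscalers implement, can be sketched as below. The target utilization, bounds, and rounding behavior are all assumptions.

```python
# Toy target-tracking scale-out/in decision. Thresholds and bounds
# are assumptions; real autoscalers also add cooldowns and smoothing.

def desired_instances(current, avg_cpu, target_cpu=50.0, min_n=1, max_n=10):
    """Resize the pool so average CPU moves toward the target."""
    if avg_cpu <= 0:
        return min_n  # idle pool, shrink to the floor
    desired = round(current * avg_cpu / target_cpu)
    return max(min_n, min(max_n, desired))

desired_instances(4, avg_cpu=90)  # → 7 (scale out)
desired_instances(4, avg_cpu=25)  # → 2 (scale in)
```

The min/max bounds matter in practice: they prevent both runaway scale-out costs and scaling a pool down to zero by accident.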

Choose the right compute service

Most public cloud providers offer a dozen different types of compute services. These services fall into three major categories: IaaS, PaaS and SaaS, each with a different split of duties under the Shared Responsibility Model. Furthermore, choosing the right compute service for your application depends on other key factors as well, including security, workload type, management overhead, OS type, containerization, etc. Ask yourself the following questions:

  • Do you need full control of the underlying OS, middleware and runtime?
  • Are you using microservices?
  • Can your app be containerized?
  • Do you need a managed service?
  • Are you doing Lift and Shift migration?
  • Are you using serverless functions?

With those answers in hand, the most popular compute services are VMs, Batch, serverless functions, managed Kubernetes, and PaaS offerings for web apps.
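The questions above can be encoded as a rough decision helper. This is a hypothetical rule of thumb, not a definitive decision tree; real choices involve compliance, team skills, and pricing nuance.

```python
# Hypothetical compute-service picker encoding the rough rules of thumb:
# serverless workloads -> functions, containers -> managed Kubernetes,
# lift-and-shift or full OS control -> VMs, otherwise -> PaaS.

def pick_compute_service(lift_and_shift=False, containerized=False,
                         serverless=False, need_os_control=False):
    if serverless:
        return "serverless functions"
    if containerized:
        return "managed Kubernetes"
    if lift_and_shift or need_os_control:
        return "virtual machines"
    return "PaaS web app offering"
```

For example, a containerized microservices app lands on managed Kubernetes, while a plain web app with no special requirements fits a PaaS offering.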

Remove unused resources

Last but not least, unused resources, or so-called zombie assets, are the most common reason behind higher cloud costs. These zombie assets usually include:

  • Leftovers from development, feature, or POC (Proof-of-Concept) environments, such as an EC2 instance, a Load Balancer, or a static IP.
  • Unmanaged resource dependencies when deploying IaC with Terraform. For example, Terraform doesn't manage a Lambda function's CloudWatch log group by default: if you destroy the Lambda function, the CloudWatch log group stays behind.
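The orphaned-log-group case can be spotted by comparing existing `/aws/lambda/*` log group names against the current function names. In practice you'd fetch both lists via boto3 (`logs.describe_log_groups` and `lambda.list_functions`); the sketch below takes them as plain inputs so the comparison logic is clear.

```python
# Sketch: find Lambda log groups whose function no longer exists.
# Inputs are plain lists here; in real use you'd page through the
# CloudWatch Logs and Lambda APIs to build them.

def orphaned_log_groups(log_group_names, function_names):
    prefix = "/aws/lambda/"
    live = {prefix + fn for fn in function_names}
    return sorted(lg for lg in log_group_names
                  if lg.startswith(prefix) and lg not in live)

orphaned_log_groups(
    ["/aws/lambda/orders", "/aws/lambda/old-poc", "/ecs/web"],
    ["orders"],
)  # → ['/aws/lambda/old-poc']
```

A cleaner fix on the Terraform side is to declare the log group as an `aws_cloudwatch_log_group` resource yourself, so destroying the stack removes it too.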

Although zombie resources are not so easy to spot, the cloud providers' machine-learning-backed advisory services, such as AWS Trusted Advisor, can help identify these leftover assets.

Conclusion

My two cents: be aware of any hidden costs along the way. Most cloud services are billed per second, known as micro-billing, which can be both good and bad at the same time if you don't know what you're doing. Also, make sure to avoid writing infinite loops while developing serverless apps, because I've read some horror stories about racking up huge bills. If you have any additional tips, please let me know in the comment section below.
Feel free to leave a comment below, and if you found this tutorial useful, follow our official channel on Telegram.