AWS introduced 1,430 new features and tools in 2017, including 497 in the 4th quarter alone.
At that pace, it can be a challenge for even the most experienced AWS practitioner to stay abreast of cost-saving and other opportunities.
If you are considering a move to the cloud or are spending more than you expected in your current AWS environment, a bit of brainstorming and guidance can be extremely helpful in navigating your options. As part of the Cloud Services practice at 3Pillar, I live on the front lines of AWS and its ever-expanding array of technical services.
Based upon projects we’ve recently undertaken for clients, below are three Cloud Optimization projects that I believe will pay for themselves in 12 months or less.
AWS offers a compelling alternative to the classic disaster recovery model – purchasing and managing a completely isolated secondary environment to be available if some calamity befalls the primary facility. The drawback, of course, is that these emergency servers sit idle – hopefully forever. Even though these servers are never used, they must still be maintained: hardware repaired or replaced, operating systems patched, software updated, applications installed. All of this takes time and money.
Transforming your application to be cloud-native removes almost all of the need for a traditional disaster recovery solution. A cloud-native application can move workloads from one data center to another automatically. It can move data, protect backups, and create air gaps to protect the last-line-of-defense physical backups. A cloud-native application is not just an application but a machine that builds and repairs itself.
Even without a cloud-native application, we can still apply some of the same techniques to more mundane environments, providing disaster recovery options that can save nearly 100% of the cost of your DR installation while still meeting (and perhaps exceeding) your RPO (recovery point objective) and RTO (recovery time objective) standards.
One example is using Infrastructure-as-Code to create an environment definition that can be used to recreate the complete production environment in minutes. The environment definition – typically a few small JSON files plus data backups – can replace a complex (and unused) DR installation.
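As a sketch of what such an environment definition can look like, here is a minimal CloudFormation template. The AMI ID, instance type, and bucket name below are placeholders for illustration, not values from any particular project:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal sketch of a recreatable environment (all names illustrative)",
  "Resources": {
    "AppServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.medium"
      }
    },
    "BackupBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": "example-dr-data-backups"
      }
    }
  }
}
```

A real production definition would include networking, security groups, and restore steps for the data backups, but even a full definition remains a handful of small files under version control rather than a standing second environment.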
Having your DR environment defined in only a few files provides additional benefits, such as the ability to rehearse the DR process frequently. Knowing that the environment can be built from scratch within your RTO is helpful – but knowing that the backups you are paying for are actually usable is priceless.
So AWS can replace an expensive DR environment and provide greater certainty that your DR plan will work. A genuine win-win.
Managing large amounts of data efficiently and cheaply is a problem that will never have a permanent solution. Not only is there a constant flood of new data, but there will always be new ways that businesses want to use that data.
One 3Pillar client thought they had a universal solution to data management: DynamoDB. Not only did it provide the flexibility they needed in data structure, but the day-to-day management of the databases was practical as well. Ad hoc access to source or processed data was easy for the admins, and most data analysis was straightforward. But there were use cases where queries would run for days or weeks. Not ideal.
With this in mind, they asked 3Pillar to explore alternatives. Improving how they managed DynamoDB was one of the options, and we did find ways to improve their use of the database. However, the real improvements came from looking directly at how the data would be used.
Taking the time to interview users and develop a comprehensive understanding of the multiple use cases allowed us to recommend two solutions to the client: (a) an EMR + S3 + Athena solution for cold data and (b) a Lambda + Elasticsearch solution for hot data.
The cost savings were astonishing: a 99.75% reduction in storage costs – from $12,000 to $30 a month – while query times improved from days to hours.
Lots of querying solutions let you query on anything – but we study the actual use cases and design a store around them that meets every need. And nothing gets us more excited than reducing a client's expenses to 0.25% of what they had been.
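The arithmetic behind that figure is straightforward to verify from the numbers above:

```python
# Monthly storage cost before and after the redesign (figures from the text).
before = 12_000  # USD per month on DynamoDB
after = 30       # USD per month on S3 + Athena

fraction_remaining = after / before               # what's left of the old bill
percent_reduction = (1 - fraction_remaining) * 100

print(f"New cost is {fraction_remaining:.2%} of the old cost")  # 0.25%
print(f"That is a {percent_reduction:.2f}% reduction")          # 99.75%
```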
One of the simplest ways to reduce your AWS bill is to address how you manage your fleet of EC2 instances.
Each instance costs only pennies per hour to operate, but when you have the 1,000 instances that one 3Pillar customer operates, small costs add up to something substantial. Even when operating EC2 on a small scale, there are many simple ways to reduce cost that will not tie up your engineers for weeks.
Here are a few options to consider.
Does your use case benefit more from horizontal or vertical scaling? Understanding whether it is better to have a lot of small instances or a few large ones will help you manage costs. Experiments to determine the correct balance are easy to run, and you can compare costs directly.
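One simple way to compare the results of such an experiment is cost per unit of measured throughput. The hourly rates and throughput numbers below are assumptions for illustration, not AWS list prices or benchmark results:

```python
# Compare two fleet shapes by cost per unit of measured throughput.
# All rates and throughput figures here are illustrative assumptions.
configs = {
    "8x small": {"count": 8, "rate": 0.05, "throughput_each": 100},
    "2x large": {"count": 2, "rate": 0.20, "throughput_each": 420},
}

for name, c in configs.items():
    hourly_cost = c["count"] * c["rate"]
    total_throughput = c["count"] * c["throughput_each"]
    print(f"{name}: ${hourly_cost:.2f}/hour, "
          f"${hourly_cost / total_throughput * 1000:.3f} per 1,000 requests")
```

In this made-up scenario both fleets cost the same per hour, but the two large instances deliver slightly more throughput per dollar – the kind of result only a direct experiment can reveal for your workload.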
Leveraging the multitude of prepaid options for reserved instances makes sense for predictable workloads. If you know that your workload is going to last at least a year, there is really no downside to prepaying with reserved instances. Scheduled Reserved Instances can now be applied to variable workloads as well – say, an autoscaled instance that is only needed 12 hours a day. Basically, you are prepaying for an instance for a predetermined window each day.
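The savings logic is easy to sketch. The hourly rates below are assumptions chosen for illustration, not actual AWS pricing:

```python
# Compare always-on on-demand against a scheduled reservation that covers
# only the 12 hours a day the instance actually runs.
# Both hourly rates are illustrative assumptions, not AWS list prices.
on_demand_rate = 0.10   # USD/hour (assumed)
scheduled_rate = 0.07   # USD/hour under a scheduled reservation (assumed)
hours_per_day = 12

on_demand_247 = on_demand_rate * 24 * 365            # leave it running
on_demand_12h = on_demand_rate * hours_per_day * 365 # stop it when idle
scheduled_12h = scheduled_rate * hours_per_day * 365 # prepay the window

print(f"Always-on on-demand: ${on_demand_247:,.2f}/year")
print(f"On-demand, 12h/day:  ${on_demand_12h:,.2f}/year")
print(f"Scheduled, 12h/day:  ${scheduled_12h:,.2f}/year")
```

Even after you stop paying for idle hours, prepaying the recurring window shaves off a further slice – and that compounds quickly across a large fleet.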
Spot instances can be used where the workload is not time-sensitive – a classic example being video encoding. Spot instances can provide 50% or more in cost savings where the workload allows. Note: if you rely on spot instances, there is always the chance that none are available – but if that happens, you still have the flexibility to move that workload to a reserved or on-demand instance as needed.
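That fallback pattern is simple to express. In this sketch, `request_spot` and `request_on_demand` are hypothetical stand-ins for whatever provisioning calls your tooling uses (they are not AWS API functions):

```python
from typing import Optional

def request_spot(instance_type: str) -> Optional[str]:
    """Hypothetical stand-in for a spot request; returns an instance id,
    or None when no spot capacity is available."""
    return None  # simulate the no-capacity case for this sketch

def request_on_demand(instance_type: str) -> str:
    """Hypothetical stand-in for an on-demand launch; always succeeds."""
    return "i-on-demand-0001"

def launch_batch_worker(instance_type: str = "c5.large") -> str:
    # Prefer the cheaper spot market; fall back to on-demand if no
    # spot capacity is available right now.
    instance_id = request_spot(instance_type)
    if instance_id is None:
        instance_id = request_on_demand(instance_type)
    return instance_id

print(launch_batch_worker())
```

Because the workload is not time-sensitive, a more patient variant could also retry the spot request for a while before paying the on-demand rate.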
AWS is a journey, not a destination, and in our view that is a positive. Once you have moved to the cloud and are fully settled, we recommend revisiting everything. Every six months on the cloud, everything is new – let us help you find the opportunities that have emerged since your last assessment.
An example of this ongoing evolution is what is happening with the reserved instances mentioned previously – it is now possible to defer deciding the size of the instances you are going to prepay for. This is great if you are unsure what your long-term workload will be. A new array of prepayment options exists today where it used to be all or nothing, and autoscaling is a major driver of that decision. Our team would be excited to review, rethink, and help you identify newer, better options.
If you have more than 20 instances, you should seriously consider your fleet management policy. Because we have the opportunity to focus on AWS and its rapidly increasing number of features and tools (plus everything we learn from implementing these solutions with a wide array of customers), our team works efficiently and can help with just a few hours of work. Contact us today to schedule a no-cost conversation with an AWS subject matter expert to see where you could save by moving to the cloud.