Every year in Las Vegas, AWS holds re:Invent, its biggest conference of the year. Tens of thousands descend upon the desert; last year I heard attendance figures of about 45,000. They come for the thousands of training sessions, the events, and the celebrations of success in the Amazon cloud. Learning, seeing, and (for some) more partying than you can shake a cloud at, for a week every November.
AWS takes this opportunity to announce the majority of its new features and products during the conference’s two keynote addresses. I’m going to take a semi-educated guess at what might be coming up in a few short weeks. I may be right, wrong, or somewhere in between, but it’s a fun exercise to think about how people are using the cloud, AWS in particular, and where I think they could do better. It will also be interesting to see whether they address the claim that, by some accounts, Azure has caught up to them in revenue.
Okay, so this first one is a pretty easy prediction: AWS has been moving more of its capabilities from ‘run a server to do it’ to ‘call our service to do it’. There are a number of areas that I think are primed for replacement by a serverless approach.
Every solution I’ve helped build that processes files inevitably ends up needing to stand up an SFTP server. Typically, small instances capture the files and upload them to S3, where the rest of the AWS serverless ecosystem can take over. Replacing this with a simple service you enable on a VPC would simplify many infrastructures: have it route files to S3 or to an EFS mount, and you make a lot of people happy. Throw in FTP and/or FTPS, or any other file transfer protocol, and you offer ‘file transfer as a service’ as a capability.
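The handoff described above, where a file lands in S3 and the serverless ecosystem takes over, is typically an S3-triggered Lambda. A minimal sketch of such a handler; the function name and pipeline step are hypothetical, but the event shape matches S3’s ObjectCreated notifications:

```python
import urllib.parse


def handle_upload(event, context=None):
    """Hypothetical Lambda handler fired by an S3 ObjectCreated event
    after a file-transfer gateway drops an incoming file into a bucket."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # From here, the rest of the pipeline (validation, parsing,
        # fan-out to other services) would take over.
        results.append((bucket, key))
    return results
```

The decoding step matters in practice: a file named `customer+data.csv` shows up in the event as `customer%2Bdata.csv`, and a handler that skips `unquote_plus` will look up the wrong key.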
The other server that most ‘serverless’ solutions still need to deploy is a VPN, whether OpenVPN or another marketplace offering, securing your cloud resources while allowing authorized access at the network level. This is critical to most product infrastructures. A service you can enable on a VPC that provides a basic VPN (possibly even compatible with the OpenVPN client) would be a godsend.
There are a number of AWS solutions that need end users who are, in actuality, separate from IAM users. To me, IAM is a control system for AWS API calls, not for capabilities that are independent of those APIs. Allowing Cognito User Pools to control access to the two solutions above, or to act as an LDAP service for other software products, would fill that gap. Along those lines, Cognito as an authentication system for EC2 instances would be pretty amazing too: centralizing username/password as well as SSH access (by storing a user’s public key) could be much easier to manage than even the Simple Directory offering. Lastly, extending Cognito support to CodeCommit or a Docker image repository would separate IAM from those products’ workflows, making them much less awkward to work with.
I may have beaten the horse to death with another dead horse on this request; I bring it up in pretty much every conversation with AWS regarding Fargate. The lack of persistent volume support limits the workloads that Fargate can take on. Providing a means of mounting an existing EFS volume into a container would open up a lot of flexibility and usage possibilities.
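If Fargate did gain EFS support, a natural place for it would be the ECS task definition, alongside the existing `volumes` and `mountPoints` fields. A speculative sketch; the `efsVolumeConfiguration` block and its field names are my guess at what such an API could look like, not a published AWS schema, and the file system ID is a placeholder:

```json
{
  "family": "file-processor",
  "containerDefinitions": [{
    "name": "worker",
    "image": "example/worker:latest",
    "mountPoints": [{
      "sourceVolume": "shared-files",
      "containerPath": "/mnt/files"
    }]
  }],
  "volumes": [{
    "name": "shared-files",
    "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" }
  }]
}
```

The appeal of this shape is that it mirrors how host and Docker volumes already work in ECS today, so existing task definitions would need only the new volume block.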
AWS just made the great change of tripling the Lambda timeout limit from 5 minutes to 15 minutes, which shows that they can increase it. I would like to see them raise it to 60 minutes, or make it unlimited for that matter; you’re paying for usage by the millisecond anyway. Keeping a user-defined limit is still a good idea, though, to avoid denial-of-wallet attacks from recursion or infinite-loop bugs.
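One cheap guard against the recursion problem mentioned above is to carry an invocation-depth counter in the event payload and refuse to re-invoke past a cap. A sketch under my own conventions; the `depth` key, the cap value, and the injectable `invoke_fn` (standing in for a real call like boto3’s `lambda.invoke()`) are all assumptions, not AWS features:

```python
MAX_DEPTH = 5  # user-defined cap; an assumption, not an AWS setting


class RecursionGuardError(RuntimeError):
    """Raised instead of re-invoking once the depth cap is reached."""


def invoke_next(event, invoke_fn):
    """Re-invoke the next step in a chain, refusing at the depth cap.

    `invoke_fn` is injected so this sketch stays testable; in a real
    function it would wrap an actual Lambda invocation.
    """
    depth = event.get("depth", 0)
    if depth >= MAX_DEPTH:
        raise RecursionGuardError(f"depth {depth} hit cap {MAX_DEPTH}")
    # Propagate the incremented counter so the next invocation can check it.
    next_event = dict(event, depth=depth + 1)
    return invoke_fn(next_event)
```

A runaway loop then burns at most `MAX_DEPTH` invocations per triggering event instead of your whole monthly budget.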
This one is a bit out there, to be sure, but hear me out: if AWS can run a database engine that autoscales CPU and memory dynamically, it’s not out of the realm of possibility to do the same on a ‘regular’ instance. At launch, you would set minimum and maximum levels of processing units and the thresholds for adding units, very similar to the serverless Aurora model. I would expect to pay a premium for this, but for workloads that are not under your control (customer data file processing, traffic analysis, etc.) it could be the perfect fit.
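The min/max/threshold model described here can be sketched as a simple step function over capacity units. The numbers and the double-up/halve-down policy are illustrative, loosely modeled on how Aurora Serverless steps between capacity units, not a description of any actual AWS scaling algorithm:

```python
def next_capacity(current_units, utilization, min_units=1, max_units=64,
                  scale_up_at=0.70, scale_down_at=0.30):
    """Return the capacity-unit count for the next interval.

    `utilization` is the fraction of current capacity in use (0.0-1.0).
    Doubling up and halving down gives step-wise scaling bounded by the
    user-chosen min and max; the exact policy here is an assumption.
    """
    if utilization >= scale_up_at:
        return min(current_units * 2, max_units)
    if utilization <= scale_down_at:
        return max(current_units // 2, min_units)
    return current_units  # inside the comfort band: hold steady
```

The premium pricing makes sense under this model: the provider has to keep enough headroom on the host to honor a jump from `current_units` to `2 * current_units` without a migration.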
In a follow-up, we’ll look at some of the offerings outside the serverless space and make additional predictions.
If you’re going to be at re:Invent and want to discuss, drop a comment below! 3Pillar is an AWS Advanced Consulting Partner, and we’re always looking to learn more about how people are leveraging AWS to improve their products.