The Road to AWS re:Invent 2018 – Weekly Predictions, Part 2: Data 2.0

By Dan Greene

Director of Cloud Services

Last week I made the easy prediction that at re:Invent, AWS would announce more so-called ‘serverless’ capabilities. It’s no secret that they are all-in on moving from server management to service management. I guessed at a few specific possibilities – SFTP-as-a-Service, ‘serverless’ EC2, and a few others.

This week, I want to look at some of the other capabilities provided by AWS and make some predictions as to what announcements we might see. Why should any or all of this matter to you? If you’re in the business of processing, storing, and analyzing large sets of data, these updates may significantly impact the speed, efficiency, and cost at which you’re able to do so.

Week 2 Prediction: Data 2.0

While AWS has a number of existing tools to manage data ingestion and processing (e.g. Data Pipeline, Glue, Kinesis), I think adding an orchestration framework optimized for every step of a robust data pipeline would let AWS’ data analytics tools (Athena, QuickSight, etc.) really shine.
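
To make that concrete, here is a minimal sketch (using boto3) of the hand-rolled orchestration many teams write today: kick off a Glue ETL job, then query the results with Athena. This is exactly the wiring a managed framework could absorb. The job, database, and bucket names here are hypothetical.

```python
# Sketch: the manual "orchestration" many teams write today, which a managed
# framework could replace. Job, database, and bucket names are hypothetical.
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Step 1: run a Glue ETL job to transform raw data into a queryable format.
run = glue.start_job_run(JobName="transform-raw-events")
print("Started Glue job run:", run["JobRunId"])

# Step 2: once the job completes (polling omitted for brevity), query the
# transformed data with Athena.
query = athena.start_query_execution(
    QueryString="SELECT event_type, count(*) FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Started Athena query:", query["QueryExecutionId"])
```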

Data-Mapping-as-a-Service

I cut my teeth on data integration with platforms like WebMethods. While it had its drawbacks, as a solution set it was really excellent at:

  • Providing endpoints for data delivery
  • Identifying data by location, format, or other specific data elements
  • Routing data to the right processors based on those features
  • Mapping each data entry from one format to another
  • Delivering transformed data to its target location

I can see an equivalent of something akin to a managed Apache NiFi solution, offered in the same manner as AWS’ Elasticsearch Service. Tying in the ability to route tasks to Lambda and/or Fargate for execution, supporting Directed Acyclic Graph (DAG) modeling, and tightly integrating the writing of data to S3 as both intermediate and final steps would be a game-changer for products that have to import and process data files – particularly from third parties.
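
As a rough illustration of the routing piece, here is a minimal sketch of the kind of Lambda glue code such a service could replace: an S3 event handler that routes each incoming file to a downstream processor based on its location and format. The bucket layout and processor function names are hypothetical.

```python
# Sketch: a Lambda routing incoming S3 files to the right processor based on
# location and format, i.e. the glue a managed NiFi-style service could
# absorb. The bucket layout and processor function names are hypothetical.
import json
import urllib.parse

import boto3

lambda_client = boto3.client("lambda")

# Map (key prefix, extension) to a downstream processor Lambda.
ROUTES = {
    ("partners/acme/", ".csv"): "process-acme-csv",
    ("partners/globex/", ".json"): "process-globex-json",
}

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        for (prefix, ext), target in ROUTES.items():
            if key.startswith(prefix) and key.endswith(ext):
                # Fire-and-forget invocation of the matching processor.
                lambda_client.invoke(
                    FunctionName=target,
                    InvocationType="Event",
                    Payload=json.dumps({"bucket": bucket, "key": key}),
                )
                break
```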

S3 Lifecycle on read time

One of my pet peeves with S3 lifecycle management is that the transition from the Standard to the Infrequent Access storage class has nothing to do with how frequently a file is actually accessed – transitions are driven purely by the object’s age. While I imagine the underlying architecture of an object store makes access-based transitions very difficult to implement, access frequency would provide a much-needed metric for making storage decisions.
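
For context, here is a minimal sketch of what lifecycle rules look like today: transitions keyed purely to the number of days since an object was created, with no input from access patterns. The bucket name and prefix are hypothetical.

```python
# Sketch: today's lifecycle rules transition on object *age* only; a file
# read hourly still moves to Infrequent Access on schedule. The bucket name
# and prefix are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-based-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    # 30 days after creation, regardless of read frequency.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```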

DynamoDB deep document mode

DynamoDB is a great hybrid key and document store, and I use it often for storing and retrieving small documents. However, the current limits on document size and scan patterns make using DynamoDB as a managed, MongoDB-level solution a challenge. Providing more robust document-centric capabilities, while still supporting its scalability, replication, and global presence, would significantly “up the game” for DynamoDB. As a wish-list item, I would also like to see the pre-allocation of read and write throughput removed completely. Let each request set an optional throttle, but charge me for what I actually use rather than what I might use. The current autoscaling is a significant improvement over nothing, but there is still room to do better.
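
For reference, here is a minimal sketch of the small-document pattern described above, using boto3 against a hypothetical table. Key-based storage and retrieval is where DynamoDB excels; the 400 KB item cap and limited querying inside the document are where it falls short of a MongoDB-level solution today.

```python
# Sketch of DynamoDB as a small document store. The table name and document
# shape are hypothetical; note each item is capped at 400 KB, which is part
# of what makes "deep document" workloads awkward today.
import boto3

table = boto3.resource("dynamodb").Table("documents")

# Store a small JSON-like document under a key.
table.put_item(
    Item={
        "doc_id": "order-1234",
        "customer": {"name": "Acme Corp", "tier": "gold"},
        "lines": [{"sku": "A-1", "qty": 3}, {"sku": "B-7", "qty": 1}],
    }
)

# Retrieval by key is fast and cheap; it is rich querying *inside* the
# document where DynamoDB currently falls short.
doc = table.get_item(Key={"doc_id": "order-1234"})["Item"]
print(doc["customer"]["name"])
```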

RDS – Polyglot edition

For a while there was an interesting trend of combining multiple database paradigms into a single view – document plus graph, and so on. I think AWS may dip their toe into this space. By combining a few of their existing products behind the scenes, it would be interesting to link Elasticsearch, Aurora, and Neptune together into a solution that tries to combine the best of each storage paradigm. Like most all-in-one tools, I’m honestly not sure whether it would just do each of those things equally poorly. That said, I often recommend a multi-storage solution for clients’ data, with each store optimized for a particular use case, so there may be something there.

S3 auto-crawling and metrics

Imagine setting a flag on a data bucket so that whenever a data file lands there, it is automatically classified, indexed, and ready for querying with Athena, Glue, or Hive. Having some high-level metrics on the data within (row counts, average values, etc.) would be useful for other business decisions. Adding in some SageMaker algorithms for data variance (e.g. Random Cut Forest for discovering data outliers and/or trends) to fire off alerts would be incredible, too.
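
Until such a flag exists, the closest approximation is wiring it up yourself. Here is a minimal sketch of a Lambda, triggered by S3 object-created events, that re-runs a Glue crawler so new files become queryable; the crawler name is hypothetical.

```python
# Sketch: approximating "auto-crawling" today. An S3 object-created event
# triggers this Lambda, which re-runs a Glue crawler so the new file shows
# up in the Glue Data Catalog (and thus Athena) without manual schema work.
# The crawler name is hypothetical.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    try:
        glue.start_crawler(Name="data-bucket-crawler")
    except glue.exceptions.CrawlerRunningException:
        # A crawl is already in flight; the new file will be picked up.
        pass
```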

Wrapping it Up

In closing this week: I think we will see a lot of different announcements around data processing as an AWS-centric framework. AWS has most of the parts in play already – having AWS manage the wiring between them, so you only have to focus on the business value you are extracting from the data, would realize the promise of the cloud for data processing.

Going to be at re:Invent? Drop a comment below and let me know what you hope to see there or your thoughts on what’s next.


About The Author

Dan Greene is the Director of Cloud Services at 3Pillar Global. Dan has more than 20 years of software design and development experience, with software and product architecture experience in areas including eCommerce, B2B integration, geospatial analysis, SOA, and Big Data; for the last few years he has focused on cloud computing. He is an AWS Certified Solutions Architect who worked at Oracle, ChoicePoint, and Booz Allen Hamilton prior to 3Pillar. He is also a father, amateur carpenter, and runs obstacle races including Tough Mudder.
