AWS re:Invent Day 2 Recap

The second day at AWS re:Invent started off with an impressive display. Each year at re:Invent there is a charity run benefiting a selected organization; this year's beneficiary was 'Girls Who Code,' and over the years the run has raised over $600K for worthwhile causes. On day two, hundreds of runners, joggers, and walkers participated, showing that even coders like to exercise.

Once I got into the meat of the day, I realized that navigating the sea of 45,000 attendees is starting to feel easier. I began my day at an event type called a 'Chalk Talk': essentially a 45-minute Q&A where AWS experts answer audience questions and whiteboard the answers for the crowd. This is an absolutely excellent format, even if the 'Migrating to Serverless' topic was a bit too broad to cover fully in only 45 minutes, and it is a style of talk I highly recommend for any attendee. The dialogue surfaces many different perspectives, and hearing where others are having challenges can substantially improve your product's cloud strategy. Additionally, a nice find was a recently written white paper by one of the presenters: a collection of serverless principles applied to AWS's Well-Architected Framework.

Session one – American Heart Association

A surprisingly great session from the American Heart Association showcased their precision medicine platform (precision.heart.org). Their approach, summarized in this AWS blog post, provides a very impressive framework for taking incongruous real-world datasets and building a solution that allows for scientific analysis, while optimizing for cost and still applying the principles of peer review to ensure that all the data analysts "show their work." Precision (or personalized) medicine is an area of health technology poised for massive growth over the next few years; once some of the initial algorithms and data harmonization are worked out, they will provide a stable foundation for deep, detailed analysis.

Session two – Advanced DevOps Workshop

Leaving the great data lake and harmonization discussion, the next session I attended was an advanced DevOps workshop. This multi-hour session focused heavily on AWS's continuous integration/continuous delivery (CI/CD) toolset: CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. The session was structured as a series of labs, and as labs they were excellent, though I preferred the prior day's workshop, which was run more as a working session around a fictional but realistic scenario. I would also prefer a broader approach to DevOps, since the CI/CD piece is already heavily addressed: one that focuses instead on how to leverage the elasticity and capacity of AWS's offerings to get deeper into provisioning, 'infrastructure as code,' and other capabilities that are substantially more 'cloud native.'
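For anyone who hasn't tried the 'infrastructure as code' side of this, here is a minimal sketch of the idea (my own illustration, not material from the workshop): a CloudFormation template provisioned from Python with boto3. The stack name and the single-bucket template are hypothetical placeholders.

```python
# A minimal "infrastructure as code" sketch: declare a resource in a
# CloudFormation template and provision it programmatically via boto3.
# Assumes AWS credentials are configured; names are hypothetical.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
"""

def provision(stack_name: str = "demo-iac-stack") -> None:
    cfn = boto3.client("cloudformation")
    cfn.create_stack(StackName=stack_name, TemplateBody=TEMPLATE)
    # Block until the stack finishes creating (raises if it fails).
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

if __name__ == "__main__":
    provision()
```

The point is that the bucket's configuration lives in version control alongside application code, so the same review and pipeline discipline the workshop applied to software can apply to infrastructure.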

Session three – Big Data and Hadoop

Next, I went into a massive conference room of likely about 1,000 seats to discuss more Big Data. This session offered a good overview of the choices of data formats and the challenges in running your own Hadoop cluster. The first 10 minutes were a regurgitation of the benefits of EMR versus an on-premises Hadoop cluster, and after 20 minutes of what was really marketing material for EMR and Vanguard's Big Data migration use case, I was a little unimpressed, especially compared to the earlier, incredible use case discussion from the American Heart Association session. Besides a few slides that visualized some of the concepts well, there was not a lot of new information in this session. The main takeaways, with a short sketch after the list, were:

  • Use S3 for storage
  • Store in Parquet
  • Focus on data governance
  • Be willing to split your workloads into batch and persistent
  • Have separate clusters for each to leverage different operating models (e.g. spot clusters for the batch jobs, longstanding clusters for ongoing work [queries, streams, etc.])
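To make the last two takeaways concrete, here is a hedged sketch (my own, not from the session) of a transient, spot-priced EMR cluster that runs a single Spark batch step and then terminates. The cluster name, S3 script path, and IAM role names are hypothetical placeholders.

```python
# Sketch of the "split your workloads" idea: a throwaway EMR cluster on
# spot instances for a batch job, separate from any long-running cluster.
import boto3

emr = boto3.client("emr")

response = emr.run_job_flow(
    Name="nightly-batch",
    ReleaseLabel="emr-5.20.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "Market": "SPOT", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "Market": "SPOT", "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        # Transient cluster: tear everything down when the step finishes.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "etl-to-parquet",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            # Hypothetical job: read raw data from S3, write Parquet back.
            "Args": ["spark-submit", "s3://my-bucket/jobs/etl_to_parquet.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Started cluster:", response["JobFlowId"])
```

Because the data lives in S3 (in Parquet) rather than on the cluster, the cluster itself is disposable, which is what makes spot pricing safe for the batch side of the split.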

Session four – Serverless Foundations Workshop

My last session of the day was another workshop, this time focused on taking the serverless foundations I've been hearing about and starting some of the hard work of moving them from prototype to production: making them span geographic regions, resilient, and highly available. Having seen many customers begin with serverless, then struggle to take the 'Hello World' and Google-search-driven examples to reality, I was looking forward to this session, and I'm glad to say it didn't disappoint. Taking the lessons learned from other sessions and projecting them forward into a more applied, practical scenario is incredibly valuable, and taking advantage of new features, some launched only in the last few weeks, makes serverless a lot more production-ready for many people.
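As one small illustration of what 'prototype to production' can mean here (my own sketch, not the workshop's material), this publishes the same Lambda deployment package to two regions, so a DNS failover layer such as Route 53 health checks can route around a regional outage. The function name, role ARN, and zip path are hypothetical.

```python
# Sketch: deploy one Lambda package to multiple regions for availability.
# Assumes the zip and an existing execution role; names are hypothetical.
import boto3

REGIONS = ["us-east-1", "us-west-2"]
ROLE_ARN = "arn:aws:iam::123456789012:role/lambda-exec"  # hypothetical

def deploy_everywhere(zip_path: str = "function.zip") -> None:
    with open(zip_path, "rb") as f:
        code = f.read()
    for region in REGIONS:
        client = boto3.client("lambda", region_name=region)
        # Identical function in each region; failover happens at DNS.
        client.create_function(
            FunctionName="orders-api",
            Runtime="python3.9",
            Role=ROLE_ARN,
            Handler="app.handler",
            Code={"ZipFile": code},
        )

if __name__ == "__main__":
    deploy_everywhere()
```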

In closing, the benefits of the multi-day immersion into AWS, not only in terms of learning, but also in terms of interacting with other attendees and speakers, are really starting to show. As much as I was looking forward to this conference, I was afraid I wouldn't find it worth it; instead, the mindset this experience is shaping provides a solid foundation for thinking about the needs of our customers. I'm looking forward to day 3 and the first keynote, where AWS will inevitably announce new features!

Are you here at AWS re:Invent and want to meet up? Connect with me on Twitter at @MrDanGreene to follow along with my live-tweet of my experience and to set up a time to meet. Or just keep an eye out for me in the sea of thousands upon thousands of fellow re:Invent-goers.



Dan Greene

Director of Cloud Services

Dan Greene is the Director of Cloud Services at 3Pillar Global. Dan has more than 20 years of software design and development experience, with software and product architecture experience in areas including eCommerce, B2B integration, Geospatial Analysis, SOA architecture, and Big Data; he has focused the last few years on Cloud Computing. He is an AWS Certified Solutions Architect who worked at Oracle, ChoicePoint, and Booz Allen Hamilton prior to 3Pillar. He is also a father, amateur carpenter, and runs obstacle races, including Tough Mudder.


