April 27, 2021

Speed is Critical to Digital Success (and It’s Not the Same as Velocity)

Speed is a significant competitive advantage in the digital economy. If a company can respond to new information faster than a rival, the impact on current customer perception and new customer acquisition can be huge. In business terms, this means faster customer acquisition, more loyal customers willing to renew for longer periods or at premium rates, and a reduced propensity to explore other vendors’ offerings. New information takes many forms. It can be something small, like a defect discovered in production or an insight into how users are using your product. It can be something large, like a new competitor or a new feature from a direct competitor. Everyone tells you to stay close to your customer, but can you act on what you learn? How fast can your organization or team respond to this information and deploy a quality product with the targeted changes into the hands of customers?

When I ask this question of product development teams, I often hear that the developers can make the changes to the code quite quickly, but that the code sits in testing or staging environments for weeks, or in some hardening process to ensure the release is “solid.” What they really mean is that coding is easy, but ensuring that the release is of high quality (behaves as expected, is secure, performs well, etc.) takes up the lion’s share of the cycle time from when the developer starts coding to when the change actually makes it to production. Customers don’t care about software you have in your staging environment. Frequent high-quality releases that arrive on time and as promised increase, or at least reinforce, the trust that users have in your product. In contrast, releases that are delayed or introduce problems erode user experience and trust first, and that erosion then extends to the business’s reputation.


Speed = Cycle Time

Cycle Time, or its cousin “Lead Time for Changes,” is a better measure of this kind of speed because it looks at how long it takes to go from deciding to make a change to seeing that change deployed to production. “Lead Time for Changes” is one of the four most critical signals of a healthy product engineering organization or team identified in the well-documented research by Nicole Forsgren, Jez Humble, and Gene Kim in Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations (2018). Their focus is on healthy organizations, and they demonstrate a strong correlation between these signals and business performance. I want to underscore that for enterprises that hope to succeed in the digital economy, this particular facet (Cycle Time) is more important than you may realize. From an execution perspective, this is how you “excel at change” – a core tenet of the Product Mindset.

Let me give an example of how impactful this can be. Say the team has just pushed out a normal release and a customer calls to say they are experiencing a problem that was missed or did not occur under testing conditions. This could be an unexpected error message that is fairly minor and unlikely to occur often, or something significant, such as client A being able to see client B’s data. In either case, there are two options:

  1. Diagnose, develop, test and deploy a “hotfix” for the defect (unscheduled deployment with minor changes)
  2. “Rollback” the entire product to its previous version, meaning all of the release’s changes are lost (re-deploy the older version of the product)

The impact on customers’ perception of your product and organization is vastly different in these two cases. In the case of a “Rollback,” the customer is likely to be doubly frustrated. They not only experienced a problem that raises doubts about your product’s overall quality and reliability, but they have also lost access to new features that were promised, without knowing when those features will be re-released.

In the case of a “hotfix,” the customer may still be frustrated by the defect; however, the team’s ability to identify, correct, and deploy a fix in a matter of hours, with high confidence that the fix will not cause another problem, demonstrates that, while mistakes happen, the organization behind the product is responsive and reliable. These are bedrock factors in customer trust and satisfaction. Everyone makes mistakes. How you respond differentiates you.


How to Measure Cycle Time

Teams sometimes pick the starting point for this measurement to be the date a user story is created. User stories can sit on a backlog for a long time for a variety of reasons, so I tend to focus on the time from the moment the engineers start work on a story to the time it is deployed. Modern tools like Jira make this fairly easy to track; Jira even has a built-in cycle time metric.
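For illustration, here is a minimal sketch of what this measurement boils down to. The timestamps and data structure are hypothetical, not Jira’s actual API; in practice the start and deploy times would come from your issue tracker and deployment logs.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical work items: (work started, deployed to production) timestamps.
work_items = [
    (datetime(2021, 4, 1, 9, 0), datetime(2021, 4, 6, 15, 30)),
    (datetime(2021, 4, 2, 10, 0), datetime(2021, 4, 20, 11, 0)),
    (datetime(2021, 4, 5, 14, 0), datetime(2021, 4, 9, 16, 45)),
]

def cycle_time(started: datetime, deployed: datetime) -> timedelta:
    """Cycle time: elapsed time from starting work on a change to deploying it."""
    return deployed - started

cycle_times = [cycle_time(start, deploy) for start, deploy in work_items]

# The median is less distorted by the occasional outlier than the average.
print("Median cycle time:", median(cycle_times))
```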

Improving Cycle Time requires investments in code pipeline automation (also known as “continuous integration”), configuration management, static and dynamic code analysis tools to catch “code smells” and security vulnerabilities, automated (and maintained) unit tests, automated (and maintained) regression tests, and so on. This is not a trivial set of investments, but the payoff is huge when you can predictably release when you say you will and do so with confidence.
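To make that concrete, here is a deliberately simplified sketch of the kind of automated quality gate these investments buy. The specific tools named (flake8 for static analysis, pytest for tests) and the directory layout are illustrative stand-ins, and a real pipeline would run in a CI service on every commit rather than as a standalone script.

```python
import subprocess
import sys

# Ordered quality gates: each command must pass before the release candidate
# can move to the next stage.
GATES = [
    ("static analysis", ["flake8", "."]),
    ("unit tests", ["pytest", "tests/unit"]),
    ("regression tests", ["pytest", "tests/regression"]),
]

def run_pipeline() -> bool:
    for name, command in GATES:
        print(f"Running {name}...")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"{name} failed; release candidate rejected.")
            return False
    print("All gates passed; candidate is ready to deploy.")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```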

The confusion between velocity and speed is understandable. Agile practices like Scrum, XP, and SAFe have mistakenly led us to believe that “velocity” is the same thing as speed. The “velocity” of a team as defined in these methodologies is affected by many variables, but the two biggest ones are:

  • Number of team members that can complete user stories
  • Number of days in the sprint

Efficiencies can be gained or lost in a number of ways. I have seen huge gains in the amount of functionality delivered per sprint without any corresponding change in velocity. Why is that? Most teams norm on point sizes relative to how long it takes to do something, even if the team isn’t using hours or days as story points. I commonly hear in sizing sessions, “That should take me half a day, so give it a 1.” If that same user story would have taken 3 days a year earlier, it would have gotten a 3. In other words, the team is now delivering 3x the functionality, or throughput, per point. This increase in throughput per point is invisible in the velocity number. (As a sidebar, I spent some time trying to solve this problem and interviewed other engineering leaders. We agreed that none of our current measures really tracked throughput as defined by “functionality per story point.” That is fodder for another article.) The point is that what Agile methodologies commonly call velocity is not the same thing as speed.
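A quick back-of-the-envelope illustration of why velocity hides this gain (the numbers are invented for the example):

```python
# The same user story, sized a year apart. Points track perceived effort,
# so as the team gets faster, the same work earns fewer points.
points_last_year = 3   # story took about 3 days, sized at 3 points
points_this_year = 1   # same story now takes about half a day, sized at 1 point

# Velocity can stay flat: the team still completes roughly the same number
# of points per sprint (say, 30), even though each point now represents
# more delivered functionality.
velocity_points_per_sprint = 30

throughput_gain = points_last_year / points_this_year
print(f"Velocity still reads {velocity_points_per_sprint} points per sprint,")
print(f"but throughput per point has improved {throughput_gain:.0f}x.")
```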

High Return on Investment

A shocking number of companies that I have been a part of, or seen as prospects or clients, have under-invested in this area. One reason is that leaders like myself failed to connect the dots between investments in fancy-sounding things like “CI/CD” or “static code analysis” and business value. It is not fair to ask a business leader to prioritize a feature customers are explicitly asking for against this type of investment without articulating the value. I have seen firsthand how a disgruntled customer who was used to being disappointed by their vendors became a boisterous promoter after we deployed a hotfix within 48 hours of the issue being reported. Customers want to be heard and made to feel important to your organization. Improving your capability to respond to them quickly is a tremendous opportunity to gain a competitive edge.

At 3Pillar Global, our product development teams meet our clients’ product organizations where they are and make targeted recommendations for improvement based on the unique conditions of the client, their organization, and the product. We have frameworks for evaluating the quality of the process, the collaboration, and the kinds of non-functional quality elements that directly impact cycle time and overall product quality. We use those frameworks to start a conversation with the client and their organization about what we see and where we think things can improve. This collaborative approach means that our clients get value both from the product(s) we build and from our expertise in how products should be built. Contact us today to discuss how 3Pillar can help you execute on your product roadmap.


About the Author
Scott Varho is the Senior Vice President of Product Development at 3Pillar Global.