May 11, 2022
Measuring The Success Of Your MVP
What do WD-40, Kleenex, and Post-it® notes have in common? They’re all products that only exist in their current form because of a series of failures. If you’re not familiar with these brand stories, here’s a quick recap of how each brand triumphed despite failing by its original success metrics:
The makers of WD-40 set out to develop “a line of rust-prevention solvents and degreasers for use in the aerospace industry.” It took them a whopping 40 tries to perfect the formula; the name stands for Water Displacement, 40th formula.
Kleenex was originally launched as a makeup removal tool and eventually became a disposable handkerchief. Yet, even though it was marketed alongside the cosmetics industry, a user survey showed that “60% [of people] used Kleenex tissue for blowing their noses.”
Post-it® notes were discovered by accident by Dr. Spencer Silver of 3M, who was attempting “to develop bigger, stronger, tougher adhesives.” His colleague, Art Fry, was searching for bookmarks that would stick to paper without damaging the pages. Together, they created a product now known for its bright colors and for far more than its original design.
So what can we learn from these three products? Failure doesn’t always mean the end IF you set yourself up to learn and iterate. Your MVP success metrics should always include the opportunity to ask: “What did we learn?”
“Fail fast, fail often” is a phrase that’s often bandied about in Agile and Lean circles, but it may not be the best way to measure MVP success.
An article in DZone posits: “It should be ‘Learn Fast, Learn Often.’ It’s not about failing. It’s about learning. The purpose of failing fast is to learn and adjust course more quickly, saving both time and money. The Lean methodology takes this concept one step further with the statement ‘Think big, act small, fail fast; learn rapidly.’”
What do all these examples of MVPs have to do with determining metrics for your MVP?
First, the only failed MVP is an MVP that teaches you nothing. Even if your product fails spectacularly, if you define how you’ll measure the success of your MVP, you’ll be able to glean some valuable insights.
Second, the best and only way to learn and validate is to measure the right things. Even if you ultimately discover something drastically different from what you set out to measure and validate, it’s essential to understand what metrics are helpful (and which aren’t).
To be sure, measuring the success of your MVP is a complex topic, and every product and situation is unique. This article covers the top-level concepts and key MVP metrics to ensure you’re learning the right things.
Set Yourself Up For (Learning) Success
Based on your prototyping and user research, you should have identified problems you believe your product will solve and come to an understanding of your potential users and their needs.
We also recommend completing your Lean Model Canvas, creating hypotheses to test, and identifying your riskiest assumptions.
“As part of the new product development process that leads to the development of the MVP, we recommend clearly articulating goals of differentiation,” says Henry Martinez, Senior Director, Data Science at 3Pillar Global. “When you test—and measure the success of—your MVP, you’ll learn whether the specific differentiation points have been delivered in a manner that creates a favorable impact upon end users. In an ideal situation, the data will demonstrate that you’ve achieved your primary objective while also delivering a better, higher-touch experience to your customers.”
As a next step, you need to use your MVP success metrics to validate whether or not your product can solve the user problems you identified early on. In truth, product development is a service. To succeed, you must learn as much as you can about users’ experiences with your product. To that end, your MVP testing should include:
- Determining how to measure relevant MVP metrics to evaluate success. It allows you to identify where to iterate and improve or make an informed decision to kill your MVP.
- Identifying which metrics are relevant. With this step, you’re sure to gather the right data from the get-go.
- Developing a system to compare your assumptions with the actual data you gather. Here, you can validate your hypothesis or let the data lead you to a very different conclusion.
Scott Varho, SVP of Product Development at 3Pillar Global, explains that: “[Measuring success] validates or invalidates the product concept’s commercial viability (can you charge enough to build a business on it?). This means buyers are willing to pay a price that makes it worthwhile to build the product and users find it useful.”
The specifics are going to vary somewhat depending on your product—just like there’s no one formula to calculate budget, as we discussed. Still, there are some common metrics and concepts to consider. You want to gain valuable insights and learn how to iterate on future versions or determine if your MVP is even viable.
What’s more, when designing your MVP success metrics, you need to consider whether or not the market still supports the product. Changing technology, sociopolitical status, and any number of other events can affect the success of your MVP.
Martinez recalls, “In the early smartphone days, getting the ‘keyboard’ (and stylus) right was a big deal, and lots of R&D effort was focused on developing that tech. Unfortunately for those developers, as touch screens suddenly matured and became more reliable, the R&D on keyboards and styluses was negated overnight.”
This is true when it comes to measuring the success of any MVP. You might deliver a rock-solid feature. However, shifts in the market might decrease the perceived value.
Measurements and Metrics
When it comes to gathering intel on your MVP success metrics, you’ll need to dive into two main types of research—qualitative and quantitative.
Quantitative research is about metrics and hard data. It tells you the “what”; e.g., “X% of users converted,” or “Users spent an average of X minutes with a feature.” Conducting quantitative research is also a critical aspect of segmenting your market and defining target users.
Qualitative research digs into the “why” through observation and feedback gathering. For example, suppose quantitative research tells you that a majority of users abandon their cart at a certain point. In that case, qualitative research can uncover the reasons behind the behavior and help inform how to improve the experience.
Both quantitative and qualitative metrics are important in order to gain a complete picture, determine viability, and measure MVP success.
When testing your riskiest assumptions, measuring MVP success, and learning about viability, consider critical quantitative metrics, including:
- Customer acquisition cost (CAC): The amount of money spent on marketing to and acquiring each customer.
- Conversion rate: The percentage of leads who convert into customers.
- Retention and churn rate: Where retention rate is the percentage of return customers, the churn rate speaks to the number of customers who leave.
- Session duration: The average length of time users are active in each session.
- Average actions per session: The average number of actions a customer takes in an active session, which gives you insight into how deeply users engage with your MVP.
- Most and least used features: If there are features your audience doesn’t use—or uses significantly less—you can dig deeper to explore whether it’s due to a lack of value or a lack of knowledge.
- Daily and monthly users: This MVP success metric gives you insight into how many people use your product on a given day or in a given month, allowing you to spot trends over time.
- Active users: The number of people currently using your product.
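To make the arithmetic behind a few of these metrics concrete, here is a minimal Python sketch. All function names and figures below are hypothetical illustrations, not part of any particular analytics tool:

```python
# Hypothetical example: computing a few common MVP metrics from raw counts.
# All figures are made up for illustration.

def conversion_rate(customers: int, leads: int) -> float:
    """Percentage of leads who became paying customers."""
    return 100 * customers / leads

def churn_rate(customers_lost: int, customers_at_start: int) -> float:
    """Percentage of customers who left during the period."""
    return 100 * customers_lost / customers_at_start

def cac(marketing_spend: float, customers_acquired: int) -> float:
    """Customer acquisition cost: marketing spend per new customer."""
    return marketing_spend / customers_acquired

def stickiness(daily_active: int, monthly_active: int) -> float:
    """DAU/MAU ratio: roughly how often monthly users return each day."""
    return daily_active / monthly_active

print(f"Conversion rate: {conversion_rate(40, 1000):.1f}%")  # 4.0%
print(f"Churn rate:      {churn_rate(15, 200):.1f}%")        # 7.5%
print(f"CAC:             ${cac(5000.0, 40):.2f}")            # $125.00
print(f"Stickiness:      {stickiness(300, 1200):.2f}")       # 0.25
```

In practice, your analytics platform will report most of these numbers directly; the point of the sketch is simply that each metric is a ratio whose numerator and denominator you must define consistently before you can compare results across iterations.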
Behavioral analytics software, including but not limited to heat maps, mouse maps, funnel analysis, and cohort analysis, can help you track the MVP success metrics above. However, to gain context, combine this quantitative data with qualitative research. This could include surveys and questionnaires. Additionally, you might consider scheduling interviews and observation sessions with actual users to understand where they’re getting value, where they’re experiencing confusion or frustration, and most importantly, why.
As you decide the systems, analytics, methods, and metrics to use to measure the success of your MVP, make sure that you’re gathering the right information.
Ultimately, you want to be sure your MVP success metrics can help you gain insights into a variety of factors that include:
- If the MVP is producing value. Determine whether or not you want to proceed with the next phase of development.
- Whether or not there’s a product-market fit. Decide if you want to continue with this market, find a new market, or scrap the MVP entirely.
- How much your target market would be willing to pay for your MVP. Judge if it’s commercially viable, needs to be re-engineered or rebundled, or needs to be tabled.
- Prioritization of features or functionalities. Understand which features or functionalities should be prioritized in further iterations of the product.
Success Hinges on Your Ability to Learn From MVP Success Metrics
As Varho says, “If the MVP teaches you something about your market, that’s worth something. If you built it and hoped it would sell, but learned nothing else, then all you did was waste a lot of money and time.” The takeaway here is that when it comes to measuring MVP success, the testing phase is invaluable for ensuring you’re not leaving any learning opportunities on the table. And who knows? You might just have the Post-it® notes of the 21st century in your product development queue.
Ready to learn how 3Pillar Global can help you measure the success of your MVP and validate your next big idea? Contact us today.
Special thanks to these members of FORCE, 3Pillar’s expert network, for their contributions to this article.
FORCE is 3Pillar Global’s Thought Leadership Team comprised of technologists and industry experts offering their knowledge on important trends and topics in digital product development.