July 12, 2021

Problems with Minimum Viable Product

Two Startups, Two Failures, One Common Problem

Electroloom was a 3D-printing company that sought to simplify and revolutionize clothing manufacturing through a high-tech desktop device. The idea launched in 2013, and co-founder Aaron Rowley won a prestigious design competition and garnered a lot of early attention (and funding) from investors. Rowley told Engadget that this was the point when things began to snowball: he and his team started making unfortunate assumptions. They figured it would be easy to get a workable device up and running that would attract a community of users. Ultimately, they were mistaken: the machine was difficult to use, and the user experience was “terrible.”

Theoretically, this should have been an opportunity to learn, tweak, and improve the product, but Electroloom’s lack of pre-work was its undoing. As the aforementioned Engadget article relates, “Rowley didn’t really know what Electroloom was for, or who would wind up using it.” After the early hype and a promising first funding round, Electroloom failed to scale and ran out of money.

Another company, Standout Jobs, sought to revitalize the job recruiting industry via a customized platform. They were so sure of their product that they did little, if any, customer research. As a result, their first release was a flop.

What do these two very different failed businesses have in common? Both failed to do their due diligence before releasing their MVP. Instead of launching a minimum viable product grounded in data-driven decisions, evaluative interviews with potential customers, and a clear measure of success, they went with their gut. For Electroloom, that meant addressing the wrong market segment. For Standout Jobs, it meant no understanding of the customer. In the end, both lost the opportunity to provide value and learn about the viability of their products before it was too late. When things went wrong, they didn’t know why, so they couldn’t fix them. Ultimately, they failed.

Both of these products might have succeeded—although we’ll never know—if their MVPs had been built and released only after enough “product viability testing” to determine whether the ideas could achieve product/market fit.

Before we discuss what “product viability testing” is, we should acknowledge that these founders operate in an environment that favors execution over exploration. The traditional business plan presents investors with ideas that look like good bets: in this framing, money is the only thing keeping the idea from hitting the market and becoming a huge success. These plans include revenue targets and dates. Founders and CEOs raise capital by showing some evidence that their ideas are worth funding, along with the plan they are asking investors to fund. This is the single biggest source of failure for new products, because it overlooks the fact that the product idea is still a hypothesis. The riskiest and most expensive way to test a commercial hypothesis is to build the product and hope customers buy it.

There is an alternative.

What Kind of Viability?

Even professionals in the product development industry frequently overlook the fact that “viability” comes in several distinct types:

  • Commercial viability: Will customers buy it and pay a price sufficient to support a thriving business and, at some point, contribute to profitability?
  • Feasibility: Can we pull this off from a technology perspective?
  • Production viability, otherwise known as scalability: Can this product stand up under real-world scenarios?
  • User adoption: Will users choose to use this product over the alternatives?

Before you get into building an MVP, you need to confirm that your pitch to investors or internal stakeholders is true and that if you spend the money to build it, users will use it, and customers will buy it. (Note: In business-to-business products, the user and the buyer can often be two very different groups with different agendas.) Beyond just confirmation, however, you need to understand these groups better. How does your initial target segment self-identify? What words do they use to describe the need? What motivates them to action—particularly to seek a solution, purchase it, and use it? If such a product existed, what would they expect to see? What would they need to buy it?

The fastest and cheapest way to explore these questions and build a mental model of your users and buyers is rapid prototyping. The most common format is low-fidelity wireframes, which are easy to produce and easy to change, so you can test multiple hypotheses; you cannot do that with working software. You may decide you need something higher fidelity, like a website where customers sign up for a service that appears automated but is actually handled manually behind the scenes (a “Wizard of Oz” prototype). The format your prototypes take is irrelevant as long as they:

  • Convey the core product concept
  • Are realistic enough to provoke a genuine response from users and buyers

Note that I said “prototypes,” plural. You should run these in short rounds because you are trying to provoke a response. Users and buyers can’t tell you what to build, but they will tell you when you’ve gotten it wrong, and short rounds let you see patterns. While product viability testing aims to bolster confidence that the product will indeed deliver commercial results, the real value is in the insights. Armed with them, if the MVP doesn’t get the response you hoped for, you’ll have a rich body of knowledge on which to base hypotheses about how to tweak it and win the adoption you were aiming for. Without these insights, a flop on release can mean death for a company, as it did for both Electroloom and Standout Jobs. Investors can tolerate a pivot if they think you’ve learned something that can make good on their initial investment.

Measuring Success

To avoid the dangers of confirmation bias, we recommend setting out success criteria for the product viability testing phase before starting the tests. You’re looking for proof that people in your target market value a solution that looks like your product or service concept. You can deviate from the original success criteria, but be honest with yourself about why you’re doing so. Is it because you’re getting impatient and love your idea even when the data suggests there isn’t a market for it? Or do you have a legitimate competitive threat that is pushing urgency and the need to take a bigger risk?

Often, the biggest challenge people face is figuring out the right assumptions to test—in other words, coming up with the proper hypothesis. In scientific terms, a hypothesis is a data-based starting point for further investigation and measurement. And as the adage goes, if you can’t measure it, you can’t improve it. So if you don’t nail the hypothesis, along with your measure of success, your learnings will be less concrete and harder to use as justification for increased spending.

Hence, you must ask yourself some important questions:

  • What am I testing?
  • What do I think will happen?
  • What do I need to learn?
  • How will I measure success?

The answers to these questions must be informed by data gleaned from market analysis as well as from prototyping, user research, and interviews. Working this out directly shapes how you’ll scope the MVP: which features to include and how far to take each one. There is a fine line between too little, good enough, and too much, and which side of it you land on can be the difference between success and failure. Answering these questions before you build is what puts your MVP in that sweet spot.
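To make this concrete, here is a minimal sketch, in Python, of one way to write down the answers to those four questions as a record you can hold yourself to. The product, metric, and threshold are entirely hypothetical placeholders, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class ViabilityTest:
    """One product viability test, written down before the test runs."""
    what_we_test: str         # What am I testing?
    prediction: str           # What do I think will happen?
    learning_goal: str        # What do I need to learn?
    success_metric: str       # How will I measure success?
    success_threshold: float  # The bar, fixed in advance

# Hypothetical example: a landing-page test for a new service concept.
test = ViabilityTest(
    what_we_test="Landing page describing the service and its price",
    prediction="Visitors from the target segment will request early access",
    learning_goal="Whether the segment values the concept enough to act",
    success_metric="Share of visitors who leave an email address",
    success_threshold=0.05,  # illustrative: a 5% sign-up rate counts as success
)

def passed(test: ViabilityTest, observed_rate: float) -> bool:
    # Judge results against the criteria set before the test ran.
    return observed_rate >= test.success_threshold

print(passed(test, observed_rate=0.08))  # True: this test met its bar
```

Fixing the threshold before the test runs is exactly what guards against the confirmation bias discussed above.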

So how do you arrive at the proper hypothesis? Simply put, you need to begin in the right place: seek to understand your market segment and, more specifically, your users and buyers. The Pragmatic Institute explains, “For an MVP, each feature must be tied to tangibly solving a top customer problem.” It follows that you can’t find your MVP’s sweet spot unless you understand what those problems are and the minimum a segment of your target market requires to consider your product a solution.

To that end, the Pragmatic Institute coined the acronym NIHITO: “Nothing Important Happens in the Office.” Their point is that getting out into the real world and collecting data about real people is the proper way to understand their needs. In turn, this data informs how to scope your MVP properly.

With that in mind, let’s look at the top-level issues that lead to an MVP that teaches you nothing.

The Main Problems With an MVP

Problem: There is no proper market research and validation

As Javier Trevino, Director of Technical Services at 3Pillar Global, says, “Companies and organizations may think they know what the end-user wants. If there is no real data, like that obtained from executing market research or from surveys/polls, then the MVP could be missing the mark.”

To avoid missing your target, start with market segment identification. Segments are discrete groups of users connected by qualities such as age, gender, profession, location, and other distinguishing demographics. Chron.com offers this example: “If you own a local pizzeria with a sit-down restaurant and delivery, two key market segments might be families and college students. For a forensic consulting company, market segments include trial lawyers, in-house counsel for businesses, and law enforcement agencies.”

You identify these segments by conducting quantitative research, which allows you to sort customers and prospects into high-level groups based on specific markers such as demographics, buying habits, and affiliations.
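As an illustration, here is a minimal sketch of that kind of grouping, assuming a hypothetical survey export with demographic and behavioral columns (the file name and every column name are placeholders):

```python
import pandas as pd

# Hypothetical survey export; columns are illustrative placeholders.
responses = pd.read_csv("survey_responses.csv")

# Group respondents by high-level demographic markers, then compare
# behavior across the resulting groups.
segments = (
    responses
    .groupby(["profession", "age_band"])
    .agg(
        respondents=("respondent_id", "count"),
        orders_delivery=("orders_delivery", "mean"),  # share who order delivery
        monthly_spend=("monthly_spend", "median"),
    )
    .sort_values("respondents", ascending=False)
)

# The largest groups with distinctive behavior are candidate segments.
print(segments.head(10))
```

A real analysis would go further (clustering, significance testing), but even this simple grouping turns “we think students are a segment” into something you can check against data.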

Problem: There is no target user defined

This is a more granular and personal category than market segments. Segments help narrow down the big-picture “who,” while user personas help you understand the underlying “why.” Keep in mind that users and buyers may not always be the same group, so be sure to differentiate them in your research.

This research involves identifying and understanding the problems you’re solving and the pain points people face. Because you want to make data-informed decisions, perform a combination of quantitative research (drawing on behavioral analytics) and qualitative research (including user interviews and surveys) to understand the pain points, underlying motivations, and problems of users and buyers.

Problem: You didn’t generate user journeys for your personas

Creating user journeys involves mapping out steps people take as they try to solve a problem. It can further reveal the emotions, pain points, and motivations involved and illuminate where your solution fits into their lives.

A user journey map also helps you visualize the minimum set of steps (or features) that delivers the maximum value for your MVP, and it shows where your product fits into users’ workflows.
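A journey map doesn’t require special tooling. Even a simple structured list, as in this minimal sketch, forces you to name each step, its pain point, and the candidate feature that would address it. Every step and feature below is illustrative, building on the pizzeria example:

```python
from dataclasses import dataclass

@dataclass
class JourneyStep:
    action: str             # what the person does at this step
    pain_point: str         # the friction they hit
    candidate_feature: str  # the feature that could remove that friction

# Illustrative journey for the college-student segment of the pizzeria example.
journey = [
    JourneyStep("Decide on dinner with roommates",
                "Hard to agree on toppings everyone wants",
                "Group ordering with per-person choices"),
    JourneyStep("Place the order",
                "No budget-friendly option for students",
                "Student bundle pricing"),
    JourneyStep("Wait for delivery",
                "No idea when the food will arrive",
                "Live order tracking"),
]

# Each step with no feature is a gap in the MVP; each feature that maps
# to no step is a candidate to cut.
for step in journey:
    print(f"{step.action} -> {step.pain_point} -> {step.candidate_feature}")
```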

Problem: There is no problem statement defined

Creating data-informed personas and mapping user journeys will naturally lead to the creation of your problem statement. Using the pizzeria example, a problem statement might be, “College students who are gluten intolerant need an affordable gluten-free menu option so that they can enjoy pizza with their friends.”

Avoiding all of the problems we’ve mentioned in this article, as well as doing the right preparation, will help you form your data-driven hypothesis to test, along with the measure of success. This will, in turn, inform your scope.

Problem: Features aren’t scoped/prioritized properly (as in there aren’t enough or there are too many)

It’s challenging to whittle down features. However, it is an essential step that must be done and done thoughtfully. As Michael Rabjohns, User Experience Practice Leader at 3Pillar Global, explains, “You really need to understand which features and functionality are most important—if there’s not enough ‘there there,’ users won’t adopt it. You need to be able to live with that decision; [It’s] harder for product owners when we know about all these other great features under consideration that users would love.”

However, if you’ve done the due diligence discussed above, then you can let go of your “want to haves” and focus on the “must-haves.” In other words, you’ll be empowered to make data-driven decisions about what realistically falls into that “just-right zone.” Why? Because you began by understanding your users and buyers. You now understand their problems, where and how they encounter these problems, and you have assumptions to test regarding how your product can solve the problem and deliver value.

Conclusion

Think back to the failed startups we mentioned at the beginning of this article, Electroloom and Standout Jobs. While it’s true that hindsight is 20/20, it’s easy to see how they could have given themselves much better odds of success if only they’d done their due diligence. Had they entered the market with a data-informed understanding of their potential target market and a clear hypothesis to test, they would have understood how to iterate and improve their product when things went wrong. Instead, they were unable to make any data-driven decisions. As a result, they failed.

Just about every MVP-related problem can be traced back to improper preparation. For your MVP to be the valuable learning tool it’s meant to be, set yourself up to learn BEFORE you ideate and build. Do your due diligence in the form of quantitative and qualitative market and user research. This will lead to data-driven decisions about your hypothesis, your measure of success, and your feature scope.

To learn more about 3Pillar Global’s services and how we can help you create a minimum viable product to test and validate your assumptions with real customers, contact an expert today.

Special thanks to these members of FORCE, 3Pillar’s expert network, for their contributions to this article.

FORCE is 3Pillar Global’s Thought Leadership Team, composed of technologists and industry experts offering their knowledge on important trends and topics in digital product development.