Sep 22, 2020

I’ve abandoned “MVP”

After years of struggle, I’m advising all of my clients and product leader coachees to stop using the term “MVP”.  Not to stop doing the validation, discovery, prototyping or experiments they may associate with that acronym, but to remove the label from all of their docs and presentations and talks.  To delete the letters MVP from roadmaps and product charters.  To banish it from their vocabularies, not let it cross their lips.  Here’s why…

Almost without fail, I find that the “maker” side of software companies (developers, designers, product folks, DevOps, tech writers…) and the “go-to-market” side of software companies (sales, marketing, support, customer success…) have irreconcilable definitions of MVP.

The maker organizations refer back to Eric Ries (and much earlier) for MVP definitions like “any offering or capability that requires the least effort to build while giving the organization the greatest ability to learn about customers.”  Critically, this might be a text description or Balsamiq sketch or non-working web prototype or flipbook that gives us a chance to show an outsider what we mean.  It’s not intended for sale or revenue, but for rapid learning.  The ‘P’ nominally stands for ‘product,’ but it’s almost always not a product.

The go-to-market organizations really want something to market and sell.  So in spite of our long-winded academic arguments, they focus on the word ‘product.’  In fact, the most frequent definition I get from GTM execs is “something I can sell right now to selected customers for current revenue, instead of waiting another few quarters for a perfect product.”  And within minutes, this is shortened to “something I can sell right now.” Everyone is excited about pitching (and closing) live customers, even if we have only mocked-up slideware so far.

Subtle distinctions are lost.  Our fussy maker-side warnings and caveats and qualifiers are forgotten.  Filed under “whiny excuses from product management for missing delivery dates.”   So IMHO, calling something an MVP invites chaos.


Some frequent bad outcomes from this confusion:

  • We never finish our MVP. Stakeholders keep expanding the definition of ‘done’, since we can’t ship a real revenue product without features A, B, C, X, Y and Z.  This short-circuits learning and slows down delivery.  Engineering and Product are written off as intellectual time-wasters.
  • We spin up outbound marketing/support efforts too early. Marketing needs lots of late-stage assets: screen shots, validated benefit statements, ROI calculators, crisp segmentation, reference customers.  Support needs installation guides, training sessions, FAQs, bug reporting categories.  These (of course) don’t exist yet, since we’re still concept-testing problem statements and feature/function and technical requirements.  Most of this will change, creating several rounds of rework as we add/remove/update features, replace mocks with final visuals, and revise pricing & packaging.  Lots of frustration and wasted time.  But the pressure for too-early assets is irresistible.
  • We rush our “learning” MVP to paying customers or serious prospects. Sales folks think this is real, and engage too early.  Since it has only a few partially working functions (or none at all), everyone is embarrassed.  Field teams swear never to sell the finished version – whenever it might arrive – and conclude that the maker team is incompetent.  The fully-featured product is dead on arrival, having already been labeled as a failure.
  • Product and design/UX folks are barred from ever showing any more prototypes to paying customers. The clear distinction between validation and selling is lost, with account teams protecting individual customers (and our reputation) at all costs.  Makers settle for interrogating our own employees about likely user reactions.  This undercuts the reason designers and product managers need direct feedback and reactions from live users early in the product cycle: to balance out internal myopia, recency bias, lack of technical context, and our many wrong assumptions.  I prefer that we learn our hard lessons many months before shipping, at little cost, instead of apologizing for v1.0.

What To Do?

This is often more than just a vocabulary problem.  But we can reduce confusion/frustration by choosing unambiguous words with clear meanings.   Here is one set of replacements:

“Problem validation” or “Concept test” (NON-REVENUE)

  • Tools might be Jobs To Be Done, canvases, journey maps, paper prototypes, hero stories, what-you-bought-instead interviews
  • Goal: understand whether we identified the right root problem. Can we learn what’s really broken and test-close a few imaginary solutions?  How excited are potential prospects?

“Conceptual workflow” or “UX test” (NON-REVENUE)

  • Assets might include mocks or flipbooks or clickable web interfaces with no internal logic
  • Goal: See if sample users can complete a task. What inputs are we missing?  Have we named things correctly?  Does workflow A or layout B test better?

“Engineering concept validation” (NON-REVENUE)

  • Might include sample data run through partial algorithms or scalability tests or “hello world” code for API integration
  • Goal/questions: will X work? Does it blow up our cloud servers?  Can we find actionable signals in the data?  Is our architecture sensible?  Do early technical trials change our scope or identify missing capabilities?

“Technical beta test” (NON-REVENUE)

  • Might use an early build with core functions but limited security, error checking, corner cases, documentation. We probably install/configure it for the victim test user.
  • Goal: See if one or two friendly customers can get an early product version to work at all (for free). What can we learn about their environment?  Where do they get stuck?  What else do we need for successful volume deployments?

“Early Adopter Program” (REVENUE)

  • We’ll need a pretty complete working product for a narrow set of real customers, supplemented by lots of TLC and hands-on help. Product management should have a detailed qualification checklist to make sure these are “friendly” customers who meet all of our technical requirements.  Ideally, none are currently negotiating big licensing deals with us.
  • Goal: upon completion, we want them to pay us something and be a sales reference.

“Full Market Availability” (REVENUE)

  • We have everything we need to sign up a broad set of successful paying customers. That includes fully tested software, documentation, pricing/packaging, part numbers, marketing assets, targeted lead gen, competitive intelligence, support training, problem escalation, partner/channel materials, etc.
  • Goal: roll out a winning product for noteworthy revenue, market acclaim, excited customers, and reflected glory for the maker team.

Sound Byte

You can pick whatever stages and artifacts fit your situation.  But it seems critical to give those stages plain, hard-to-misinterpret names that clearly signal whether revenue is involved.  No BS, no positioning, no obfuscation.  Our maker teams and go-to-market groups work too hard for us to confuse each other.  Let’s skip the next thousand hours arguing about whether something is minimal, viable, or a product.
