Feb 6, 2020 · 8 min read

Starter KPIs for B2B/Enterprise


I’m often asked what KPIs B2B/enterprise product folks should use, or what OKRs they should choose. This is (of course) an unreasonable question, as every product/business has its own uniqueness. Vaguely like being asked by a stranger what books they would enjoy: hard to guess unless I know their preferences. But this raises three broad questions:

  1. Shouldn’t we spend more time choosing our specific KPIs (or OKRs) than deciding whether to adopt metrics in general?
  2. Why don’t KPIs from consumer companies fit well with B2B/enterprise?
  3. What are some B2B KPI starting points, knowing that every company is different?

[1] Generic KPIs?

I know a dozen companies that spent months deciding whether to have KPIs (or adopt OKRs) and less than 15 minutes choosing which KPIs. Much thought into the general need for business metrics, then a few moments brainstorming in exec staff about what to measure. This feels like an instance of chasing pre-digested best practices: “Amazon uses metrics and is very successful” or “a board member just gave me a copy of John Doerr’s book.” If we think of KPIs as generic and universal, then it makes sense to copy them directly from successful companies. Just as every organization needs a finance/accounting team that follows GAAP and tracks cashflow.

But metrics aren’t generic. A music streaming service needs different health indicators than an aircraft manufacturer or online ad marketplace or security software vendor or dating app. It’s important to find KPIs that will provide insight into your business and help uncover underlying issues. Good KPIs should raise interesting questions and challenge prevailing wisdom. So we should take the time to propose various metrics, review them with our teams, argue a bit, and consider our first choices as experiments rather than instant full-year commitments.

And I entirely reject gross revenue as a company-wide KPI. Knowing how much money we’re supposed to bring in doesn’t help us decide what to do next. (“Let’s raise prices. That will boost revenue.” Maybe. “One-time deep discounts will pull in more marginal deals.” But that mortgages next quarter. “Companies in the foreign exchange markets make a lot of money. Let’s switch out of ERP systems and become currency traders.” Grass is greener. “We should keep hiring salespeople, since each carries a $2M quota.” Diminishing returns. “Let’s acquire Company X and upsell their product to all of our current customers.” Rarely pays out.) While the Board constantly asks about quarterly sales, that doesn’t help Product and Engineering validate what else customers really need, deliver consistently valuable products, or weigh the impact on the installed base.

I find that undifferentiated company-wide revenue goals encourage short-term thinking, opportunistic selling outside our target audience, and a predictable shift from products to professional services. (“BigCorp has $2M set aside for Blockchain research. Let’s bid on that, and assign just a few folks from core development if we win.”)

IMHO, setting gross revenue as a primary KPI encourages sloppy short-term thinking, whale hunting, and fictitious business cases. Yet it’s the first KPI proposed by many exec teams.

Instead, I look for metrics that are cross-functional, track value as customers perceive it instead of our revenue scoreboards, and push us to ask ourselves serious data-informed questions. (“If churn is up, what does our real data about actual lost customers tell us — beyond our own recency bias and anecdotes?”) Hard to get this right in 15 minutes. We need to be humble, experimental and intellectually honest when proposing company-wide indicators.

[2] The Most Visible KPIs are from Consumer-Focused Companies

[Photo: pirate ship]

Most of us have seen Pirate Metrics, Qualified Marketing Traffic, or Net Promoter Score. These often come from visible, credible consumer tech companies with household names and high transaction volumes. But I find they don’t map well to enterprise companies. Many of the underlying B2C assumptions don’t work for B2B:

  • The buyer is the user
  • Buying decisions are made quickly based on a few messages or touches
  • Products are easy to try or test before purchase
  • We can interview hundreds of prospects without stepping on current sales efforts
  • Our audience is large enough, and any one individual sale small enough, that we can run pricing or messaging or packaging experiments on live customers

For instance, most consumer companies are deeply analytical about customer acquisition costs (CAC). This makes sense when most outbound spending is through Marketing (not field sales) and we can isolate campaigns or messages or channels with enough volume to accurately compute CAC.  (Fitbit sold 28M units before being acquired, having tested hundreds of distinct marketing campaigns.)

Enterprise sales cycles are 6-18 months, with dozens of touches and contributions from every department. So CAC might be our total Sales+Marketing+Support spend divided across 30 or 50 new customers per year. Everyone who interacts with prospects will claim some part of our success. Likewise, most departments support current customers/users – making it unreasonable to break out fixed versus variable costs.  We need metrics that match our audience and business cycle – and ideally reflect customer value rather than just our financial results.
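For illustration, here’s a minimal sketch (in Python, with entirely made-up numbers) of why blended enterprise CAC is such a blunt instrument: all go-to-market spend divides across a few dozen new logos, with no per-channel denominator to learn from.

    # Hypothetical annual figures; real ones would come from Finance.
    annual_spend = {"sales": 4_200_000, "marketing": 1_500_000, "support": 800_000}
    new_customers = 40  # somewhere in that 30-50 logos-per-year range

    blended_cac = sum(annual_spend.values()) / new_customers
    print(f"Blended CAC: ${blended_cac:,.0f} per new customer")

    # Unlike a consumer company running hundreds of campaigns, there's no
    # per-campaign or per-channel denominator big enough to tell us which
    # spend actually acquired which customer.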

So Where Should B2B/Enterprise Folks Start?

Even though I don’t believe in generic KPIs, I have a few suggested starting places. At the broad portfolio level, we should track KPIs that will help us track real customer value, spot trends early enough to respond, and avoid organizationally siloed metrics that push problems onto another department.  (We’ve all heard the shouting when Marketing and Sales have mis-aligned KPIs: “Marketing delivered 17% more qualified leads this quarter, beating our goal, even though Sales couldn’t close a barn door” answered by “Marketing thinks anyone who fogs a mirror is qualified. Sales is discarding half of what they send us. Let’s divert their budget to more reps in the field.”)

Also, in most enterprise sales, buyers and users are different people. RFPs, vendor selection committees and purchasing departments distract us from how we help actual end users. And nearly all software deals have moved from perpetual licenses to annual subscriptions. So upfront revenue becomes less important than renewals and upsell (“land and expand”). We’re in a long-term relationship with our buyers and users, where we have to demonstrate value monthly or quarterly, not just at annual contract renewal.

TTV: Time-To-Value

I often suggest a starter KPI around time to value (TTV): measuring the number of days between when a customer signs up for our product/service and first gets the core value we offer. If we’re supposed to manage a corporation’s employee 401(k) program, how many weeks after invoicing can their first employee change a payroll deduction? If we help manufacturers speed up their assembly lines, how many months of analysis before they see more widgets coming off their conveyors? If we offer machine learning insights on product trends for retailers, when does that first brilliant analysis boost actual in-store sales? If our credit scoring app for ecommerce vendors is supposed to reduce false positives and reject fewer legitimate purchases, does it take half a year to implement? If we offer improved sales qualification screening, when does our customer first see revenue boosts attributable to our process? Note that good time-to-value metrics force us to identify exactly when paying customers actually get value from us, not just when they pay us.

Wait time here is wasted time. Wasted money. A 90-day installation cycle means that our customers paid for 3/12 of the year with no benefit. And we risk disheartening our buyer-side champions, or never getting our products working at all.

Consistently collecting and sharing our actual numbers, though, can get us thinking clearly and cross-functionally. If the twelve customers we signed in Q4 took an average of 65 days to achieve what we promised, then we can dissect each individual timeline. Average of 8 days waiting for contract signatures and NDA? 15 days in their IT queue for system access? 3 months for software connectors that were promised but not finished? 5 weeks to schedule and fly our only trainer to their 4 sites? Ten day fire drill to create one-off SKUs for Accounting and Operations that match the non-standard software we spitballed for a special deal?  Crawling through real events and real timelines helps highlight real issues. We have to look facts in the face.

And that helps us think creatively about improvements. Could we submit IT requests for customer system access sooner, or reduce the kinds of access we need?  (Maybe.) Could we train a second onsite instructor?  (Maybe.) Could we create an installation wizard that reduces manual configuration and shrinks support calls?  (Maybe.) Could we fix the login bug that often blocks user access? (Maybe.) Could we delay commission payments for deals sold ahead of released software, or give Product/Engineering pre-approval on non-standard bids?  (Probably should.) Here’s where cross-functional collaboration pays dividends.

Once we understand the current situation, this shapes a more formal OKR: “Reduce average customer Time-To-Value from 65 days to 55 days by end of Q2.” No surprise that faster TTV pleases customers and also saves us money. Perhaps unblocks our onboarding pipeline.
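The bookkeeping behind that average is simple once we agree on what counts as “first value” for each customer. A minimal Python sketch, with hypothetical customers and dates (real ones would come from our CRM and onboarding tracker):

    from datetime import date
    from statistics import mean

    # Hypothetical Q4 signings. "first_value" is whatever event we've agreed
    # means the customer actually got value, not just paid us.
    onboarding = [
        {"customer": "Acme",    "signed": date(2019, 10, 3),  "first_value": date(2019, 12, 9)},
        {"customer": "Initech", "signed": date(2019, 10, 21), "first_value": date(2019, 12, 18)},
        {"customer": "Globex",  "signed": date(2019, 11, 8),  "first_value": date(2020, 1, 10)},
    ]

    ttv_days = [(c["first_value"] - c["signed"]).days for c in onboarding]
    print(f"Average TTV: {mean(ttv_days):.0f} days")  # the number we want to drive down

    # Dissecting individual timelines means logging the intermediate waits too:
    # contract signatures, IT access queues, unfinished connectors, trainer travel.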

Inactive Customers?

[Photo: empty desk]

Another starter KPI might be inactive customers: paid corporate accounts where not a single user logged into our system in the last few weeks. (Pick a relevant duration for your product/service.) If we’re an analytics platform with 90 corporate customers, how many paying accounts had no users doing analytics in the last two weeks? If our online image repository helps marketing teams locate images for collateral, which companies haven’t searched for any pictures this month? If our database of adverse drug interactions helps drug companies spot problems early, which Big Pharma players are paying us but haven’t looked at our data recently? If we offer online surveys, which product marketing teams haven’t fielded a survey in the last quarter?

This is the flip side of Time-To-Value, and can highlight customer-side problems long before renewal. (If Customer X has gone quiet in Month 3 of our annual subscription, we have 8 or 9 months to solve this and earn back next year’s contract.) It’s also a chance to identify product or competitive challenges early: we should be proactively calling up to understand what’s happening. We’ll generate some interesting C-level reactions by presenting the list of currently inactive customers at every week’s executive meeting.

Inactive customers might be a poor metric for your business. It won’t work well with API-based products if the customer’s system pulls fresh data every minute and then no one looks at it. Or for security scanning software, since intrusions and alerts may be sporadic. Or insurance offerings, where customers hope they never have to file a claim. Apply your own best judgment. But it’s more useful than just looking at quarter-to-date revenue and philosophizing about how to make that bigger.
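If the metric does fit your business, the mechanics are simple. A minimal sketch, assuming we can pull each paid account’s most recent user login from our own event logs (account names and fields are hypothetical):

    from datetime import date, timedelta

    # Hypothetical paid accounts with their most recent user login.
    accounts = [
        {"name": "Acme",    "last_login": date(2020, 2, 4)},
        {"name": "Initech", "last_login": date(2020, 1, 2)},
        {"name": "Globex",  "last_login": None},  # nobody has ever logged in
    ]

    window = timedelta(days=14)  # pick a duration relevant to your product/service
    today = date(2020, 2, 6)

    inactive = [a["name"] for a in accounts
                if a["last_login"] is None or today - a["last_login"] > window]
    print("Inactive accounts to call this week:", inactive)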

Sound Byte

Useful KPIs reflect real customer value, push our thinking, and help align departmental goals. They are early looks into how our users/buyers are doing. But we have to put real work into picking them, and then iterate as we learn. One size doesn’t fit all; copying from another company can be counter-productive.


Photo credits: pirate ship by Austin Neill on Unsplash, empty desk by Lowie Vanhoutte on Unsplash.
