May 2, 2019

How (Industrial) Hardware Is Different from (B2B) Software


Products are products, yes? When we talk broadly and generically about product management, we assume that all products/services are similar enough that we can apply the same tools, techniques, financial planning models, design approaches, goals and metrics. But I’ve been working with several clients who build long-lived industrial hardware — and whose core operating assumptions can make it difficult to add revenue software products to their portfolio.

Serious industrial companies build serious long-lived industrial equipment. Think about automotive assembly line infrastructure. Or massive cranes to load/unload containers from commercial cargo ships. Or power plant turbines and large-scale solar installations. Or elevators for high-rise office buildings. Or offshore oil drilling rigs. Or jet engines. These companies need a special mix of physical engineering expertise, manufacturing savvy, tight financial controls and long planning horizons.

Increasingly, these same companies build software to monitor, adjust, and analyze the real-time performance of their underlying equipment. To sell supplies based on actual usage. To share data with partners. To meet customer expectations for consumer-like software usability. To answer threats from software-only competitors.

But product management and development processes that work well for long-lived hardware can handicap software organizations. Companies with decades of industrial hardware success often have trouble adapting their standard operating procedures to digital offerings.

Let’s identify a few underlying assumptions about how industrial hardware is designed/built/sold, then map those to B2B software products. Depending on which side of the aisle you come from, half of these assertions will be completely obvious and the other half mystifying.

[1] For industrial hardware, the development/design cycle is separate from the manufacturing cycle. We may spend months (or years) building out a near-perfect physical prototype of our next version, completely specifying our next product iteration by rigorously designing and constructing a test unit or two – then releasing the detailed specs, bill of materials, cost estimates, and performance stats to our manufacturing/operations/purchasing teams. Most of the optimization of time, cost, risk, scheduling and finances happens in the manufacturing cycle (not the engineering and design cycle), since every unit is expensive to build. We may need to retool an entire assembly process, source specialty parts, and semi-accurately forecast physical demand for physical units.

For software products, the development and design process *is* the manufacturing process.  We work from higher-level user stories or epics or goals/KPIs, making incremental product and technical decisions every day.  With solid DevOps or CI/CD tooling, our formal “release to customers” is a few keystrokes or a push to our multi-tenant SaaS platform.

Unlike for major hardware, there is no perfectly accurate software spec.  It’s impossible to completely describe every data validation check, option, logical branch, dialog box, pixel placement, error message and workflow without duplicating the code itself.  So we answer some questions by running our application – to see exactly what happens in some obscure situation – or by looking directly into the software. And enterprise software is layered: we build on operating systems and cloud platforms and commercial toolkits that are each evolving under us, which means we never have a permanently stable (“finished”) product as our hardware partners do.

Implication: it makes perfect sense for industrial hardware executives to ask for precise features, manufacturing costs, performance stats, bills of materials and delivery schedules before committing to full production of a new product.  We use the hardware development/design cycle to drive out risk before committing $M’s to actually producing the goods. This makes little sense for software products, though, where we spend all of our time and money creating the first unit. Software product folks are generally struck speechless by executive requests for precise schedules, accurate costing, fixed feature delivery dates or performance guarantees.

[2] For industrial hardware, per-unit margin is our primary success metric. Per-unit margin is the difference between “net revenue on each turbine we build” and “total cost to build and deliver one more turbine,” with our design costs amortized across annual turbine production. We spend lots of energy improving per-unit margin and reducing manufacturing costs, making sure that every unit is individually profitable. The company’s P&L puts “Revenue” and “Cost of Goods Sold” at the top, for the world to see. Most employees know the gross margin on their products to a fraction of a percent.

And the company spends tremendous energy on “margin expansion:” squeezing out more profit per unit by reducing part counts, substituting materials, finding alternate suppliers, outsourcing assembly, and managing down inventory carrying costs. (“If we can reduce our per-unit production costs from $95k to $92k per conveyor belt, we can boost earnings by $15M/year.”) The Purchasing team insists on secondary and tertiary suppliers to give us purchasing leverage and avoid down time. (“We source 70% of our custom 5mm platinum fasteners from Supplier X, but also buy from Supplier Y in case X has a production issue or threatens to raise prices.”)
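To make that arithmetic concrete, here’s a back-of-the-envelope sketch in Python. The $15M/year figure implies an annual volume of about 5,000 conveyor belts; that volume and the per-unit selling price are my assumptions for illustration, not real manufacturer data:

```python
# Back-of-the-envelope per-unit margin math. The selling price and the
# 5,000-unit annual volume are assumptions chosen so that a $3k/unit
# cost reduction pencils out to the $15M/year quoted above.
net_revenue_per_unit = 130_000   # assumed net revenue per conveyor belt
old_cost_per_unit = 95_000
new_cost_per_unit = 92_000
annual_units = 5_000             # assumed annual production volume

old_margin = net_revenue_per_unit - old_cost_per_unit
new_margin = net_revenue_per_unit - new_cost_per_unit
annual_earnings_boost = (old_cost_per_unit - new_cost_per_unit) * annual_units

print(f"Per-unit margin: ${old_margin:,} -> ${new_margin:,}")
print(f"Annual earnings boost: ${annual_earnings_boost:,}")  # $15,000,000
```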

Per-unit margin makes little sense in the software business.  We spend most of our money building a releasable product (the first copy, if you will) but can sell additional copies at almost no cost. All software economics are about scale – selling thousands of copies of the identical bits – to get past break-even and reach absurd levels of profitability.
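A sketch of that scale economics, with entirely hypothetical numbers: a large fixed cost to build the first releasable copy, a tiny marginal cost per additional copy, and profit that explodes once we pass break-even.

```python
# Software scale economics with hypothetical numbers: a big fixed cost
# to create the first releasable copy, near-zero cost for each copy after.
fixed_dev_cost = 8_000_000     # build the "first unit"
marginal_cost = 500            # hosting/support/commissions per copy
price_per_copy = 35_000

def cumulative_profit(copies: int) -> int:
    return copies * (price_per_copy - marginal_cost) - fixed_dev_cost

break_even = -(-fixed_dev_cost // (price_per_copy - marginal_cost))  # ceiling division
print(f"Break-even at {break_even} copies")                      # 232 copies
print(f"Profit at 5,000 copies: ${cumulative_profit(5_000):,}")  # $164,500,000
```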

And applying hardware cost reduction lessons to software is incoherent.  We want to hire the best developers and designers and product managers (rather than the cheapest) because great products dramatically outsell inferior ones. Brilliant designs grow our fan base and get us past break-even sooner. And there are few costs other than salaries: skimping on monitors or whiteboards or GitHub repositories isn’t helpful or material.  Let alone having Purchasing select our test automation tool vendor based on lowest price.

Implication: industrial hardware companies live and die on per-unit margin, which is reflected in every company process from sales reporting to staffing to budgeting.  That’s meaningless for software companies, however, who need to focus reporting and goal-setting on cumulative subscribers and churn. Executive teams need a distinct set of operating metrics and financial tools to understand their two distinct businesses.
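For instance, a minimal sketch of the SaaS-side scoreboard (cumulative subscribers driven by new signups and churn), using hypothetical figures:

```python
# The SaaS scoreboard: cumulative subscribers and monthly churn rather
# than per-unit margin. All figures here are hypothetical.
subscribers = 1_000
monthly_new = 80
monthly_churn_rate = 0.02   # 2% of subscribers cancel each month

for month in range(12):
    subscribers = subscribers - round(subscribers * monthly_churn_rate) + monthly_new

print(f"Subscribers after 12 months: {subscribers}")
# Steady state is monthly_new / monthly_churn_rate = 4,000 subscribers:
# churn, not sales, eventually caps growth.
```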

[3] For industrial hardware, we “know” in advance the key dimensions of product improvement. Especially in long-established markets for high-priced capital equipment, we and our competitors maneuver to incrementally improve our products’ essential physicality: cost, throughput, weight, reliability/MTBF, dimensions, ease of servicing. If we can tune our injection molding equipment to be 4% more energy efficient or 2% faster or 6% less likely to need off-cycle servicing or 3% less expensive to build, we can boost margins and win more deals. The need for these improvements is (relatively) obvious to us and our customers – the hard part is how we might engineer such improvements. We spend a lot of time/money/energy researching how to make our products a little better, concentrating on technical exploration. No need to validate if industrial customers “want” such improvements.

The competitive options for software products are much wider, with less predictable outcomes.  Rather than competing on physical limits and performance, we have a broad improvement palette including easier workflows; data integrations; analytics; self-configuration; branding; automatic updates; faster onboarding; reports that show customer savings-to-date; cloud storage; voice interfaces; multi-factor authentication; delegated security permissions; self-diagnostics; text notification; and customizable dashboards.  Meanwhile, our competitors are picking their own improvement strategies. All of which depends on understanding the relative value that different customer groups might see in our various improvements.

Implication: this puts a huge premium on rigorous user/market validation ahead of building potentially exciting features.  It’s much less obvious which software improvements will strike a chord with which audience segments, especially since users often want to see our software rather than hear it described.  And building out a fully working feature means spending real money.  Our development team gets paid (handsomely!) whether they build a feature that our users love, or a feature that’s never used.  So we invest first in sketches, paper prototypes, clickable demos, and other low-cost design artifacts. And we continuously user-test our early concepts.  It’s easy to get this wrong by substituting personal opinions for good, continuous field validation — the experiential nature of software makes it much harder to predict what will motivate users.  And meeting our schedule or budget commitments by delivering something that fails to excite our audience is 100% waste.

(Of course, this assumes a stable hardware market with few disruptions. But change creeps up on us. The manufacturers of clocks, cameras, calculators, security tokens, GPS devices, pedometers, bulk mailing equipment, radio broadcasting gear, typewriters and residential thermostats have each *suddenly* discovered themselves less relevant in the digital era. So even industrial hardware companies need to do ongoing market/customer research to avoid disappearing.)

[4] Industrial hardware is installed once, at great expense, and stays in place for years or decades. So that unit must be nearly perfect on delivery and already include all of the features that our customer needs. Maintenance and upgrades are infrequent and scheduled months in advance, since they require shutting down equipment or halting production processes.

Since upgrades mean shutting down our customers’ equipment or halting their production, we practice every step in advance rather than discover problems for the first time on-site. (Hint: sometimes waterfall is precisely the right approach.)

Adding a new feature or tuning our existing hardware’s performance is probably an enormous undertaking. So our next chance for dramatic improvement may be at the end of this equipment’s lifecycle, ten or twenty years from now, when we replace it. Which means end-of-lifing a major product could take decades.

SaaS products work in exactly the opposite way.  Our customers pay us every month for a stream of improvements, updates, and new capabilities — an assumed part of every subscription.  So we need to flawlessly and transparently push bug fixes and minor updates as often as necessary (hourly!), after verifying that no current user will lose data (via fully automated regression testing!).  And our roll-out plans for major improvements balance customer urgency against sufficient training/notification/promotion.

For example, we might be rolling out a series of new database connectors for our multi-tenant ERP system.  Different customers need to exchange data with different sources, so we have a roadmap to add 15 new connectors this year.  As each is finished, we add an option to configure that database and announce it to all of our customers. Some use it, others wait for later updates.  We maintain one codeline.
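A minimal sketch of that single-codeline pattern, where every shipped connector is available to all tenants but each tenant enables only what they need (all class, tenant, and connector names here are hypothetical):

```python
# One codeline, many tenants: every shipped connector is available to
# everyone, but each tenant configures only the ones they need.
AVAILABLE_CONNECTORS = {"postgres", "oracle", "sap_hana"}  # grows each release

class Tenant:
    def __init__(self, name: str, enabled: set):
        self.name = name
        # A tenant can only enable connectors that have actually shipped.
        self.enabled = enabled & AVAILABLE_CONNECTORS

    def sync(self, connector: str) -> str:
        if connector not in self.enabled:
            return f"{connector} is not configured for {self.name}"
        return f"Syncing {self.name} via {connector}"

acme = Tenant("acme", {"postgres"})                # adopted the first connector
globex = Tenant("globex", {"oracle", "sap_hana"})  # waited for later updates
print(acme.sync("postgres"))
print(acme.sync("sap_hana"))  # not configured; no per-customer code branch needed
```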

On-premise software is halfway between.  Customers want to think of on-premise software as permanent and ignorable.  And they don’t install our updates very often, since that requires time and effort and planning and testing against other systems.  But they occasionally press us for hot fixes when something breaks. So we probably have as many codelines as major on-premise customers, and we struggle to add value over time.  And sunsetting on-premise software can take a decade.

Implication: hardware product managers assume mostly stable equipment; software product managers learn to love constant updates and frequent releases.

[5] Hardware pricing starts with cost-of-goods and manufacturing. We calculate what it costs to build one more unit, add our target margin, and adjust for competition or market conditions. If it costs us $150k to build a construction crane and our investors want 50% margin, our average selling price needs to be near $300k. Competitors with similar cost structures may win deals by giving up margin. Savvier manufacturers may reduce costs, and therefore prices. Customers have a pretty good idea of what goes into our product, so can spot outrageous profits.
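As a tiny sketch, using the illustrative crane numbers above (with margin expressed as a fraction of selling price):

```python
# Cost-plus pricing: start from unit cost and a target gross margin,
# per the construction crane example above.
unit_cost = 150_000
target_margin = 0.50   # gross margin as a fraction of selling price

average_selling_price = unit_cost / (1 - target_margin)
print(f"ASP: ${average_selling_price:,.0f}")   # $300,000
```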

We pay our software engineering team whether we sell one unit or ten thousand, so per-unit software costs are not useful.  (Hardware logic would suggest we charge the first customer $8M and all subsequent customers just enough to cover sales commissions.)   

And customers buy software based on value, not cost.  If our product can save them $200k/year, then we’re entitled to charge a portion of that savings (say $35k).  No one cares how hard we worked on our software, or the size of our development team, or our methodology. Does it do the job they’ve hired it to do? Does it deliver the performance or savings or efficiencies or increased revenue that we promised?  Software pricing should always start with a computation of likely customer benefits. And if we can’t make money at those prices, we should build something else.
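Value-based pricing inverts that calculation. The capture rate below is my assumption: charging $35k against $200k of savings implies capturing roughly 17–18% of delivered value.

```python
# Value-based pricing starts from customer benefit, not our costs.
# The capture rate is an assumption, back-solved from the $35k example.
annual_customer_savings = 200_000
value_capture_rate = 0.175   # assumed share of savings we can charge for

price = annual_customer_savings * value_capture_rate
print(f"Defensible price: ${price:,.0f}/year")   # $35,000
```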

Implication: pricing based on costs is mostly inward-looking.  Pricing based on customer value is mostly outward-looking.

[6] If we give our software away for free, it’s hard to justify additional investments in software. Hardware companies tend to use software as a way to prop up hardware prices: remote diagnostics or cloud analytics or improved interfaces are included for free to differentiate our hardware from competitors. We assign all of the revenue to our base hardware, which makes our software teams pure cost centers. Executives focus selling and marketing efforts on core hardware, since that’s what shows up in quarterly financial reporting. Sales teams don’t spend time understanding software “products” since they don’t earn any commission. And we struggle to measure (or justify) the top-line impact of better software, falling back on surveys, usage statistics, and anecdotes.

We inevitably discover that the software team needs to grow, but our ROI arguments are very weak. (“This UX redesign will boost customer satisfaction. But our equipment replacement cycle is eight years, so we may not see much impact for a while.” “Better equipment monitoring will boost parts and supplies orders, but all of that revenue is assigned to Service Operations. So we need that group to fund our work.”)

When we charge for software, we assign value to it.  It gets its own row in revenue reports.  Sales teams get paid to convince prospects that it’s worth buying.  Product managers can argue that roadmap items will improve renewal rates or generate new deals.  Development teams can proudly point to share prices or customer counts. We can have hard-nosed business discussions about where to invest.  (“Our validation interviews suggest we could sell another $20M-$25M if we deliver this SAP/Hana integration. Likely cost is $2M-$3M, so a 7x to 12x ROI.”)
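Sanity-checking that parenthetical ROI range:

```python
# The ROI range from the validation example above: incremental revenue
# divided by build cost, at the pessimistic and optimistic ends.
revenue_low, revenue_high = 20_000_000, 25_000_000
cost_low, cost_high = 2_000_000, 3_000_000

worst_case = revenue_low / cost_high    # ~6.7x, the quoted "7x"
best_case = revenue_high / cost_low     # 12.5x, the quoted "12x"
print(f"ROI range: {worst_case:.1f}x to {best_case:.1f}x")
```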

Implication: making software free devalues it for us as well as our customers.  We’ll always think of it as an expense rather than an opportunity for strategic advantage.


Sound Byte

Industrial hardware and enterprise software can both be great businesses. But they work very differently — their economics and development models and scorekeeping are in direct opposition. So executive teams need to retool some of their operating processes as well as their fundamental assumptions about how to successfully manage profitable products.
