Product Consequences and a Product Code of Ethics?

Prompted by discussions with Jumana Abu-Ghazaleh, Phil Wolff and John Sebes, I’ve been thinking about tech products that are doing harm in the world, intentionally or otherwise, and about whether a code of ethics or other agreement among product leaders/product managers would be helpful. This post is more about framing the problem as I see it than about proposing solutions, ending with a small call to action. Let’s break it down a bit.

1. Unintended Consequences and Product Misuse

We’ve relearned over the last few years that products can have unintended side effects, and that bad actors can misuse good products. We keep forgetting that technology is not neutral, and usually includes some social engineering. For instance:

In our comfortably furnished ivory tech towers, we focus on “happy path” scenarios, with little thought to how malicious folks might abuse our products/services, or to the ways non-typical users could get into trouble. That’s despite having watched generations of social networks hijacked by trolls, network security tools subverted, and IoT doorbells used to track neighbors. As product professionals, we’re often not taking the time to anticipate bad outcomes.

What To Do?

We (as product leaders and product managers) should be thinking about misuse and unintended consequences very early in the product conception cycle, and again before scaling up. Alongside business models and target audiences and success metrics, perhaps we need to make time to “war game” or “red team” our proto-designs for potential problems. A conceptual checklist might include:

  • How might bad actors exploit our tool / platform / product / service?
  • Could it become a platform for hate, disinformation, online shaming, or other bad behaviors? Can we define unacceptable content for this application? How might we spot inappropriate content and bad actors fast enough to make meaningful decisions (e.g. remove content, flag NSFW, ban users)?
  • If “voting things up” is included, can a bot or malicious person create puppets that overwhelm real opinions, for instance with fake travel recommendations? (See Randy Farmer and Bryce Glass’s 2010 book, Building Web Reputation Systems, and the sketch just after this list.)
  • Are we storing users’ personal information, beyond legally defined PII? Have we anticipated the consequences of hacks or leaks, for instance for actresses who’ve uploaded revealing photos? Do we have obligations to warn users or to limit what we collect?
  • Are we enabling employers or governments to monitor new kinds of activities? If so, what obligations does that create for us?
  • Are we making unsubstantiated or dangerous claims about outcomes and efficacy? If someone uses our product as directed, what are possible bad outcomes?
  • What problems or side effects arise if our product doesn’t work? IoT garage door openers might stick in the “open position”; AI-based smoke alarms could make mistakes; cloud backups of consumer medical records might be lost. What’s the harm to paying customers if our entire virtual data center disappears?
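
To make that checklist concrete, here’s a minimal sketch (in Python) of the kind of screen a “red team” exercise might prototype for the vote-manipulation question above: flagging accounts that are suspiciously new or that vote in implausible bursts. The Vote record, field names, and thresholds are illustrative assumptions for this sketch, not anyone’s production fraud model.

    # Illustrative sketch only: a crude screen for puppet-style voting.
    # The Vote record, field names, and thresholds are assumptions, not a real fraud model.
    from collections import defaultdict
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Vote:
        voter_id: str
        item_id: str
        cast_at: datetime
        account_created: datetime

    def flag_suspect_voters(votes, min_account_age_days=7, max_votes_per_hour=20):
        """Return voter_ids whose behavior looks like puppet activity:
        brand-new accounts, or implausibly fast bursts of votes."""
        by_voter = defaultdict(list)
        for v in votes:
            by_voter[v.voter_id].append(v)

        suspects = set()
        for voter_id, vs in by_voter.items():
            vs.sort(key=lambda v: v.cast_at)
            # Heuristic 1: the account was created just before it started voting.
            if vs[0].cast_at - vs[0].account_created < timedelta(days=min_account_age_days):
                suspects.add(voter_id)
                continue
            # Heuristic 2: more votes in some one-hour window than a human plausibly casts.
            for i, first in enumerate(vs):
                window = [w for w in vs[i:] if w.cast_at - first.cast_at <= timedelta(hours=1)]
                if len(window) > max_votes_per_hour:
                    suspects.add(voter_id)
                    break
        return suspects

Even a toy screen like this forces the product questions hiding underneath: what counts as “too new” or “too fast,” and what actually happens to flagged accounts.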

As product folks, we should raise these issues ourselves — make them more important to our product/design/development teams. Reduce the surprises. Remind our optimistic/sheltered technical groups that bad people can do ugly things with good products. But that will take effort, and pull folks away from other #1 priorities.

[Let me know if you’d like to pilot a “red team” exercise for your product.]

2. Owned Consequences

On the other hand, I see a lot of product segments that are inherently problematic, especially in financial services, ad tech and user-generated content/social media. The potential for bad outcomes and injury is rooted in companies’ core business models and product goals: in success metrics or economics or eyeball counts that naturally lend themselves to abuse. To me, the root issue is often a split between “users” and “payers,” where ever-more-clever technologists find ever-more-subtle ways of collecting and selling personal information to third parties with anti-user intentions. Or a split between employers and employees, where companies deploy procedures and tech that disempower workers.

  • Social networks make their money by selling our very detailed online behavior to advertisers (and governments and cybercriminals and trolls). This is fundamental to their business model, not a side effect, and intentionally hidden from view — notoriously hard for consumers to control or understand. Happily, we’re seeing a few social networks (Twitter) do bits of self-regulation to protect end users even when it costs some top-line revenue.
  • Some consumer finance apps/sites provide personal credit reports and advice to consumers about improving their credit scores, but make their money on credit card recommendation fees. Their internal OKRs are tuned toward pointing consumers with poor credit toward more credit, which seems disingenuous and possibly harmful.
  • User-generated video sites get paid on volume. So algorithms that recommend ever-more-shocking videos will naturally be more profitable. What’s the next angry/sexy/distorted/viral video that will be hard to look away from and harder to forget?
  • AdTech has been a race to the bottom, fighting falling prices with more precise ad targeting. In a battle for corporate survival, do we expect strong internal controls to avoid inferences about political affiliation or health status or religion or where we sleep at night?
  • E-cigarette companies engineer their products to be addictive, with known (and yet unknown) health risks. They’ve followed traditional tobacco companies in promoting bad science, lobbying heavily, and hiding behind consumer choice.
  • Related… app/game developers have decades of experience getting us hooked, reinforcing behaviors as we play (just) one more round or check our status (just) one more time. We’ve invested in digital addiction. What’s the right balance when our investors inspect engagement statistics at every Board meeting?
  • New fintech players offer to fund part of the down payment on your new house. But they are strongly incentivized to capture as much of your eventual appreciation as they can, and their data analytics outthink almost all homeowners. Can we know whether consumers are getting a reasonable deal? Should these players have some “fairness” obligation to homeowners as well as investors? This echoes the mortgage-backed securities and CDO crisis of 2007-08, where real people were collateral damage, losing houses and retirements.
  • Machine learning models trained on decades of mortgage data may automatically reinforce historical redlining and bad lending practices; our previous bad behavior shapes new algorithms. (A minimal bias-check sketch follows this list.)
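
On that last point, here’s a minimal sketch (again in Python, using pandas) of a disparate-impact check, the so-called “80% rule,” run against a lending model’s approval decisions. The column names, group labels, and the 0.8 threshold are illustrative assumptions, and passing such a check is nowhere near proof of fairness; it’s just one cheap way to notice when history is leaking into a new algorithm.

    # Illustrative sketch only: a disparate-impact ("80% rule") check on model decisions.
    # Column names, group labels, and the 0.8 threshold are assumptions for this sketch.
    import pandas as pd

    def disparate_impact_ratio(decisions: pd.DataFrame,
                               group_col: str = "group",
                               protected: str = "protected",
                               reference: str = "reference",
                               decision_col: str = "approved") -> float:
        """Approval-rate ratio between the protected and reference groups.
        Values well below ~0.8 are a common warning sign, not a verdict."""
        rates = decisions.groupby(group_col)[decision_col].mean()
        return float(rates[protected] / rates[reference])

    # Hypothetical audit before shipping a new lending model:
    # audit = pd.DataFrame({"group": group_labels, "approved": model.predict(features)})
    # if disparate_impact_ratio(audit) < 0.8:
    #     hold_the_release_for_review()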

So it’s not that Facebook set out to violate our privacy or help foreign actors undermine democracy, but in retrospect those outcomes are unsurprising. And Facebook continues to make money from them. What’s different for product leaders/product managers is that we may be pushing against our company’s fundamental financial goals, raising concerns that oppose our top-line metrics. If (for example) we warn consumers with poor credit about the dangers of more credit cards and that lowers our conversion rates, we’ll get tough questions at our next Board meeting.

That illustrates the slipperiness of intentions and decisions early in the product conception cycle, and the need for intellectually honest examination. Long before we start the development cycle or drop MVPs on unsuspecting consumers, have we thought hard about intended and unintended uses? Imagine talking with your CEO about whether your product really serves the needs of end users as well as buyers: would that be a friendly or combative conversation?

I think that forces us to make personal decisions about our employers: whether to stay and work for change from the inside, balancing value for all participants and reducing bad outcomes, or to leave for companies with cleaner business models. We can vote with our feet. I’ve had this discussion with dozens of product folks considering quitting their current companies. As Arlo Guthrie once asked, “You want to know if I’m moral enough to join the Army, burn women, kids, houses and villages after being a litterbug?”

Privacy Policies Aren’t Enough

Most of our products include a link to some “Privacy” or “Terms of Use” doc with 10-50 pages of legally obfuscated text. We know it’s incomprehensible to most humans, and that anything fishy is hidden on page 18. This may protect us from lawsuits, but it avoids real responsibility. We might instead apply a sunshine test: if ProPublica or The New York Times accurately described how our company makes money and shares consumer information, would we be embarrassed? For instance, my mobile carrier (Verizon) sells my real-time location data to questionable third parties, but I can’t seem to opt out, or even get answers about who they sell it to. Legalese trumps clarity.

Or perhaps a friends-and-family test: would I recommend this product to my not-very-tech-savvy uncle, or give my preschooler unlimited/unmonitored access to this content? What guardrails would I want before endorsing this to a much wider audience? (Some tech parents are limiting their kids’ screen time.)

Phil Wolff’s construct is a product management moral maturity ladder that widens our allegiance and our duty of care beyond investors and company leadership to include users, buyers, ourselves, our product teams, other employees, and the broader society. It would be interesting to see how different people would sort such a list.

A Small Call To Action

This seems hard to fix. Tech product managers don’t have a regulatory body to de-certify bad actors, as doctors and lawyers do. And we’re deeply involved in new product ideation/validation, with strong financial incentives to work on “winning” products with scalable business models for investors who fund growth at any (social) cost… which invites money-driven self-justification. But now seems like a good time to push for broader accountability within product management and within the C-suite, and to stand up for our users and broadly defined stakeholders as well as our investors.