On ethical design, and the Ethical Design Handbook

We live in a world where websites and apps mostly make people unhappy. Buying or ordering or interacting with anything at all online involves a thousand little unpleasant bumps in the road, a thousand tiny chips struck off the edges of your soul. “This website uses cookies: accept all?” Videos that appear over the thing you’re reading and start playing automatically. Grant this app access to your contacts? Grant this app access to your location? “Sign up for our newsletter”, with a second button saying “No, because I hate free things and also hate America”. Better buy quick — there’s only 2 tickets/beds/rooms/spaces left! Now now now!

This is not new news. Everyone already knows this. If you ask people — ordinary, real people, not techies — about their experiences of buying things online or reading things online and say, was this a pleasant thing to do? were you delighted by it? then you’re likely to get a series of wry headshakes. It’s not just that everyone knows this, everyone’s rather inured to it; the expectation is that it will be a bit annoying but you’ll muddle through. If you said, what’s it like for you when your internet connection goes down, or you want to change a flight, they will say, yeah, I’ll probably have to spend half an hour on hold, and the call might drop when I get to queue position 2 and I’ll have to call again, and they’ll give me the runaround; the person on the call will be helpful, but Computer will Say No. Decent customer service is no longer something that we expect to receive; it’s something unusual and weird. Even average non-hostile customer service is now so unusual that we’re a bit delighted when it happens; when the corporate body politic rouses itself to do something other than cram a live rattlesnake up your bottom in pursuit of faceless endless profit then that counts as an unexpected and pleasant surprise.

It’d be nice if the world wasn’t like that. But one thing we’re a bit short of is the vocabulary for talking about this; rather than the online experience being a largely grey miasma of unidentified minor divots, can we enumerate the specific things that make us unhappy? And for each one, look at how it could be done better and why it should be done better?

Trine Falbe, Kim Andersen, and Martin Michael Frederiksen think maybe we can, and have written The Ethical Design Handbook, published by Smashing Media. It’s written, as they say, for professionals — for the people building these experiences, to explain how and why to do better, rather than for consumers who have to endure them. And they define “ethical design” as businesses, products, and services that grow from a principle of fairness and fundamental respect towards everyone involved.

They start with some justifications for why ethical design is important, and I’ll come back to that later. But then there’s a neat segue into different types of unethical design, and this is fascinating. There’s nothing here that will come as a surprise to most people reading it, especially most tech professionals, but I’d not seen it enumerated quite this baldly before. They describe, and name, all sorts of dark patterns and unpleasant approaches which are out there right now: mass surveillance, behavioural change, promoting addiction, manipulative design, pushing the sense of urgency through scarcity and loss aversion, persuasive design patterns; all with real examples from real places you’ve heard of. Medium hiding email signup away so you’ll give them details of your social media account; Huel adding things to your basket which you need to remove; Viagogo adding countdown timers to rush you into making impulsive purchases; Amazon Prime’s “I don’t want my benefits” button, meaning “don’t subscribe”. Much of this research already existed — the authors did not necessarily invent these terms and their classifications — but having them all listed one after the other is both a useful resource and a rather terrifying indictment of our industry and the manipulative techniques it uses.

However, our industry does use these techniques, and it’s important to ask why. The book kinda-sorta addresses this, but it shies away a little from admitting the truth: companies do this stuff because it works. Is it unethical? Yeah. Does it make people unhappy? Yeah. (They quote a rather nice study suggesting that half of all people recognise these tricks and distrust sites that use them, and the majority of those go further and feel disgusted and contemptuous.) But, and this is the kicker… it doesn’t seem to hurt the bottom line. People feel disgusted or distrusting and then still buy stuff anyway. I’m sure a behavioural psychologist in the 1950s would have been baffled by this: if you do stuff that makes people not like you, they’ll go elsewhere, right? Which is, it seems, not the case. Much as it’s occasionally easy to imagine that companies do things because they’re actually evil and want to increase the amount of suffering in the world, they do not. There are no actual demons running companies. (Probably. Hail to Hastur, just in case.) Some of it is likely superstition — everyone else does this technique, so it’ll probably work for us — and some of it really should get more rigorous testing than it does: when your company added an extra checkbox to the user journey saying “I would not dislike to not not not sign not up for the newsletter”, did purchases go up, or just newsletter signups? Did you really A/B test that? Or just assume that “more signups, even deceptive ones = more money” without checking? But they’re not all uninformed choices. Companies do test these dark patterns, and they do work. We might wish otherwise, but that’s not how the world is; you can’t elect a new population who are less susceptible to these tricks or more offended by them.
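That “did you really A/B test that?” question is answerable with basic statistics: compare the purchase conversion rate of the dark-pattern variant against the control, not just the signup rate. A minimal sketch, using a standard two-proportion z-test and entirely made-up numbers (the figures below are hypothetical, chosen so the deceptive checkbox lifts signups significantly while leaving purchases flat):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    really different from control A's, or just noise?
    |z| > 1.96 means significant at the usual 95% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical trial: 10,000 visitors per arm.
# The sneaky pre-ticked checkbox boosts newsletter signups...
z_signups = two_proportion_z(900, 10_000, 1200, 10_000)
# ...but does it actually move purchases?
z_purchases = two_proportion_z(500, 10_000, 510, 10_000)

print(round(z_signups, 2))    # → 6.92  (clearly significant)
print(round(z_purchases, 2))  # → 0.32  (indistinguishable from noise)
```

The point being: “more signups” and “more money” are different metrics, and only the second one justifies the trick even on its own amoral terms.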

And thereby hangs, I think, my lack of satisfaction with the core message of this book. It’s not going to convince anyone who isn’t already convinced. This is where we come back to the justifications mentioned earlier. “[P]rivacy is important to [consumers], and it’s a growing concern”, says the book, and I wholeheartedly agree with this; I’ve written and delivered a whole talk on precisely this topic at a bunch of conferences. But I didn’t need to read this book to feel that manipulation of the audience is a bad thing: not because it costs money or goodwill, but just because it’s wrong, even if it earns you more money. It’s not me you’ve gotta convince: it’s the people who put ethics and goodwill on one side of the balance and an increased bottom line on the other side and the increased bottom line wins. The book says “It’s not good times to gamble all your hard work for quick wins at the costs of manipulation”, and “Surveillance capitalism is unethical by nature because at its core, it takes advantage of rich data to profile people and to understand their behaviour for the sole purpose of making money”, but the people doing this know this and don’t care. It in fact is good times to go for quick wins at the cost of manipulation; how else can you explain so many people doing it? And so the underlying message here is that the need for ethical design is asserted rather than demonstrated. Someone who already buys the argument (say, me) will nod their way through the book, agreeing at every turn, and finding useful examples to bolster arguments or flesh out approaches. Someone who doesn’t already buy the argument will see a bunch of descriptions of a bunch of things that are, by the book’s definition, unethical… and then simply write “but it makes us more money and that’s my job, so we’re doing it anyway” after every sentence and leave without changing anything.

It is, unfortunately, the same approach taken by other important but ignored technical influences, such as accessibility or open source or progressive enhancement. Or, outside the tech world, environmentalism or vegetarianism. You say: this thing you’re doing is bad, because just look at it, it is… and here’s all the people you’re letting down or excluding or disenfranchising by being bad people, so stop being bad people. It seems intuitively obvious to anyone who already believes: why would you build inaccessible sites and exclude everyone who isn’t able to read them? Why would you build unethical apps that manipulate people and leave them unhappy and disquieted? Why would you use plastic and drive petrol cars when the world is going to burn? But it doesn’t work. I wish it did. Much as the rightness and righteousness of our arguments ought to be convincing in themselves, they are not, and we’re not moving the needle by continually reiterating the reasons why someone should believe.

But then… maybe that’s why the book is named The Ethical Design Handbook and not The Ethical Design Manifesto. I went into reading this hoping that what the authors had written would be a thing to change the world, a convincer that one could hand to unethical designers or ethical designers with unethical bosses and which would make them change. It isn’t. They even explicitly disclaim that responsibility early on: “Designers from the dark side read other books, not this one, and let us leave it at that,” says the introduction. So this maybe isn’t the book that changes everyone’s minds; that’s someone else’s job. Instead, it’s a blueprint for how to build the better world once you’ve already been convinced to do so. If your customers keep coming back and saying that they find your approach distasteful, if you decide to prioritise delight over conversions at least a little bit, if you’re prepared to be a little less rich to be a lot more decent, then you’ll need a guidebook to explain what made your people unhappy and what to do about it. In that regard, The Ethical Design Handbook does a pretty good job, and if that’s what you need then it’s worth your time.

This is an important thing: there’s often the search for a silver bullet, for a thing which fixes the world. I was guilty of that here, hoping for something which would convince unethical designers to start being ethical. That’s not what this book is for. It’s for those who want to but don’t know how. And because of that, it’s full of useful advice. Take, for example, the best practices chapter: it has some specific wisdom about cookie warnings. In particular, it points out that you don’t need cookie warnings at all if you’re not being evil about what you plan to allow your third-party advertisers to do with the data. This is pretty much the first place I’ve seen that written down, even though it’s true. And this is useful in itself: it’s something to show one’s boss or one’s business analyst. If the word has come down from on high to add cookie warnings to the site then pushback on that from design or development is likely to be ignored… but being able to present a published book backing up those words is potentially valuable. Similarly, the book goes to some effort to quantify what ethical design is, by giving scores to what you do or don’t do, and this too is a good structure on which to hang a new design and to feed into the next thing your team builds. So, don’t make the initial mistake I did, of thinking that this is a manifesto; this is a working book, filled with how to actually get the job done, not a philosophical thinkpiece. Grab it and point at it in design meetings and use it to bolster your team through their next project. It’s worth it.

