Facebook and the button of happiness

Recently, Facebook published a paper in the Proceedings of the National Academy of Sciences (warning: PDF) describing how they intentionally manipulated over half a million FB users’ news feeds to exclude either “happy” or “sad” posts, and then (using the same algorithm which detected happiness or sadness) measured whether the users’ own posts became happier or sadder as a result. Turns out: they did. Slate then wrote an article excoriating this as unethical, and today it seems to have blown up on Twitter a bit.

First, let us address and dismiss the issue of ethics. Was it unethical for Facebook to publish a paper on this? Yes, yes it was. The key issue here is informed consent: basically, you’re not allowed to do experiments on people without them knowing. It was wrong of Facebook, in liedra’s memorable wording, to have “gussied it up as ‘science’”, and PNAS ought to be asking a lot more questions about what they publish. This study fails the most homeopathically weak test imaginable of “informed consent”, unless we’re counting the Facebook EULA as giving informed consent to this sort of thing. @liedra did her PhD specifically on this subject, so I trust her views. But most of the upset I’ve seen about this has not been about academic standards and rules for research, or that a paper was published. It’s because Facebook did this thing at all.

I shall now pause to tell a little story. Supermarket loyalty cards are not there just to give you money off for being a regular customer. They’re there to let the shops build up a truly terrifying data warehouse and then mine it in extremely advanced ways to determine both what the average person wants to buy and what you specifically want to buy. Store design is a deeply complex, well-understood science, yet the public is almost unaware that it goes on. A store planner can tell you what every square foot of your store is for, how to maximise the amount of time customers spend in the shop, where to put the highest-profit goods to improve their sales over others, and, at bottom, how to make more money by how you lay everything out. Loyalty schemes do the same thing with your purchases. At a very obvious level, sending you vouchers for stuff you buy a lot can work, but the data mining drills way, way deeper than that. In one memorable case, the US retailer Target identified that a woman was pregnant from seemingly innocent purchases such as unscented lotion, and then sent her coupons for baby products… before she’d told her father about the pregnancy. These people are mathematically well-equipped, they’re able to deduce a startling amount about you that you might wish they didn’t know, they’re doing it with data you’ve given them voluntarily, and they’re doing it to make their own service more compelling and so make more money at your expense. Is this any different from Facebook?

There is an undercurrent of fatalism in some of the responses to the publication of this study. “Man, if you expect Facebook to do anything other than shove a live rattlesnake up your arse in pursuit of profit, you’re a naive child.” I don’t agree with that. We should expect more, demand more, hope for more from those who act as custodians of our data, whether the law requires it or not. (European data protection laws are considerably more constraining than those in the US, in my opinion correctly, but acting only as decently as the law requires is the bare minimum, and we should ask for better.) But I honestly don’t see the difference between what Facebook did and what Target did. Yes, someone with depression could be adversely affected (perhaps very severely) by Facebook making their main channel of friendly communication markedly less friendly. But consider if the pregnant woman who hadn’t told her father had had a miscarriage, and then received a book of baby milk vouchers in the mail.

This is not to minimise the impact of what Facebook did. What concerns me is that Facebook are not the only culprit here. They may not even be the most egregious culprit. The world of modern targeted advertising is considerably more sophisticated than most people suspect, and excoriating one firm for doing something that basically everybody else is doing too won’t stop it happening; it’ll just drive it further underground. Firms are going to mine my data. Indeed, I largely want them to; we’ve decided collectively that we want to fund things through advertising, so I might as well get adverts for things I actually want to buy. Facebook ran a study to discover whether they have the power to make people happier or sadder, and it turns out that they do. But they already had that power. To use it responsibly, they need to study it scientifically and understand it. Then they can use it for good things.

Imagine if Facebook could have a button which says “make the billion people who use Facebook each a little bit happier”. It’s quite hard to imagine a more effective, more powerful, cheaper way to make the world a little bit better than for that button to exist. I want them to be able to build the button of happiness. And then I want them to press it.

