Full disclosure: I am an inactive member of Facebook following a really bad summer of updates. I just ... couldn't emotionally handle it, which is funny given the news that came out over the weekend and was widely discussed on my social network of choice, Twitter. In fact, I feel like one of those infamous Twitter wars was on the brink of breaking out over the study. It's gotten so big that it's even made it to Bloomberg.
The shorthand is: Facebook manipulated the news feeds of a bit over 600,000 users to show them either more negative or more positive content, to see if they could affect their emotions. The research findings were, look, we can! The IRB situation is sketchy, in that it seems they extended an existing IRB approval, and they stated that by agreeing to the TOS, Facebook users had consented to be research subjects. There was no opt-in. No opt-out. No debrief. I don't think anyone knows whether they were or were not part of the study. The published study that caused the mess looked at two weeks of data. If all a user had to do was agree, though, the skeptical side of me wonders whether that means they only did this for two weeks, or whether, after two weeks, they determined that it worked and put it into practice (for whatever practical reason made them feel the need to do the study in the first place).
Anyway, the conversation has been heated because this touches so many disciplines. Everyone has a toe in the water of the implications of this research, which is both good and bad. I am happy to see a bigger conversation happening around these born-digital and algorithm-based studies. I am excited to see people talking about humans, and the experience of human life and emotions. I'm disheartened to see people dismissing the 600k people because it is such a small percentage of the user base. I love that a wider conversation on ethics is starting in terms of methods, consent, IRB, and research that toes the line between corporate and academic research (because more and more things are turning into both).
What I am sad about, though, is my own thing. I'm not seeing a larger conversation on the Terms of Service as the site of consent. For me, this is as deep as my toe goes for now. The other stuff is overwhelming because people are ... highly invested (and this is a good thing, as long as we don't talk over each other and instead talk to each other).
So, back to the Terms of Service. It's an issue. It is a legally binding contract that the end user has no recourse to negotiate. If you want to use these services, you agree to their terms. You, the end user, have no terms and no right to demands or disclosure outside of what's written in those terms. Yet often we don't read them. And, as this case is showing, when companies act on the promises made in those contracts, we freak out. So, my hopes from the Facebook study fallout are:
- that we can have a bigger conversation on the ethics of using these spaces and tools that pull us into these contracts
- that as researchers outside of these organizations, we understand that they will have more data than we will ever have access to
- that we can come up with methods for academic and corporate research that are ethical, collaborative, and cross-disciplinary, without putting participants at risk without their knowledge
- that we can have a bigger conversation on the role of data, the algorithm, and the human
I'd like to share a site, Terms of Service; Didn't Read. It's a good one.
I'd also love to hear what, if anything, the Facebook study has people questioning/asking/contemplating.