
Does Facebook Sell My Information?

[Screenshot: Facebook's three-sentence Help Center answer to “Does Facebook sell my information?”]

This answer, despite having been edited one week ago, remains unsatisfactory and, frankly, a little petulant. I find it hard to believe that Facebook has truly tried to answer people’s concerns about privacy and the corporate use of their personal information in three sentences. Even worse, the first and third sentences mean basically nothing. As I wrote in my first blog post, Understanding Google's Information Empire, free services on the internet usually make their money by advertising to their users. The majority of Facebook's $7.8 billion in revenue comes from advertising: in the first quarter of 2012, 82%, or $872 million, came from advertising, while the other 18% came from other sources (payments from games like FarmVille, for example). Facebook does not “sell” user information because it can generate much steadier profits by working with advertising companies to target ads. An ad company creates an ad aimed at a demographic segment, and Facebook delivers that ad to that demographic. If Facebook actually sold its user data, it would lock itself out of the middleman position it plays so well.
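To make that middleman point concrete, here is a minimal sketch of demographic targeting. All of the names and fields are hypothetical (nothing here is Facebook's actual system); the point is that the advertiser only hands over a targeting spec, the matching happens on the platform's side, and what comes back is impressions rather than profiles.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: int
    age: int
    country: str
    interests: set

@dataclass
class AdCampaign:
    ad_id: str
    min_age: int
    max_age: int
    countries: set
    interests: set

def matches(profile, campaign):
    """The platform checks its own data against the advertiser's targeting spec."""
    return (campaign.min_age <= profile.age <= campaign.max_age
            and profile.country in campaign.countries
            and bool(profile.interests & campaign.interests))

def run_campaign(campaign, users):
    """Deliver the ad internally; report only an impression count back.
    The raw profiles never leave the platform."""
    shown_to = [u for u in users if matches(u, campaign)]
    return len(shown_to)

users = [UserProfile(1, 24, "US", {"music", "games"}),
         UserProfile(2, 35, "DE", {"cooking"})]
campaign = AdCampaign("ad-001", 18, 30, {"US"}, {"games"})
print(run_campaign(campaign, users))  # -> 1 impression; no profile data changed hands
```

Selling the raw profiles would collapse this arrangement: once an advertiser has the data, it no longer needs the middleman.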

 

So, no, it would be inherently self-defeating for Facebook to sell its own user data. But the second sentence of Facebook's answer is also troubling.

 

“You have control over how your information is shared, so we won’t share your personal information with people or services who you don’t want to have it.”

 

Here, Facebook attempts to balance privacy with user control. The thinking is: if the user is totally in control of how their information is distributed, Facebook or Facebook-enabled services cannot possibly be using that information in an unwanted way. It is up to the user to make decisions about how their information is used. But this philosophy of casting the user as both controller and scapegoat is inherently flawed. By putting all responsibility for controlling information on the user, Facebook essentially grants itself a free pass, one it does not deserve. First, Facebook controls the design and layout of the user controls; second, Facebook does not grant users access to all of the information that has been collected about them.

 

When it was first launched, the “View As…” feature was well received by most Facebook users: it lets them view their profile as someone else sees it, offering a visual check that their privacy settings are adequate. In the most recent redesign, however, the option was moved to a much less visible place.

[Screenshot: the “View As…” option, now tucked inside the “…” overflow menu on the profile page]

As you can see, this option, which is supposed to be a true-to-life preview of the profile as it appears to other users, has been shafted into a miscellaneous overflow menu. While the ellipsis button is becoming an increasingly common user-interface idiom for “more options”, it might take users some time to even realize a button exists there at all. But design goes deeper than individual components of the user interface. An oft-criticized and never-addressed aspect of Facebook’s privacy stance is the practice of opting users in by default. When you create a new profile, unless you are under 18, your posts are public (visible to everyone) by default. Automatic tag suggestion through facial-recognition software is enabled by default. Facebook’s new mobile messaging app, which it is gradually forcing users to adopt alongside the existing mobile app, attaches your location to every message you send by default, regardless of whether you already opted out of location services in the counterpart app. So, while Facebook offers several useful and comforting privacy settings, you might never know about them unless you go looking; and since you’re opted in by default, they’re effectively not even there.
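A small sketch of that opt-in-by-default pattern, using made-up setting names rather than Facebook's real ones: whatever the user never touches falls back to the most permissive value.

```python
# Hypothetical defaults: each privacy-relevant flag starts in its most
# permissive state, mirroring the "public unless you go looking" pattern.
PERMISSIVE_DEFAULTS = {
    "post_visibility": "public",          # everyone can see new posts
    "tag_suggestions": True,              # facial-recognition tagging on
    "attach_location_to_messages": True,  # location sent with each message
}

def effective_settings(user_overrides):
    """Anything the user has not explicitly changed keeps the default."""
    settings = dict(PERMISSIVE_DEFAULTS)
    settings.update(user_overrides)
    return settings

# A user who never opens the settings page shares everything:
print(effective_settings({}))
# Only explicit action narrows the exposure:
print(effective_settings({"post_visibility": "friends"}))
```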

 

Part of the problem that must be highlighted is that our online presences are nebulous in the extreme. It’s no surprise that big companies have devoted entire server farms to understanding us. But part of the effectiveness of the default opt-in strategy is that it’s really, really hard to know exactly what you are sharing with whom at any given time. Furthermore, there is no privacy setting that prevents Facebook itself from storing information about its users, even information they have tried to remove. In 2012, a group of European citizens used EU data protection laws to obtain (more or less complete) copies of their user data from Facebook. Among other things, they found that Facebook persistently stores some information even after it has been deleted by a user, including personally identifying information such as name and address. When you attempt to delete a tag from a photo, for example, that tag is not actually deleted. Instead, it is simply marked “inactive” and not shown. Likewise, deleted messages are marked “deleted” and not shown. How can Facebook claim to give its users control over their privacy if that information is neither accessible nor editable?
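What the europe-v-facebook documents describe is, in database terms, a soft delete: the record is flagged rather than removed. A minimal sketch of that pattern, with a hypothetical tag object rather than Facebook's actual schema:

```python
from datetime import datetime

class PhotoTag:
    def __init__(self, photo_id, tagged_user):
        self.photo_id = photo_id
        self.tagged_user = tagged_user
        self.active = True           # controls whether the tag is shown
        self.deactivated_at = None

    def remove(self):
        """A soft delete: the tag disappears from the UI,
        but the underlying record is never dropped."""
        self.active = False
        self.deactivated_at = datetime.utcnow()

tag = PhotoTag(photo_id=42, tagged_user="alice")
tag.remove()
# Hidden from every user, yet still stored and still queryable:
print(tag.tagged_user, tag.active, tag.deactivated_at)
```

A hard delete would drop the record entirely; flagging it instead is what lets “removed” tags and messages survive in copies of the data like the ones the European group obtained.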

 

Sources:

 

https://www.facebook.com/help/152637448140583/

http://robinsfox.com/blog/facebook_privacy_view_as_feature/

http://www.europe-v-facebook.org/EN/Data_Pool/data_pool.html

http://www.europe-v-facebook.org/removed_content.pdf

 


1 comment

Sam,

The critical position you take at the beginning against Facebook's legalese is an interesting one. But even more interesting, I think, is the disparity between the worry users express over the privacy of their data and their actual usage of internet services like Facebook. Most express discomfort and other privacy concerns when asked about the sharing of personal data in private, corporate clouds. Data being shared means users have no control over the replication of the constituent pieces of their virtual identity across the network. It is obvious how that might lead to real-world repercussions like identity theft, and equally obvious why Facebook asserts they will "never" sell your "personal information". But in reality, the anxiety over data one would rather keep private comes too late in the game.

The truth is, as you pointed out, the data is already in the hands of a for-profit corporation. Despite the content being "user-generated", the user owns little more than the intellectual rights. The contractual agreement between a user and Facebook upon signing up is that personal data will be provided in exchange for Facebook's services. It doesn't matter what privacy settings the user chooses on Facebook, because you can't ever elect to block out Facebook itself. The privacy settings only give the user power over other Facebook users, which is basically the same as choosing to post or not to post in the first place.

“Facebook attempts to balance privacy with user control”

I would argue Facebook attempts not so much a "balancing" act as large-scale obfuscation. With their privacy announcement, Facebook is deliberately hiding behind a veil of rhetoric which aims to mitigate the anxieties users might have about outside actors: the companies imagined to be soliciting information about users from Facebook, the individuals tracking internet activity in order to steal a user's identity. What Facebook does not do here is excuse itself from the privacy convolution, which is something I find particularly interesting. As you pointed out, it leaves a giant hole in the "your information is safe with us" rhetoric and grants Facebook a "free pass". Not only does Facebook dictate which controls are available to the user and how visible their information is, it also keeps its own end of the data collection out of the picture. Facebook never tells the user how or where their data is stored, nor how it might be used by Facebook. In no part of this equation is the user given control or command over their privacy; yet the rhetoric clearly intends to obfuscate the true relationship of the user to their data (data as a constituent of the platform more than of the user).
