This answer, despite having been edited one week ago, remains unsatisfactory and, frankly, a little petulant. I find it hard to believe that Facebook has truly attempted to answer people's concerns about privacy and the corporate use of their personal information in three sentences. Even worse, the first and third sentences mean basically nothing. As I wrote in my first blog post, Understanding Google's Information Empire, free services on the internet usually make their money by advertising to their users. The majority of Facebook's $7.8 billion in revenue comes from advertising. In the first quarter of 2012, 82%, or $872 million, came from advertising, while the other 18% came from other sources (such as FarmVille credits). Facebook does not "sell" user information because it can generate much steadier profits by working with advertising companies to target ads: an ad company creates an ad aimed at a demographic segment, and Facebook delivers that ad to that demographic. If Facebook were to actually sell its user data, it would lock itself out of the middleman position it plays so well.
So, no: selling its own user data would be inherently self-defeating for Facebook. But the second sentence of the answer is also troubling.
“You have control over how your information is shared, so we won’t share your personal information with people or services who you don’t want to have it.”
Here, Facebook attempts to balance privacy with user control. The thinking is that if the user is totally in control of how their information is distributed, Facebook and Facebook-enabled services cannot possibly be using that information in an unwanted way. It is up to the user to make decisions about how their information is used. But this philosophy of casting the user as both controller and scapegoat is inherently flawed. By putting all responsibility for controlling information on the user, Facebook essentially grants itself a free pass, one it does not deserve. First, Facebook controls the design and layout of the privacy controls; second, Facebook does not grant users access to all of the information it has collected about them.
When it first launched, most Facebook users felt positively about the "View As…" feature. The option allows users to view their profile as someone else would see it, offering a visual check that their privacy settings are adequate. However, in the most recent redesign, the option was moved to a much less visible location.
As you can see, this option, which is supposed to be a true-to-life preview of the profile as it appears to other users, has been shafted into a miscellaneous overflow menu. While the ellipsis button is becoming an increasingly common user-interface idiom for "more options," it might take users some time to realize a button exists there at all. But design goes deeper than individual user-interface components. An oft-criticized and never-addressed aspect of Facebook's privacy stance is its practice of opting users in by default. When you create a new profile, unless you are under 18, your posts are public (visible to everyone) by default. Automatic tag suggestion through facial recognition software is enabled by default. Facebook's new mobile messaging app, which it is gradually forcing users to adopt alongside the existing mobile app, attaches your location to every message you send by default, regardless of whether you have already opted out of location services in the counterpart app. So while Facebook offers several useful and comforting privacy settings, you might never know about them unless you go looking; and since you're opted in by default, they're effectively not there at all.
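The default-settings behavior described above can be sketched in a few lines. This is purely illustrative pseudologic based on the defaults listed in this post; the function and field names are hypothetical, not Facebook's actual code or schema:

```python
# Illustrative sketch of opt-in-by-default settings (hypothetical names).
# Per the post: posts default to public for adults, facial-recognition
# tag suggestions default to on, and the messaging app attaches location
# by default. The user must discover and change each one.
def default_privacy_settings(age: int) -> dict:
    return {
        # Public unless the account holder is a minor.
        "post_visibility": "friends" if age < 18 else "public",
        # Facial-recognition tag suggestions: enabled by default.
        "tag_suggestions": True,
        # Messaging app location attachment: on by default,
        # independent of the main app's location opt-out.
        "message_location": True,
    }


adult = default_privacy_settings(25)
minor = default_privacy_settings(16)
print(adult["post_visibility"])  # public
print(minor["post_visibility"])  # friends
```

The point of the sketch is that every privacy-protective value requires an explicit action by the user; doing nothing leaves everything shared.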
Part of the problem that must be highlighted is that our online presences are nebulous in the extreme. It's no surprise that big companies have devoted entire server farms to understanding us. But part of the effectiveness of the default opt-in strategy is that it's really, really hard to know exactly what you are sharing with whom at any given time. Furthermore, there is no privacy setting that prevents Facebook itself from storing information about its users, even if they have tried to change it. In 2012, a group of European citizens used EU data protection laws to obtain (more or less complete) copies of their user data from Facebook. Among other things, they found that Facebook persistently stores some information even after a user has deleted it, including personally identifying information such as name and address. When you attempt to delete a tag from a photo, for example, that tag is not actually deleted. Instead, it is simply marked "inactive" and no longer shown. When you delete messages, those messages are marked "deleted" and no longer shown. How can Facebook claim to give its users control over their privacy if that information is neither accessible nor editable?
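The storage behavior those European users uncovered is a classic "soft delete." A minimal sketch of the pattern, assuming hypothetical class and field names (Facebook's real schema is not public), shows how "deleting" can hide a record without destroying it:

```python
# Sketch of the soft-delete pattern described above. Names are
# illustrative, not Facebook's actual code. The key behavior: a
# user-initiated "delete" only flips a visibility flag; the record
# (who was tagged, in which photo) remains in storage.

class PhotoTag:
    def __init__(self, tagged_user: str, photo_id: int):
        self.tagged_user = tagged_user
        self.photo_id = photo_id
        self.active = True  # soft-delete flag

    def delete(self):
        # What the user experiences as deletion: the tag vanishes
        # from the photo, but this object is never destroyed.
        self.active = False


def visible_tags(tags):
    # The UI renders only active tags, so a soft-deleted tag looks
    # identical to a truly deleted one -- from the user's side.
    return [t for t in tags if t.active]


tags = [PhotoTag("alice", 101), PhotoTag("bob", 101)]
tags[0].delete()

print([t.tagged_user for t in visible_tags(tags)])  # only "bob" is shown
print(len(tags))  # but both records still exist in storage
```

From the outside, the interface offers no way to distinguish this from real deletion, which is exactly why the subject-access requests were needed to reveal it.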