Social Media’s Dirty Work: Contextualizing the Facebook Screening Controversy

In the past few days my inbox has seen an influx of forwards from friends and colleagues, all sharing links covering the recent revelation that Facebook outsources some of its dirtiest work, and that the firms handling Facebook’s outsourced labor pay exploitatively low wages for some of the most psychologically damaging digital work imaginable: the screening of user-uploaded content (posts, images and videos) to Facebook. My colleagues sent these links my way for good reason: this topic has been the primary subject of my own academic research for the past year and a half, ever since I discovered these content moderation practices through a small news story in the New York Times. After reading it, I became riveted both by the workers and the industry it portrayed, as well as by the implications of this practice in the greater digital media/social media ecology. How do these practices change our collective notions of participatory media and our understanding of the costs – financial and human – of using said media? What does it mean about the nature of our online participation, at one time heralded as a great direct-access equalizer, to know that content undergoes screening by unknown agents, who are often low-paid and low-status? What is it about the nature of social media that may encourage the creation and uploading of prurient, shocking or just-this-side-of-bearable content to be shared? Who benefits from such material? Who is put at risk? I wanted to explore, too, the impetus to conceal or render invisible these labor practices, virtually unknown to those outside the industry and yet an integral part of the production chain of user-generated digital media. These were just a few of a veritable laundry list of questions I generated based on my initial research on this topic.
Since then, I have been documenting and writing about these labor practices and the workers involved, mapping them both in terms of their material nature and from a theoretical perspective, in my dissertation, Behind the Screen: The Hidden Digital Labor of Video Content Moderators.


The image and headline that accompanied last week's Gawker story on contract Facebook content screeners

Meanwhile, the latest chapter in the popular press’s up-until-now scant coverage of the story transpired just last week, when Gawker’s Adrian Chen filed his post entitled, “Inside Facebook’s Outsourced Anti-Porn and Gore Brigade, Where ‘Camel Toes’ are More Offensive Than ‘Crushed Heads’.” Chen’s story focused on practices at Facebook which, he discovered, take place largely via the outsourcing and micro-labor market oDesk (see Brett Caraway‘s 2010 article, referenced below, for a nice overview of that company’s practices). Chen’s article is remarkable in a number of ways: first, he was able to focus on real-world examples shared with him by the workers themselves, most of whom are no longer working for Facebook via oDesk, and many of whom are located outside the US and in the so-called “Global South.” The workers’ accounts give concrete examples of, on the one hand, the kinds of egregious and trauma-inducing material they were exposed to and, on the other, wages that would seem to be nowhere near reasonable given the hazards of the work. Here it is interesting to note that much of the outsourced labor that takes place at sites like oDesk or at Amazon’s Mechanical Turk is undertaken on a per-item basis, so that workers are paid based on the number of items they are able to screen; I have taken to describing this practice as “digital piecework.” Secondly, Chen was able to provide the Gawker readership, thanks to the workers he interviewed, with a number of internal documents from oDesk, used for training and quality control by the content screeners. This type of material is generally not available for public view and is considered insider business knowledge; not making it public allows a company to maintain ambiguity about its screening and censoring practices via more general “user guideline”-style statements that give it plenty of room in which to operate when making subjective content screening decisions.
This angle was another particular focus of Chen’s piece, where he pointed out the strange hierarchy of material and how it is to be adjudicated by the screeners. While Chen’s piece, and subsequent takes on it in the blogosphere and in other sensationalistic coverage online, focuses on the admittedly disconcerting nature of the material Facebook rejects, the more compelling facts rest just below the surface.


For example, the oDesk internal documents provide a great deal of insight into the kinds of material that the low-paid contract laborers could be expected to see. Images and videos of animal abuse and mutilation, child abuse (physical and sexual), gore, disturbing racist imagery and so on are frequent enough that all have specialized protocols devoted to handling them – keeping them from making it to the site if they’re not yet there, and removing them if they are. As Wendy Chun noted in her 2008 monograph Control and Freedom, at the moment of the Internet’s commercialization, the great hysteria around pornography on the nascent Web, and the net at large, was almost invariably directed at the consumer/receiver of the pornography – particularly toward the notion that children might be exposed to said material. Yet, she points out, there was a curious lack of concern for those involved in its production. Similarly, in the case of Facebook and of similar social media sites that employ screening to shield end users from exposure to disturbing or damaging material, we must ask why it is therefore okay for workers, often in other parts of the world, to risk exposure over and over again to that very same kind of material. Is it only dangerous when viewed from the consumption side? Is the expectation that the meager wages offered for the work offset the damage it may cause? Or is this work considered disposable by nature, hidden by design, outsourced by convenience and the necessity of finding a labor pool who will work in such conditions for very low pay?

We have a responsibility to collectively get real about the true costs of these platforms. Just as we have learned that they aren’t free in terms of the labor we give away and the commodification of our own demographic, usage and other kinds of personal data [Andrejevic, Fuchs, et al.], they also have a much larger footprint than we may realize in terms of the human cost. Some of these costs are better known among academics and activists whose interests lie at the intersection of social justice, labor and digital information/media issues, so for many of us the recent revelations about Apple and its relationship with Foxconn, a Taiwanese company that is the largest private-sector employer in China and known for its unsavory labor practices, as recently widely publicized on the terrific public radio program “This American Life,” were not surprising or new. Likewise, the issue of e-waste has for years been identified by numerous journalists, scholars and activists, but frequently fails to register on the radar of mainstream users. Silicon Valley-based labor activist Raj Jayadev has commented on this peculiar collective myopia, saying: “A profound characteristic the popular psyche has accepted about the Information Age is the presumption that technology is produced by some sort of divine intervention so advanced that it requires no actual assembly or manufacturing, the very same features our predecessors in the Industrial Era found so essential. Yet every computer, printer, and technological wizardry in-between bought at the local Radio Shack is birthed in what is usually a very inglorious assembly line production site.” This myopia extends, too, to the end of life of these products, frequently hidden from Western consumers via dispersal around the globe in sites in China, the Philippines, India, Ghana and elsewhere in the world.

"Mr. Daisey and the Apple Factory," from This American Life, WBEZ.

Likewise, the myopia continues further into our interactions with our networked machines. Once they arrive to us, transmuted from the ether into material objects, we plug them in and then turn our collective cognitive dissonance to the Internet, where the predominating origin myth of unfettered possibility for democratic free expression, on the one hand, and the newer, unidirectional user-to-platform-to-dissemination media creation opportunities offered by “Web 2.0″ platforms, on the other, still structure our interactions. And yet, the unveiling of the existence of these content screeners and the practices in which they engage certainly challenges the end-user’s perceived relationship to the social media platform to which she or he is uploading content, by adding new unknown agents and actors whose agendas, motivations and mere existence are all unclear. What else might be up for reevaluation? What other practices are worthy of another critical glance to identify the human values and actions embedded within them, and how does recognition of them change our understandings of them? My colleague Safiya U. Noble, for one, is asking these kinds of questions right now about Google search in her own dissertation work. Indeed, it’s the erasure of these human traces, both physically and in a more abstract sense, that is so fascinating, and we must constantly ask whose benefit those erasures serve. As a mentor of mine once quipped, “‘Human-computer interaction’? I mean, what other kind is there?” Or, as the narrator of the This American Life segment mentioned above pointed out, prior to his trip to Shenzhen, a “special manufacturing zone” in China in which tons of products are made for export by people in factories, he had just assumed that his electronics were produced by robots – a fantasy much more comfortable than the reality he discovered.

Two girls are encouraged to fight at the behest of family members and onlookers, in order for video of the fight to be uploaded to YouTube. [Source: FL-based local CBS newscast.]

In the discussion of content moderation work, the vast commercialization and revenue-generating potential of the Internet is of great importance, too. For example, how much financial gain is involved in a given video “going viral”? What is the impetus in creating a viral video? Or the benefit to a site for hosting it? In short, does the nature of the platforms themselves have something to do with the disturbing material uploaded to them? In a recent post, I discussed the case of a YouTube user who used the platform to show an extremely disturbing video of her own beating at the hands of her father; at the time of my writing, that video had received almost 2 million views. Recently, another video caused a furor when a parent, apparently upset at his child for a perceived infraction involving a post to Facebook, created a menacing video rebuttal and uploaded that, punctuating his recording by shooting his daughter’s laptop several times with a pistol. The ensuing controversy propelled the video to millions of views, and offered micro-celebrity status for the father who made it. Similarly, Antoine Dodson of “Bed Intruder” fame was thrust into the limelight after a video of him on a local news program was turned into an episode of “Auto-Tune the News.” While Dodson entered into a partnership with the Auto-Tune producers that would provide him financial remuneration for his participation and use of his likeness and video, his videos nevertheless caused controversy as many alleged that the “humor” of the video traded on the linking of perceived attributes of Dodson’s to upsetting racial and gay stereotypes; critics levied the claim that, in effect, whatever Dodson’s intent (he was initially responding emotionally to the attempted rape of his sister), it was a case of YouTube minstrelsy. In one other situation, family members and other adults encircled two 13-year-old girls who were in a physical fight.
Instead of breaking them apart, numerous cell phones flipped open to capture the altercation as it happened, and one member of the sidelines shouted out, “Put it on YouTube!” Could the potential for financial and other types of gain be at the root? If so, the ability for a video to splash on the scene and shock, titillate or cause controversy would appear to be an asset to both the uploader and the site, and could explain the motivations of those people uploading the abhorrent content the low-wage screeners screen out. This analysis suggests that the work the screeners do and the content to which they are subjected is therefore far from exceptional and is, indeed, to be expected. At the end of the day, cute cat videos and documents of somebody’s wedding will drive only so many eyeballs.


In 2000, theorist Tiziana Terranova mapped the content production landscape thusly: “…the Internet is about the extraction of value out of continuous, updateable work, and it is extremely labor intensive. It is not enough to produce a good Web site, you need to update it continuously to maintain interest in it and fight off obsolescence. Furthermore, you need updateable equipment (the general intellect is always an assemblage of humans and their machines), in its turn propelled by the intense collective labor of programmers, designers, and workers”. Updated for today, we must add actors at both ends of Terranova’s Web content production chain: the social media users who constantly update and refresh sites’ content by uploading their own material or sharing that of others (almost exclusively for free, and for purposes of [re]distribution, which sites offer as a service or feature to the user), and the video content workers who must screen that content before it can go live.

Human intervention and immaterial labor are indeed a key, and yet frequently hidden, part of the production chain in online sites that rely upon user-generated uploaded content to populate and draw in their producers/users/consumers. Content moderators, whose labor and even mere existence are so frequently hidden from view, nevertheless serve an integral role in making decisions that affect the outcome of what content will be made available on a destination site. Given the financial implications (read: benefits) and attention a viral video can have, the tasks performed by these workers are far from inconsequential. The content screeners also view large amounts of material that never makes it to the site it was intended for, as they deem it unfit based on site guidelines, legal prohibition, or matters of taste – labor that will literally remain unseen to anyone who may visit the site. And while the moderators’ work may not be as physically demanding, dangerous or rigorous as that of those workers whose labor goes into IT manufacturing, it is often just as disregarded, unmentioned or unacknowledged – and has its own potential for danger, of the psychic variety, in terms of the nature of the material to which the moderators may be exposed.

References

Caraway, Brett. “Online Labour Markets: An Inquiry into oDesk Providers.” Work Organisation, Labour and Globalisation 4, no. 2 (2010): 111–125.

Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fiber Optics. The MIT Press, 2008.

Jayadev, Raj. “South Asian Workers in Silicon Valley: An Account of Work in the IT Industry.” In Sarai Reader 01: The Public Domain, edited by Raqs Media Collective and Geert Lovink, 1:167-170, 2001.

Terranova, Tiziana. “Free Labor: Producing Culture for the Digital Economy.” Social Text 18, no. 2 (2000): 33–58.

 

A version of this post appears at Illusion of Volition.