Artificial Intelligence and Labor
Algorithms seem to be computer science’s current wunderkind in the eyes of pop science. Netflix’s computational machinery constantly crawls its users’ behavior to feed them a never-ending stream of new content. High-speed trading bots whiz through thousands of transactions before a human trader can even process the information. Somewhere, somehow, the NSA watches.

Unsurprisingly, many people have said that our lives are ruled by algorithms. We live in an algorithmic culture. Whether you see them as democratic and liberating or sinister and oppressive, you can’t deny the power of algorithms.

I want to push back against this view. Echoing the concerns of Ian Bogost, writing in The Atlantic, I’m worried that the (near religious) obsession with algorithms is corporate marketing entering the mainstream. As Bogost describes it:

Just as it’s not really accurate to call the manufacture of plastic toys “automated,” it’s not quite right to call Netflix recommendations or Google Maps “algorithmic.” Yes, true, there are algorithms involved, insofar as computers are involved, and computers run software that processes information. But that’s just a part of the story, a theologized version of the diverse, varied array of people, processes, materials, and machines that really carry out the work we shorthand as “technology.” The truth is as simple as it is uninteresting: The world has a lot of stuff in it, all bumping and grinding against one another.

In this post, I want to reframe the discussion of algorithmic culture in terms of AI as defined in my previous blog post—hybrids of organism and machine that combine and blur the two. By acknowledging the necessity of biological/mechanical collaborations, I want to strip computers of their primacy and menace, and reassert the role of human labor in enacting these systems.

Consider the canonical pipeline for Big Data, typically rendered in all the sensibilities of a 2004 website: data is acquired, then prepared, analyzed, and finally interpreted and communicated.

Unsurprisingly, courses on data science focus on preparation and analysis, and sometimes wander into interpretation and communication. After all, this is the site of all the excitement. This is how computers learn to diagnose cancer, speak French, or deliver that perfect video for you. This is the magic that propels Nate Silver into near-mythical reverence in pop culture.

Never mind that we still haven’t asked where exactly this data comes from. In the pipeline’s starting arrow of “Data Acquisition,” we’re granted measurements and metrics and labels like manna from heaven: raw and unadulterated, data is a natural commodity ready to be harvested.

In this lens, machine learning and Big Data are in many ways the perfect technologies to fulfill the neoliberal ideal: features (race, class, gender, Facebook likes, income) can be elided or emphasized a priori, data can be collected in the name of democratic participation, and then results can be returned to the service of all sorts of political or corporate machinery. Against the messiness and plurality of subjective experiences, machine learning promises epistemological clarity.
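That a priori elision of features is not an abstraction; it is usually a single, unremarkable preprocessing step. A minimal sketch (not any particular company’s pipeline — the field names and records here are invented for illustration) of how attributes like race or gender can simply be dropped before the data ever reaches a model:

```python
# Hypothetical illustration: sensitive attributes are elided a priori,
# before any model sees the data. The choice of which features count
# as "sensitive" is itself a human, political decision.
SENSITIVE = {"race", "gender"}

def elide(record: dict) -> dict:
    """Return a copy of the record with sensitive features removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}

user = {"race": "A", "gender": "B", "income": 52000, "likes": 314}
cleaned = elide(user)
print(cleaned)  # only "income" and "likes" survive
```

The point of the sketch is how little friction the elision involves: one line of set membership decides what the downstream machinery is allowed to “know.”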

But like all commodities, data still needs to be produced. As Trebor Scholz and Laura Liu point out in From Mobile Playground to Sweatshop City, digital labor “reveals that we need to define labor itself much more broadly” as economic modes are not the only form of value production. Certainly, paid labor is a huge contributor: Google employed a small army to do meticulous data collection for Google Maps and Books.

There is also a huge body of value generated through unpaid labor, sometimes surrendered unwittingly. Users’ time spent browsing and liking on Facebook has given Menlo Park the world’s largest dataset on consumer behaviors and interests. The company’s robust face recognition software is made possible only by its massive dataset of headshots, generated by users tagging friends in photos.

But I should temper my surveillance paranoia and Big Brother fears. After all, this data isn’t so much coerced from users as given up, usually willingly. Yelp’s handful of ravenous reviewers turn a platform that has little inherent value into a fully functioning service, yet much of the work is done for free and for “community.” I’d be hard pressed to argue these users were exploited in any conventional sense.

Ultimately, it’s hard to believe that many people on the post-NSA Internet really think their data isn’t being collected toward corporate or federal agendas. As Scholz writes, “this kind of volunteerism is part and parcel of the economy … Convenience and this American technology spectacle are paid for with privacy and the complete monetization of each and every part of our lives.”

In recognizing the labor and humans that go into producing “raw” data, I find opportunity for critique and intervention. There are first and foremost concerns about how this data labor fits into traditional notions of labor, exploitation, and value, and possible opportunities for organization and compensation.

I’m curious to imagine what work can be done by speculative design in this area as well. Previous artworks looking at dataveillance have produced critical design pieces that imagine new modes of anonymity. CVDazzle, the Sentient City, and Stealth Wear establish tension between privacy and participation.

Perhaps rather than trying to hide, participation itself can be subversive. By presenting selective caricatures, distortions, or redirections of our data, we could reclaim agency in the face of corporate algorithms and machinery.
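What might that subversive participation look like mechanically? A hypothetical sketch (the topics, event format, and `obfuscate` helper are all invented for illustration, in the spirit of projects like TrackMeNot): dilute a real interest profile with random decoy interactions, so that any profile built from the logged stream is a caricature rather than a portrait.

```python
import random

# Invented decoy interests for the sketch.
DECOY_TOPICS = ["opera", "falconry", "crochet", "curling", "bonsai"]

def obfuscate(real_events, noise_ratio=2, rng=random):
    """Interleave each real event with `noise_ratio` decoy events,
    then shuffle so real and fake interactions are indistinguishable
    to an observer who only sees the stream."""
    stream = []
    for event in real_events:
        stream.append(event)
        for _ in range(noise_ratio):
            stream.append({"topic": rng.choice(DECOY_TOPICS), "decoy": True})
    rng.shuffle(stream)
    return stream

log = obfuscate([{"topic": "politics"}, {"topic": "sneakers"}])
# the 2 real events are now hidden among 4 decoys
```

The design choice worth noticing: the user still participates fully — every real interaction is present — but the signal-to-noise ratio of the resulting dataset is now under the user’s control, not the platform’s.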

When IBM’s current superstar AI, Watson, was fed the contents of the Internet, it gorged itself on the worst of Urban Dictionary. To an unbiased algorithm, the vulgarities and obscenities were epistemological truth, with equal standing to its troves of medical knowledge. Embarrassed by its suddenly foul-mouthed program, IBM was forced to scrub its database clean.

I’m imagining this performance replayed on the algorithms of Amazon or Facebook to highlight how reliant these systems are on our participation. While Scholz is correct in saying that “the corporate Social Web molds us in its image … We are not just on the Social Web but we are becoming it,” the reverse holds as well: we also shape the Social Web. As digital citizens, we aren’t simply at the mercy of our algorithmic deities (as Bogost fears), but can occupy and shape them.
