Early on in my coursework I started to examine the rhetorical implications of surveillance, thinking epitomized by the rather broad question: "What can rhetoric tell us about surveillance, and what can surveillance tell us about rhetoric?" This series of newsletters is an attempt to answer that question. I plan to pull from an interdisciplinary mix of scholarship that includes, among others, rhetoric and composition, media studies, information studies, and surveillance studies. Just as I have students in my critical research course work through connections between academic sources (peer-reviewed scholarship) and popular sources (essays, YouTube videos, games, podcasts, news articles, etc.), this monthly newsletter will have similar aims. Each month, I plan to pull a piece of scholarship that deals with some facet of surveillance rhetorics and then pair it with a couple of popular sources that add much-needed texture. These might be examples of principles in action, or they could be something like a TED Talk or podcast that explains the idea differently. My hope is to bring surveillance and rhetoric a little closer together, a gesture I hope starts new interdisciplinary conversations.
1. Willson, Michele. “Algorithms (and the) Everyday.” Information, Communication & Society, vol. 20, no. 1, Jan. 2017, pp. 137–50. Crossref, doi:10.1080/1369118X.2016.1200645.
A good place to start thinking about the role surveillance plays in our contemporary lives is to consider one of its most important tools: algorithms. Willson argues that our everyday activities in online spaces are caught up in and maintained by algorithms, for even the most mundane activities we perform online rely on algorithms "to sort, manipulate, analyse and predict" (139). We place a great deal of trust in these less-than-visible collaborators as we offload more and more of our decision-making onto them, what Willson refers to as the algorithmisation of everyday practices (143). We do not do this because we are careless per se; we do it because we desire to feel more certain in our decision-making, and algorithms seem to offer a less biased approach to decisions. What we overlook, however, is how much our biases are built into the datasets that algorithms draw from, the data itself inevitably containing the very biases we sought to avoid in deploying an algorithm in the first place. Algorithms lack the ability to nuance the information they draw from to make decisions; unlike humans, they are not able to attend to social, cultural, and political context, what I would call the rhetorical implications that come with human actions (145). Something else we overlook is how pervasive algorithms have to be in order to aid in our everyday decision-making: how much we already rely on surveillance and data-monitoring (dataveillance) in order for our algorithmically powered systems to work (144).
In this 2017 episode of the podcast 99% Invisible, Roman Mars interviews Cathy O'Neil about her book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. While the interview acts as a brief synopsis of the book's main argument, the episode opens with a viral video incident in which a man is forcibly removed from a plane because the airline overbooked the flight and the algorithm it used to determine who should be removed chose him. More than just an example of digital algorithms impacting the material world, this is a rather "everyday-esque" experience, one not out of the realm of possibility for anyone partaking in air travel. What I find interesting is how O'Neil describes the difference between human behavior and an algorithm: the indignity the removed man faces is enacted by humans; the algorithm made a decision, but it did not choose how that decision would be enacted. Later in the episode, O'Neil explains how algorithms are aiding overwhelmed legal systems in sentencing decisions, but the datasets those algorithms rely on are faulty at best. Rather than diminishing bias, they instead serve to further perpetuate inequalities for over-policed communities. More than anything, I see O'Neil critiquing the assumed objectivity of algorithms, as well as their design in one context and subsequent use in another. Her book is a really interesting read, as she offers modeling as a concept: something we do in our heads every day (the mental models we rely on to make sense of the world) but also build into the algorithms we increasingly rely on.
Continuing with the data collection of "everyday" activity, this piece from the New York Times reports on the industry built around tracking people's movements and then selling the data. I think this quote encapsulates the article quite well: "Like many consumers, Ms. Magrin knew that apps could track people's movements. But as smartphones have become ubiquitous and technology more accurate, an industry of snooping on people's daily habits has spread and grown more intrusive." There is a gray area with consent here: the difference between what our data can reveal about us versus what it is actually used for (most companies claim they want not so much to track individual users as to find behavioral trends). The big takeaway for me is that government surveillance is not the only kind interested in tracking your location, and while this is not a mind-blowing revelation, it does raise questions about how comfortable we should be with these capabilities resting in the hands of private entities with very little actual oversight. It's not just a matter of "privacy is dead" or "data collection is inevitable" but, as the reporters clarify, how much we should be in the know about how this data is used (questions of 'how,' 'for whom,' and 'to what ends').