While most of the information I learned regarding Big Data struck me as absolutely fascinating (and, to a degree, terrifying), the topic of predictive analytics has maybe landed in first place. Generally, it uses data mining techniques to extract information in order to predict trends and behaviors, and I find there is an astonishing quality about it, especially considering the kind of all-encompassing power it seems to possess. During a class discussion on predictive analytics, we watched a video of interviews with big-shot Google employees who discussed their somewhat freakish passion for algorithms and data mining. I don’t at all mean to sound condescending about the sheer talent of these computer science geniuses. They’ve built an empire, and I am well aware that I could never dream of doing the type of work to which they’ve dedicated huge portions of their lives. Still, I find the passion in these people a bit freakish, and I have some questions: namely, do the programmers developing such involved algorithms for Google’s impeccable search engine—the one that generates answers so tailored to each individual—ever consider the rather philosophical implications?
That question is long, but it’s not meant to be challenging. I merely ache to know if the technophiles ever worry about the possibility of human beings becoming obsolete. I don’t believe humans will quite literally die out because of the work of predictive analytics. However, I think we can analyze this subject and recognize the space it provides for intellectual exploration. We need to examine some of our losses alongside our gains—or maybe I shouldn’t quite word it that way. Regardless, if our computers continue acquiring algorithms to determine things in our lives—which shoes we Add To Cart, which jobs advance us to the next round of interviews, which college or university hits us with a rejection letter—then where does human insight go? I implore you, my reader, to ponder what all this movement toward automation could mean for us. What happens when an algorithm inevitably goes all butterfingers on something an actual person would definitely catch? Predictive analytics could be a slippery slope.
These guys and gals from Google spoke about their desire to make stuff, for lack of a better word, easier for the rest of us. We humans wanted more, so our GoogleGods gave us more. First, we could browse the Internet and type our elementary searches into the handy search bar. This option garnered us many possibilities, many answers. But it wasn’t enough, because we humans—selfish by nature—wanted more. We needed it. So Google gave us perks, like “Images” and “News.” They didn’t stop there, though. And now—now we are here in our filter bubbles. Our search queries feed information to an algorithm, which, in turn, provides us with links and whatnot that we’ll more than likely respond to positively, based on our past searches. The data we output comes back ten times over. We’re doing somersaults in a cyberfield of automated information because an algorithm has apparently learned our likes and dislikes. Yikes?
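Purely as a thought experiment (not anything Google has published), the feedback loop described above can be sketched in a few lines of Python. The topics, links, and scoring rule here are all invented stand-ins for the real signals a search engine would use:

```python
from collections import Counter

def personalized_rank(candidates, click_history):
    """Rank candidate links by how often their topic appears in past clicks.

    candidates: list of (url, topic) pairs; click_history: list of topics
    the user clicked before. Both are hypothetical stand-ins for the
    richer signals a real engine would track.
    """
    topic_counts = Counter(click_history)
    # Links whose topic the user clicked most often float to the top.
    # Repeat this over time and the "filter bubble" feedback loop forms:
    # what you clicked before shapes what you are shown next.
    return sorted(candidates, key=lambda c: topic_counts[c[1]], reverse=True)

history = ["shoes", "shoes", "travel"]
links = [("news.example/politics", "politics"),
         ("shop.example/sandals", "shoes"),
         ("blog.example/rome", "travel")]
ranked = personalized_rank(links, history)
```

Even this toy version shows the essay’s point: the politics link sinks to the bottom not because it is irrelevant, but because it was never clicked before.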
In many ways, it is nice to know that Google bestows the most relevant answers possible. Except that I want out of my filter bubble. Why does an algorithm get to decide what is most relevant to me? What about my human insight? It seems awful that my Internet habits have essentially pinned me as a type of person with types of interests that follow me around online. What’s worse is that, in more and more cases, I think the online is braiding itself into the offline. You can’t really escape, can you? And forgive me, O Google, as I have incidentally placed a large portion of the blame on you. I don’t think we can blame one particular company. It simply isn’t fair. But I do believe we need to look at this technological powerhouse and the avenues it has paved.
Contemporary society is so willfully swallowed up by the need to have a digital presence and technological fortress that the consequences often go unweighed. Of course, I am not implying that we should all throw our laptops into the ocean screaming, “Free at last!” All I’m really trying to say is that I’m an advocate for breaking out of our filter bubbles as one way to find a balance between total technological immersion and complete social reclusion. I propose we educate ourselves on topics like predictive analytics to discover all the possibilities and limitations. Ask yourself: what role do I play in this digital age, and what does that role mean?
Part of me wonders if this minor existential crisis stems from a place of intense pride. As in, why does the Internet get to know a thing about me? (Okay, fine, because I put it on there.) I have this theory that human beings will become so absorbed by all our inventions and advances that we may forget what we wanted them for in the first place: connection. The simplest and most obvious answer for why the Internet was invented was so that humans could connect to other humans. Now, all these years after its birth, there has been an undeniable and immense degradation of human connection. An example: Why would I ever roll down my window at a street corner to ask a stranger for directions when I can ask Siri instead? And isn’t technology really just an outlet for us to preserve ourselves and evade our mortality? If what I do online never disappears, am I attempting a kind of pseudo-immortality? Who do I complain to here? It is important to remember that I cannot dump my big, often unanswerable questions onto one single aspect of my life, i.e., the computer—the computer and its cousins are not the source of all misery or happiness. To exist in this day and age, there needs to be a middle zone where I feel appropriately connected to the world around me. I’m still working on that.
When I went to consider my personal data for a predictive analytics assignment, my thoughts immediately fell into a cloud of financial guilt: I do a lot of online shopping (or really, online browsing). I know that retail companies have a scarily accurate sense of my style. I notice this the most with shoes. I typically view various styles of gladiator sandals or heels with a chunky wooden sole, and I’ve realized that the “Shoes You May Also Like” tend to be similar types of sandals or heels. Another huge factor in my online shoe shopping is color. In real life, I generally wear shoes that are some shade of brown, and that is also generally the only color I look at online. If data analysts enhanced their predictive skills, they would not only look at the types of shoes I am purchasing online but would also be able to predict the colors I might be interested in. I suppose this would mean the predictive analysts could develop some stalker advancement in their algorithm to detect my shoe color preference.
I tested this out on the Urban Outfitters app and found that the suggested shoes came in other colors—primarily black. The thing is, I kind of hate black shoes and never buy them. Of course, until just now, I’ve never explicitly stated that online. However, looking at fairly simple data would sort this out. Data analysts at UO, for example, could go through my recent and past shoe purchases to determine the style of shoe I am most inclined to buy, as well as the color. Considering I have never purchased black shoes from this company, it would be an obvious find. Urban Outfitters is a sister company to Anthropologie and Free People, and I have the apps for those stores on my phone, too. I check out the same styles of shoes on all three applications. Data analysts could put these bits of information together and target me more accurately on each app. From my personal data, it sounds like all I care about is shopping, which isn’t true, but it’s also my best example… It’s funny, though, because predictive analytics can’t decipher if or when I am actually going to buy something from a store. An algorithm will never truly know my thoughts and feelings on shoes, or anything, for that matter. Most of the time I am browsing out of boredom. And with something as trivial as online shopping habits, I’m not going to alter my online behavior. Predictive analytics will just churn through the new, different data anyway.
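The “obvious find” described above really is simple data work. Here is a hypothetical sketch in Python, with invented purchase records (nothing from any retailer’s actual system), showing how a color preference falls out of a purchase history:

```python
from collections import Counter

def preferred_colors(purchases):
    """Return colors ranked by how often they appear in past purchases.

    purchases: list of (style, color) pairs. The data below is made up
    for illustration; a real retailer would mine far richer records.
    """
    counts = Counter(color for _, color in purchases)
    # most_common() orders colors from most- to least-purchased.
    return [color for color, _ in counts.most_common()]

past = [("gladiator sandal", "brown"),
        ("wooden heel", "tan"),
        ("gladiator sandal", "brown")]
ranking = preferred_colors(past)

# A recommender could then drop colors absent from the history,
# which would filter out the black shoes the app keeps suggesting:
suggest = [c for c in ["black", "brown", "tan"] if c in ranking]
```

Since black never appears in the purchase history, it never makes the suggestion list, which is exactly the improvement the paragraph above imagines.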
My reaction to the possibility of being specifically targeted online is one of low-key panic. What I mean is that I often dwell on my existence in an increasingly technophilic culture, and I do this with small-scale despondence, yet my digital contributions stay the same. I’m not bowing out of this innovative era. Still, it is frightening to consider that an algorithm has a piece of me figured out. It is also sad to consider the importance modern people, at least in America, place on computers and technology. I worry about the human ability to connect and the loss of simple intimacy. I worry about my own ability to figure things out for myself—it’s way too easy to pick up my smartphone and type something into Google. I wonder, will I soon stop calling my mother when I can just hop on the Internet to find something out? Do I really need to go to the orthopedist if, instead, I can read online about how to heal a bad ankle sprain?
As I said earlier, there’s really no escaping the ways of the Internet, no way to stop giving away our personal data. We can, however, control the way we frame that inability to escape. Basically, the tectonic plates of my soul don’t need to converge in fear when I reflect (ruminate) on concepts like predictive analytics. To me, contemplation of this hyper-digital age has become endlessly important. It is both peculiar and captivating to consider the metaphysical grounds that have been uprooted by our very tangible technological advances.
(Picture credit to @textsfromyourexistentialist on Instagram.)