
A Cheatsheet to the Turing Test


Last January, an open letter signed by scientists, researchers, artists, and business leaders warned the world of the hazards of artificial intelligence research. Elon Musk is scared of AI, the newspapers reported. Stephen Hawking, Bill Gates, and many others are worried about the implications of human-like computer intelligences, and words like “extinction” and “apocalypse” peppered newsfeeds online.

But for me, the focus on whether artificial intelligence could ever surpass natural intelligence (and consequently take over) ignores more interesting and pressing questions. Rather than framing machines and humans as locked in inevitable conflict, what if we instead imagined odd couplings of intelligent agents that render the artificial/natural distinction unimportant?

I have two motives in moving away from a machine/human binary. The first is practical: Clearly separating human and artificial intelligences quickly becomes tricky, and I’d rather not find myself stuck constantly trying to untangle the recursive mess of definitions for intelligence, thought, rationality, humanity, and sentience.

Consider Deep Blue, the chess-playing supercomputer that was the first program to defeat a world champion. After its victory against Garry Kasparov in 1997—widely publicized by IBM—the computer became something of a celebrity, the nucleus around which academics, scholars, and the lay public discussed the future of computation. Artificial intelligence entered pop culture.

However, articulating exactly why we define Deep Blue as artificial intelligence is a bit harder. While certainly an amazing feat of engineering, we’d be hard-pressed to say that the machine was actually thinking. It was an advanced piece of circuitry and programming that could execute the well-defined problem of chess. If anything, the reason we value chess as a test of intellect—its exponential explosion of possible future moves—makes it an easier job for a computer. In making Deep Blue, IBM didn’t make a machine that could think, and therefore could play chess. It turned out to be much easier to make a machine that just played chess.
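
To see concretely what that means, it helps to look at the skeleton of a chess engine. What follows is a minimal, hypothetical sketch of brute-force game-tree search (minimax), not Deep Blue's actual program; legal_moves, apply_move, and score are placeholder functions standing in for the rules and evaluation of whatever game is being searched. Once the problem is well defined, "playing" reduces to enumerating possible futures and picking the best one.

# A minimal, hypothetical sketch of game-tree search (minimax), for illustration only.
# legal_moves, apply_move, and score are caller-supplied placeholders standing in for
# the rules and evaluation of a real game such as chess.
def minimax(state, depth, maximizing, legal_moves, apply_move, score):
    """Return the best score reachable from state, searching depth moves ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return score(state)  # static evaluation at the search horizon or at game over
    results = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                legal_moves, apply_move, score)
        for m in moves
    )
    # Maximize on our turn, minimize on the opponent's: pure enumeration, no "thought".
    return max(results) if maximizing else min(results)

Real engines such as Deep Blue layer enormous engineering on top of this skeleton (alpha-beta pruning, hand-tuned evaluation functions, purpose-built hardware), but the skeleton itself involves nothing we would comfortably call thought.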

But maybe mechanical determinism isn’t grounds for dismissing intelligence. After all, barring the touch of the divine, our own intelligence will inevitably be bound by some set of neurochemical mechanisms of calculation.

And maybe thinking is wholly beside the point when it comes to intelligence. To escape the epistemological trap of pinpointing the origin of thought, Alan Turing proposed in 1950 to place the onus instead on acting intelligently, regardless of the underlying mechanism:

If a machine acts as intelligently as a human being, then it is as intelligent as a human being (Russell and Norvig, 2-3). 

Voila! This standard—enshrined in the Turing Test—allowed researchers and philosophers to move happily forward, unconcerned by the messiness of the internal workings of the intelligent agent, be it neurons or circuitry, genetics or software.

Or so it seemed. In adopting the Turing Test, programmers are trading a philosophical issue (what is thought?) for a political one (who counts as human?). Furthermore, it’s unclear why human intellect should be the most desirable standard: Evaluating a vehicle by its fidelity to human locomotion seems silly and limiting. Surely, cognition shouldn’t be judged the same way.

Finally, it still doesn’t tell us whether Deep Blue was intelligent. IBM’s machine can play a mean game of chess, of course, but it can’t tell a joke, chat about football, or provide solace and advice to a grumpy friend. It would fail spectacularly at being a human. So either we concede it isn’t intelligent, which then seems a terribly restrictive definition, or we compartmentalize human behavior and intelligence—playing checkers, performing arithmetic, holding a conversation—and evaluate a program’s success or failure strictly within those domains.

But then this raises the question: What’s the most atomic aspect of our humanity? If a program can do exceedingly simple tasks well, like performing arithmetic or responding “hello,” has it emulated enough human intelligence?

Given this incoherence in defining artificial intelligence, we turn to Donna Haraway’s notion of the cyborg for resolution. This is my second motive for moving beyond a machine/human binary. Rather than succumbing to paralysis in the face of technology’s shifting and vague boundaries, Haraway embraces the uncertainty, claiming “we are all chimeras, theorized and fabricated hybrids of machine and organism; in short, we are cyborgs. The cyborg is our ontology; it gives us our politics.”

In the remaining blog posts in this series, I’ll be looking at different histories of AI, as well as unexpected or unconventional examples of natural/artificial pairings between human and machine. A stockbroker working with high-speed trading algorithms, police departments tackling crime with massive data-aggregation platforms, and doctors working with supercomputer knowledge engines all are obvious examples of human/machine pairings: Rather than one replacing the other, the two work collaboratively, so that the system—the cyborg whole—opens new possibilities altogether.

But I also want to consider ways these relationships might be unexpected, inverted, or subverted. Amazon Mechanical Turk places real people back inside the software, but as one module among many others, with no primacy over the machine. Search engines and social media feeds—powered by AIs fed petabytes of data—can act as pulpits, marketers, censors, or librarians. Finally, taking metaphors of artificial intelligence to fanciful conclusions, we can even look at the technologies of race as artificial intelligences—heuristics and classifiers enacted by society.

I’m deliberately casting a broad net, not because I think computer scientists would technically count every example as an AI program, but because that breadth gives us a specific way to consider how Haraway’s cyborgs might be walking among us, or liking our photos, or (gasp!) might even be us. This isn’t undertaken as a form of spectacle or a digital freak show, but to imagine opportunities for intervention in technology, culture, and society.
