What is Post-Drone? Part II: The Bumpy Road Towards Artificial Intelligence

This past January, Elon Musk donated $10 million to the Future of Life Institute (FLI) “to run a global research program aimed at keeping [artificial intelligence] beneficial to humanity.”  The brains behind the likes of PayPal, Tesla Motors, and SpaceX, Musk is the founder of a billion-dollar technology empire – described by many as the closest thing to a real-life Tony Stark.  So yes, it’s a pretty big deal when he joins the likes of Bill Gates and Stephen Hawking in advocating for greater control over the development of artificially intelligent technologies.  Why now?  Well, according to Musk, “with artificial intelligence we’re summoning the demon.”  From the perspective of our current historical moment, it seems more important than ever to provide regimented oversight of the development of machine-learning algorithms, computer vision software, and the other advanced technologies embedded within the AI narrative.

Time and again, anxieties surrounding the coming robot apocalypse focus on the ways in which our machines will eventually be able to outgun/run/smart us.  The concerns voiced by Musk and his fellow techno-futurists often stem from visions of robots that are on par with humans – a notion that brings to mind James Cameron’s Terminator franchise.  In most cases, advocates deem it necessary to ensure the benevolent programmability of future intelligent robotics – something along the lines of Ronald Arkin’s “Ethical Governor,” an architecture that interposes an ethical constraint check between a weapon system’s planning software and its actuators.  In their open letter detailing research priorities for “beneficial artificial intelligence,” the FLI clearly states that future research should be “aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”  This all seems like rather sound logic.  Let’s have the leading roboticists and computer scientists prioritize the ability to direct the machines of the future towards beneficent tasks.  From the ground up, we must aim to develop systems that are designed not to be Skynet.  By all means, go ahead.
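To make the constraint-based approach concrete: the governor pattern vetoes any proposed lethal action that violates a hard constraint before it ever reaches an actuator.  A minimal Python sketch of that pattern might look like the following – the specific constraints, thresholds, and data structures here are my own illustration, not Arkin’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_type: str           # perception system's label for the target
    confidence: float          # perception system's confidence in that label
    collateral_estimate: int   # estimated non-combatants in the blast radius

def governor_permits(action: ProposedAction) -> bool:
    """Veto any lethal action that violates a hard constraint.

    These checks are illustrative stand-ins for rules of engagement;
    a real governor would encode the laws of war, not three if-statements.
    """
    if action.target_type != "combatant":
        return False                 # never engage a non-combatant label
    if action.confidence < 0.99:
        return False                 # uncertain identification -> hold fire
    if action.collateral_estimate > 0:
        return False                 # any expected collateral -> hold fire
    return True

# The planner proposes; the governor disposes.
action = ProposedAction(target_type="combatant", confidence=0.97,
                        collateral_estimate=0)
print("engage" if governor_permits(action) else "hold fire")  # -> hold fire
```

Note what the sketch quietly assumes: every input to the governor – the target label, the confidence score, the collateral estimate – is itself the output of imperfect perception software.  The checks are only as trustworthy as the numbers fed into them.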

This is where my developing theory of the post-drone comes into play.  It is important to acknowledge that the future of unmanned military technology is intimately entangled with the goals of the artificial intelligence community.  As the U.S. Department of Defense (DoD) has stated time and again in its yearly publications of the Unmanned Systems Integrated Roadmap, “autonomy will increase warfighter effectiveness by enhancing unmanned systems capabilities and expanding their capacity to effect results in the battlespace.”  Technical details aside, much of the present military rhetoric assumes that the leap from direct human operation to full autonomy will occur in one fell swoop.  The drive to increase the efficacy of the future “warfighter” simultaneously occludes the fact that the road ahead is by no means straightforward.  For example, the development of sufficiently accurate computer vision algorithms will likely suffer a great many hiccups before reaching an acceptable state of operability.  As Patrick Lin points out, “a robot already has a hard time distinguishing a terrorist pointing a gun at it from, say, a girl pointing an ice cream cone at it.”  In this instance, the notion of “debugging” takes on a very different tone.
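Lin’s example is worth unpacking, because the failure is structural rather than a matter of polish.  A vision system sees shapes and surfaces, not intentions; two objects with similar geometry produce similar feature vectors, and a classifier can only draw boundaries in that feature space.  A toy Python sketch – with entirely made-up, hand-built features standing in for a real system’s learned ones – shows how a prediction can be confidently wrong:

```python
import math

# Hypothetical hand-built features: (elongation, arm_angle, glint).
# A raised rifle and a raised ice cream cone are geometrically close.
TEMPLATES = {
    "rifle":          (0.90, 0.80, 0.70),
    "ice_cream_cone": (0.85, 0.80, 0.30),
}

def classify(features):
    """Nearest-template classifier with a naive confidence score."""
    ranked = sorted((math.dist(features, t), label)
                    for label, t in TEMPLATES.items())
    (d1, label), (d2, _) = ranked[0], ranked[1]
    return label, d2 / (d1 + d2)   # near-tie -> confidence near 0.5

# A cone glinting in the sun drifts toward the "rifle" template.
print(classify((0.88, 0.80, 0.55)))   # -> ('rifle', ~0.62): wrong target
```

Real systems use learned features rather than three hand-picked numbers, but the underlying problem carries over: the decision boundary is induced from training data, and the battlefield reliably supplies inputs from outside that data.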

In all likelihood, our fears of artificially intelligent killer robots will remain secondary to the more realistic scenario of the near-term post-drone.  In this imagined future, drones designed to be autonomously “capable” are deployed into combat scenarios long before their programming has been fully debugged.  Faced for the first time with an unimaginably complex field of battle, this incarnation of the post-drone is most dangerous as an accidental glitch rather than a perfected assassin.  Ten years down the road, we can make the theoretical assumption that computer science will not have advanced to the point of human-machine parity.  The world will be placed in jeopardy simply because autonomous weapons will be fielded without a full understanding of the consequences.  Deployed to perform tasks normally entrusted to humans, near-term technologies simply won’t be able to keep up.  As Human Rights Watch puts it in their Case Against Killer Robots,

“To comply with international humanitarian law, fully autonomous weapons would need human qualities that they inherently lack. In particular, such robots would not have the ability to relate to other humans and understand their intentions. They could find it difficult to process complex and evolving situations effectively and could not apply human judgment to deal with subjective tests.”

Faced with the chaos of the Real, this collision of the digital with the analog will inevitably cause unforeseen side effects.  Pre-programmed targeting criteria will falter when encountering actual enemy subjects; swarming tactics will dissociate when conditions become less than ideal.  The near-term post-drone implodes as a result of this inability to cope.

Realistically, the development of artificially intelligent systems will continue as planned.  Silicon Valley philanthropists will steer the research and development of these systems, and techno-futurists will advocate for “benevolent” robots incapable of causing harm.  Crucially, this form of guidance assumes an idealistic state of control – the ability to master our machines and foreclose any possibility of glitches or bugs.  In the creation of human-oriented AI, there is a very real danger that something altogether alien will arise in its place.  Like Mary Shelley’s infamous monster, the near-term post-drone is the bastardized manifestation of an out-of-control technological pursuit.  Ill-equipped and underestimated, our creations always find a way to come back and bite us.
