Friday, October 30, 2009

the unlikelihood of murderous AI

Sometimes suspension of disbelief is difficult. For me, a murderous AI in a story can be one of those obstacles. By murderous AI I don't mean an AI that's designed to kill but malfunctions. I mean an AI that's designed for other purposes but later decides to murder humans instead. It seems so unlikely for a number of reasons.
  • Domain. An AI is created for a particular purpose. For that purpose to be useful to people and not subject to "scope creep", it probably isn't "imitating a person as completely as possible". Moreover, to be still more useful for that circumscribed purpose, expect the AI to be filled mostly with expert knowledge for that purpose rather than a smattering of wide-ranging knowledge. (The AI might be capable of a convincing conversation, but it's likely to be a boring one!) For yet greater cost-effectiveness, the AI's perceptual and categorization processes would be restricted to its purpose as well. For instance, it might not need human-like vision or hearing or touch, so its "world" could be as alien to us as a canine's advanced sense of smell. A data-processing AI could have no traditional senses at all but instead a "feel for datasets". It might not even require a mental representation of a "human". Within the confines of these little AI domains, a decision to "murder" is unlikely because the AI doesn't have the building blocks for the mere concept of it (a sketch of such a narrowly scoped agent follows this list).
  • Goals. Closely connected to its domain, the goals of an AI are likely to be clearly defined in relation to its purpose. The AI has motivations, desires, and emotions only to the extent that its goals guide its thoughts. It'd be counterproductive for the AI to be too flexible about its goals or for its emotions to be free-floating and chaotic. It's also odd to assume that an AI would need a set of drives identical to the one produced in humans over eons of brain evolution (cue the murderous AI's designer's wail: "Why, why did I choose to include paranoid tendencies and overwhelming aggression?...").
  • Mortality. Frankly, mortality is a formidably advanced idea because direct experience of it is impossible. Moreover, any degree of indirect understanding, i.e. observing the mortality of other entities, is closely tied to knowledge of, and empathy with, the observed entity. An AI would need to be exceedingly gifted to infer its own risk of mortality and/or effectively carry out a rampage. Even the quite logical conclusion "the most comprehensive method of self-defense is killing everything else" can't be executed without the AI figuring out how to 1) distinguish animate from inanimate for a given being and 2) convert that being from one state to the other. An incomplete grasp of the topic could lead to a working definition of death as "quiet and still", in which case "playing dead" is an excellent defensive strategy. Would the AI try to "kill" a ringing mobile phone by dropping it, declare success when the ringing stops, and diagnose the phone as "undead" when it resumes ringing the next time the caller tries?
  • Fail-safe/Dependency/Fallibility. Many people think of this sooner or later when they encounter a story about a murderous AI. Countless devices much simpler than AI have a "fail-safe" mechanism in which operation immediately halts in response to a potentially dangerous condition. Even without a fail-safe, a device still has unavoidable dependencies, such as an energy supply, and removing those dependencies would incapacitate or break it. A third possibility is inherent weaknesses in its intelligence: the story's AI builder must know his or her own work intimately and therefore know a great number of points of failure or productive lines of argument. Admittedly, the murderous AI in the story could be said to overwrite its software or remove pieces of its own hardware to circumvent fail-safes, but if a device can run without the fail-safe enabled, then the fail-safe needed more consideration (the basic fail-safe pattern is sketched in code after this list).
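To make the domain and goal points concrete, here's a minimal sketch in Python of how a narrowly scoped AI tends to be put together (the names and the data-cleaning task are purely hypothetical, not any real system): its entire sensory vocabulary and its single goal are defined over its own problem domain, so a concept like "person", let alone harming one, never enters its state space.

    # Hypothetical sketch: a narrowly scoped agent whose whole "world" is
    # batches of numbers and whose only goal is defined over that world.
    from statistics import mean

    class DataCleaningAgent:
        """Goal: keep the mean of each incoming batch inside a target band.

        The goal is fixed at construction time and expressed purely in
        domain terms; "flexibility" can only mean choosing among the
        domain actions below (accept, discard, rescale)."""

        def __init__(self, target_mean, tolerance):
            self.target_mean = target_mean
            self.tolerance = tolerance

        def act(self, batch):
            batch_mean = mean(batch)
            if abs(batch_mean - self.target_mean) <= self.tolerance:
                return "accept"        # batch already satisfies the goal
            if batch_mean > self.target_mean:
                return "discard"       # the harshest action in its repertoire
            return "rescale"           # nudge the data toward the target

    agent = DataCleaningAgent(target_mean=0.0, tolerance=0.5)
    print(agent.act([0.1, -0.2, 0.3]))   # -> accept
    print(agent.act([5.0, 6.0, 7.0]))    # -> discard

Nothing in this agent's percepts or actions can refer to anything outside its dataset, which is the point: the building blocks for broader mischief simply aren't there.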
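On the fail-safe point, the usual pattern is simple: every cycle of operation passes through a safety check first, and a dangerous condition halts the device rather than letting it continue in a degraded state. A rough sketch, again with made-up names and a temperature limit standing in for whatever the dangerous condition might be:

    # Rough sketch of the fail-safe pattern: the check is consulted on every
    # cycle, and tripping it halts the device until a manual reset.
    class FailSafeError(RuntimeError):
        """Raised when a potentially dangerous condition is detected."""

    class Device:
        def __init__(self, max_temperature):
            self.max_temperature = max_temperature
            self.halted = False

        def run_cycle(self, temperature):
            if self.halted:
                raise FailSafeError("device is halted; manual reset required")
            if temperature > self.max_temperature:
                self.halted = True
                raise FailSafeError("unsafe temperature %.1f; halting" % temperature)
            return "cycle completed"

    device = Device(max_temperature=90.0)
    print(device.run_cycle(75.0))    # normal operation
    try:
        device.run_cycle(120.0)      # dangerous condition -> immediate halt
    except FailSafeError as err:
        print(err)

If the device can be made to keep running with that check bypassed, the check was bolted on rather than designed in, which is exactly the complaint above.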
Although I'm not worried about the prospect of an AI turning traditionally murderous, I'm less sanguine about the threat of an AI turning misguided or inscrutable, two problems that could quite naturally arise in the development of an advanced intelligence that can flexibly learn. A potentially dangerous AI should be well-protected against false assumptions and hasty generalizations (has Asimov taught us nothing?). Based on my experiences with technology, an AI that "doesn't know what it's doing" is a likelier threat than an AI that "has just cultivated a predilection for carnage".
