A sure sign of a dysfunctional object-oriented design is the nudist community anti-pattern. Its defining characteristic is that the design's important objects lack the mutual data privacy that true encapsulation yields. (Still more care is required when working with an implementation that relies primarily on conventions to enforce data hiding.)
At first glance, the information has been properly separated and gathered into individual objects with distinct responsibilities. In practice, however, each object carries out its single responsibility by directly requesting additional data from its fellow objects. Hence, when the shape of that data inevitably changes, not only must the responsible object change but so must every other object that asks it directly for that data.
The remedy is to revise the nature of the interactions between objects to better respect individual privacy. Objects shouldn't need to openly publish their members to accomplish tasks; instead they should send structured messages that demurely ask an object to produce a necessary sub-goal or intermediate result (cf. "tell, don't ask"). If it's difficult to decide which object should be responsible for a role, that could be an indication that the design needs refactoring because requirements have changed.
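To make that concrete, here's a minimal C# sketch; the Order and OrderLine names are invented for illustration, not taken from any real codebase. The "before" class publishes its raw lines, so outside callers break whenever the line data changes shape; the "after" class answers a message with the intermediate result itself.

using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public decimal UnitPrice;
    public int Quantity;
}

// Before: raw data exposed, so every caller that iterates the lines
// must change whenever OrderLine's shape changes.
public class NakedOrder
{
    public List<OrderLine> Lines = new List<OrderLine>();
    // elsewhere: nakedOrder.Lines.Sum(l => l.UnitPrice * l.Quantity);
}

// After: the lines stay private, and callers demurely ask for the
// intermediate result instead of the data behind it.
public class Order
{
    private readonly List<OrderLine> lines = new List<OrderLine>();

    public void Add(OrderLine line) { lines.Add(line); }

    public decimal Total()
    {
        return lines.Sum(l => l.UnitPrice * l.Quantity);
    }
}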
When an object simply must have information from another, the package of information can still be suitably wrapped before an outsider object handles it, e.g. limiting the exposure to a partial view instead of the whole, or restricting access to "look don't touch". Also, by putting the information in a "box" before giving it away, the interaction as a whole can be kept loose, because each participant need only depend on a shared abstraction with semantic meaning (cf. "dependency inversion").
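As a sketch of that kind of wrapping (Schedule and IScheduleReader are hypothetical names), the object below hands outsiders a read-only view through a small shared interface, so each side depends only on the abstraction rather than on the mutable list underneath:

using System.Collections.Generic;
using System.Collections.ObjectModel;

// The shared abstraction with semantic meaning: outsiders depend on
// this interface, not on the Schedule class itself.
public interface IScheduleReader
{
    ReadOnlyCollection<string> Slots { get; }
}

public class Schedule : IScheduleReader
{
    private readonly List<string> slots = new List<string>();

    public void Book(string slot) { slots.Add(slot); }

    // The "box": a read-only wrapper that throws if an outsider tries
    // to modify it, i.e. exposure limited to "look don't touch".
    public ReadOnlyCollection<string> Slots
    {
        get { return slots.AsReadOnly(); }
    }
}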
Friday, October 30, 2009
the unlikelihood of murderous AI
Sometimes suspension of disbelief is difficult. For me, a murderous AI in a story can be one of those obstacles. By murderous AI I don't mean an AI that's designed to kill but malfunctions. I mean an AI that's designed for other purposes but later decides to murder humans instead. It seems so unlikely for a number of reasons.
- Domain. An AI is created for a particular purpose. In order for this purpose to be useful to people and not subject to "scope creep", it probably isn't "imitating a person as completely as possible". Moreover, in order to be still more useful for that circumscribed purpose, expect the AI to be mostly filled with expert knowledge for that purpose rather than a smattering of wide-ranging knowledge. (The AI might be capable of a convincing conversation, but it's likely to be a boring one!) For yet greater cost-effectiveness, the AI's perceptual and categorization processes would be restricted to its purpose as well. For instance, it might not need human-like vision or hearing or touch sensations and so its "world" could be as alien to us as a canine's advanced sense of smell. A data-processing AI could have no traditional senses at all but instead a "feel for datasets". It might not require a mental representation of a "human". Within the confines of these little AI domains, there's unlikely to be a decision to "murder" because the AI doesn't have the "building blocks" for the mere concept of it.
- Goals. Closely connected to its domain, the goals of an AI are likely to be clearly defined in relation to its purpose. The AI has motivations, desires, and emotions only to the extent that its goals guide its thoughts. It'd be counterproductive for the AI to be too flexible about its goals or for its emotions to be free-floating and chaotic. It's also odd to assume that an AI would need to have a set of drives identical to those produced in humans over eons of brain evolution (cue the murderous AI designer's wail: "Why, why did I choose to include paranoid tendencies and overwhelming aggression?...").
- Mortality. Frankly, mortality is a formidably advanced idea, because direct experience of it is impossible. Plus, any degree of indirect understanding, i.e. observing the mortality of other entities, is closely tied to knowledge of, and empathy for, the observed entity. An AI would need to be exceedingly gifted to infer its own risk of mortality and/or effectively carry out a rampage. Even the quite logical conclusion "the most comprehensive method of self-defense is killing everything else" can't be executed without the AI figuring out how to 1) distinguish animate from inanimate for a given being and 2) convert that being from one state to the other. An incomplete grasp of the topic could lead to a definition of death as "quiet and still", in which case "playing dead" is an excellent defensive strategy. Would the AI try to "kill" a ringing mobile phone by dropping it, assess success when the ringing stops, and then diagnose the phone as "undead" when it rings again the next time the caller tries?
- Fail-safe/Dependency/Fallibility. Many people think of this sooner or later when they encounter a story about a murderous AI. Countless devices much simpler than an AI have a "fail-safe" mechanism that immediately halts operation in response to a potentially dangerous condition. And even without a fail-safe, a device still has unavoidable dependencies, such as an energy supply, that would incapacitate it if removed. A third possibility is inherent weaknesses in its intelligence. The story's AI builder must know his or her own work intimately and therefore know a great number of its points of failure or productive paths of argumentation. Admittedly, the murderous AI in the story could be said to overwrite its software or remove pieces of its own hardware to circumvent its fail-safes, but if a device can run at all with a fail-safe disabled, then that fail-safe needed more consideration.
Thursday, October 08, 2009
LINQ has your Schwartzian transform right here...
I was looking around for how to do a decorate-sort-undecorate in .Net. Eventually I realized (yeah, yeah, sometimes I'm slow to catch on) that the LINQ "orderby" clause actually makes it incredibly easy. For instance, if you had some table rows from a database query that you needed to sort according to how the values of one of the columns are ordered in an arbitrary external collection...
var sortedRows = from rw in queryResults.AsEnumerable()
orderby externalCollection.IndexOf((string) rw["columnName"])
select rw;
Oh brave new world, that has such query clauses in it! Back when I first read about the Schwartzian transform in Perl, with its hot "map-on-map" action, it took a little while for me to decipher it (my education up to that point had been almost entirely in the imperative paradigm with some elementary OO dashed in).
Between this and not having to learn about pointers, either, programmers who start out today have it too easy. Toss 'em in the C, I say...
UPDATE: Old news. Obviously, others noticed this long before, and more importantly took the step of empirically confirming that LINQ does a Schwartzian under the covers. (I just assumed as much, because my mental model of LINQ is of clauses transforming sequences into new sequences, never doing things in-place.)
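For comparison, here's the same sort written out as an explicit decorate-sort-undecorate, reusing the queryResults and externalCollection from above. This is only a sketch of roughly what the orderby clause arranges internally, not its literal implementation:

// Decorate: pair each row with its sort key, so IndexOf runs once per row.
var decorated = new List<KeyValuePair<int, DataRow>>();
foreach (DataRow rw in queryResults.AsEnumerable())
    decorated.Add(new KeyValuePair<int, DataRow>(
        externalCollection.IndexOf((string) rw["columnName"]), rw));

// Sort: compare the precomputed keys rather than recomputing them.
decorated.Sort((a, b) => a.Key.CompareTo(b.Key));

// Undecorate: discard the keys and keep the rows.
var sortedRows = decorated.Select(pair => pair.Value);

One caveat if you try this at home: List<T>.Sort isn't a stable sort, whereas LINQ's OrderBy is.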