I'm aware that I'm not a Cylon. Cylons are fictional, first of all.
But it's an excellent metaphor for how I felt when it began to dawn on me that, despite the deeply religious tenets I had cultivated throughout my life, I underwent a "gradual intellectual anti-conversion" which culminated several (fewer than five) years ago. "Intellectual" because emotion didn't participate in the process; there was no shaking my fist at the clouds, and no desperate clinging to my fading assurances. "Anti-conversion" because I view the change as a reversal/erasure of specific beliefs rather than a switch to a new "side" as such. Like the fictional characters who lived as humans only to arrive at the breathtaking realization that they were unmistakable examples of the hated enemy Cylons, I discovered to my surprise that my assumptions had changed sufficiently that the most accurate label for me now is "atheist".
It was "gradual" because while it was happening I sometimes noticed the mental conflicts, but I couldn't recall a definitive moment of selection or rejection. There was no single choice, but at the end there was a belated acknowledgment that some of my beliefs had simply deposed some others in the manner of a silent coup. In fact, further mental struggle had long ago become pointless since one side now occupied all the "territory". Further battles could only be as "asymmetric" as guerrilla warfare.
Before the point of mental consolidation, I considered myself "religious with doubts and reservations". My thinking seemed unsteady, but regardless I continued to act the same. After the crystallization point, the prior motivations and habits were still present yet ceased to exert authority; I saw through them. I could select my level of compliance without compulsion, like adjusting the volume on the playback of a speech. The majestic and imposing scenery shrank down to matte background paintings. However, my new-found godlessness didn't lead to unrestrained lawlessness or carelessness (just as not all Cylons who lived as humans chose immediately to quit behaving like humans). I suppose that as an immigrant to atheism, by default I naturally cherish the customs of the land of my previous patriotism.
Actually, this phenomenon was another jolt to my perspective. The religious are accustomed to categories such as 1) young people who rebel passionately by doing exactly what is forbidden, perhaps when they start higher education at a faraway location, 2) wishy-washy attendees who "backslide" progressively worse in their actions until they can't bear to even pretend to be devotees any longer, 3) people who deny the existence of any god and therefore act as chaotic and selfish manipulators, iconoclasts, and nihilists (I can very well envision someone commenting "Say what you like about paganism, at least it's an ethos"). My gradual intellectual anti-conversion doesn't fit into this tidy taxonomy.
For regardless of my unbelief, I'm surely not one of the bogeymen in the scary third category. Sure, some of my politics and causes are different now, but I'm not aiming to overthrow the right to religious expression and culture, presuming there's no entanglement among important social institutions that should be neutral toward such beliefs. I'm also not against religious evangelism and conversions as long as there's no abusive coercion or exploitation. Frankly, my interest in which religion dominates dropped tremendously when I self-identified as atheist, for whom afterlife and divine judgment are nonexistent consequences of incorrectness. I don't even care about convincing other people of my atheistic point of view, as much as I care that atheists not be stereotyped or persecuted in the larger society.
Furthermore, my present sentiments regarding religion go beyond a lack of competitive zeal against it. I have a lingering appreciation for it. Although not all effects of religion are good, to say the least, I know many people for whom religion is an essential pillar of the mindsets that calm their psyches and motivate them to accomplish amazing progress in themselves and their surroundings. And I think it's baldly inaccurate to accuse the religious of widespread stupidity or weakness. Religion can at times demand either courage or intelligence. Besides, evangelistic atheists should keep in mind that shaming people into following your example is not a highly effective technique anyway, especially if the goal is willing long-term commitment. That tactic certainly played no part in convincing me.
Moreover, these conceded emotional comforts of religion tempt me as well, whenever I personally contemplate death. Given that life is an exceptional configuration of matter and not a separate substance that "inhabits" it, death is just another of the many possible configurations of that matter. My consciousness requires a blood flow of necessary materials to operate; when this flow stops working normally, my consciousness will also halt. Life's fragility is staggering. Without the promise of an afterlife independent of the vagaries of complex biological systems, which can forestall entropy solely for a limited period, the inestimable value of healthy living should be obvious. Responding to death's finality by living dangerously is nonsensical. I should think it obvious that the saying "you only live once" doesn't imply "risk and/or waste the only life you have". To an atheist, "you" are your body.
But if every human is a body and no more, then the fact of the terrifying impermanence of one's own life comes with discomforting companions: every dead human, in all of his or her uniqueness, must be irretrievably gone. The precise configuration of bodily matter that constituted him or her shall never arise again. Indeed, it's far, far less likely than the chance of a thorough shuffle of a pack of 52 playing cards reproducing the exact order it had before the shuffle (1 in 52!, which is roughly 8 × 10^67). Of course biological reproduction perpetuates a lot of the mere genetic bases, but a perfect clone would still develop differently than the source. After all, without the clone retaining an identical set of lifelong memories, it couldn't fool any observers. This is why the question of "legacy", i.e. the lasting consequences "left behind" by a human's actions after death, is extremely pertinent to atheists. In a highly literal sense, someone's legacy is the only "part" that can justifiably be termed eternal ("his" or "her" separated particles of matter and energy are conserved, but that's scant consolation at the human scale).
I understand if people choose to question my unflinching claim that bodily death entails total death. How can I confidently pronounce that all of the deceased, whether family, friends, or martyrs, haven't moved on to an unexperienced "plane", since I clearly haven't gone through it? Without direct evidence, isn't it more fair and conciliatory to postpone discussion? Well, my first, curt reply is the suggestion that everybody else postpone discussion, too. If they won't, then I won't. My second, earnest reply is the blunt admission that evidence, as commonly defined, isn't the solitary foundation of my thoughts about reality. Evidence has a strong tendency to be patchy and/or conflicting. Therefore judgment is indispensable, and based on my judgment of the overall array of evidence, including the pieces of evidence that appear to be absent, death is the end of humans. This statement is the honest reflection of my outlook, as opposed to something comparatively half-hearted like "due to lack of positive evidence, I don't know". I profess a materialistic universe, so there's simply no room for anything supernatural, much less the person-centered afterlife usually described. I readily affirm the incompleteness and uncertainty embedded in my knowledge, but I don't waver on the central axioms.
Odds are, the open mention of the role of judgment/selection provokes objections from two diverging groups: 1) from people who see themselves as pure empiricists, because they say that evidence always "speaks" for itself and any evidence that isn't perfectly unambiguous is invalid, 2) from people who subscribe to a particular revelation of a supporting supernatural realm, because they say that people who use individual and error-prone ability to define ultimate truth will instead substitute their own relative, subjective, and quick-changing preferences. But neither objection adequately captures a holistic and authentic viewpoint of actual human activity. Perception, interpretation, goals, experiences, and actions, etc., feed and affect one another. Personal desires and idiosyncratic mental processing are intermingled throughout mental events. No matter which concepts people use to anchor their thoughts, the coronation of those concepts is not passive but active. Regardless of what they say in debates, the pure empiricist decides how to categorize and evaluate his or her valuable evidence, and the devotee of a supernatural dogma decides how to extrapolate and apply it to the modern situations thrust upon him or her. And the confident atheist is no different...
Except during an interval in which transforming thoughts rise in power sneakily like an unfamiliar tune one can't shake off, and the final remaining task is to put it into words: I am a Cylon.
Tuesday, July 27, 2010
Wednesday, July 14, 2010
persistence of private data by the Nucleus pattern
The encapsulation of data by an object's methods is one of the foremost goals of effective OOP. Restricting exposure of the object's private information prevents other code from accessing it. The inaccessibility ensures that the other code can't depend upon or otherwise share responsibility for the private information. Each object has a single responsibility: a sovereign private realm of information and expertise.
However, this ideal conflicts with the reality of the need to give objects persistence because most programs require data storage in some form. And the required interaction with the storage mechanism clearly isn't the responsibility of the objects that happen to correspond to the data. Yet how can the objects responsible, often known as repositories or data mappers, mediate between external storage and other objects while obeying encapsulation? How can information be both private and persistent without the object itself assuming data storage responsibility?
The "Nucleus" design pattern, very similar to an Active Record, addresses this issue. According to the pattern, a persistent object, similar to a eukaryotic cell, contains a private inner object that acts as its "nucleus". The nucleus object's responsibilities are to hold and facilitate access to the persistent data of the object. Therefore its methods likely consist of nothing more than public "getters and setters" for the data properties (and possibly other methods that merely make the getters and setters more convenient), and one of its constructors has no parameters. It's a DTO or VO. It isn't normally present outside of its containing object since it has no meaningful behavior. Since the nucleus object is private, outside objects affect it only indirectly through the execution of the containing object's set of appropriate information-encapsulating methods. The containing object essentially uses the nucleus object as its own data storage mechanism. The nucleus is the "seed" of the object that contains no more and no less than all the data necessary to exactly replicate the object.
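A minimal sketch of the pattern in Python (class and method names such as `AccountNucleus` are hypothetical; the pattern itself is language-agnostic). The nucleus is a bare DTO with a no-argument constructor, and the containing object exposes behavior but no public data accessors:

```python
class AccountNucleus:
    """The 'nucleus': a DTO that only holds data via plain getters/setters."""

    def __init__(self):  # no-argument constructor, as the pattern requires
        self._balance = 0
        self._owner = ""

    def get_balance(self): return self._balance
    def set_balance(self, value): self._balance = value
    def get_owner(self): return self._owner
    def set_owner(self, value): self._owner = value


class Account:
    """The containing object: meaningful behavior, no public data accessors."""

    def __init__(self, nucleus):
        self._nucleus = nucleus  # private; outside code never touches it

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._nucleus.set_balance(self._nucleus.get_balance() + amount)

    def can_withdraw(self, amount):
        return 0 < amount <= self._nucleus.get_balance()
```

Outside code calls `deposit` and `can_withdraw`; the nucleus is effectively the object's internal data-storage mechanism.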
Naturally, this increase in complexity affects the factory object responsible for assembly. It must initialize the nucleus object, whether based on defaults in the case of a new entity, or an external query performed by the storage-handling object in the case of a continuing entity. Then it must pass the nucleus object to the containing object's constructor. Finally, it takes a pair of weak references to the containing object and nucleus object and "registers" them with the relevant stateful storage-handling object that's embedded in the execution context.
The object pair registration is important. Later, when any code requests the storage-handling object to transfer the state of the containing object to external storage, the storage-handling object can refer to the registration list to match the containing object up to the nucleus object and call the public property methods on the nucleus object to determine what data values to really transfer.
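The factory and registration steps above can be sketched in Python as a condensed, self-contained example (all names hypothetical; the nucleus and entity are pared down to bare stand-ins so the sketch runs on its own). The standard `weakref` module supplies the weak references:

```python
import weakref


class Nucleus:  # condensed DTO stand-in
    def __init__(self):
        self.data = {"owner": "", "balance": 0}


class Entity:  # condensed containing object
    def __init__(self, nucleus):
        self._nucleus = nucleus


class Store:
    """Storage handler: keeps weakref pairs, copies data out of the nucleus."""

    def __init__(self):
        self._registry = []  # (weakref(entity), weakref(nucleus)) pairs
        self.external = {}   # stand-in for external storage, e.g. a DB table

    def register(self, entity, nucleus):
        self._registry.append((weakref.ref(entity), weakref.ref(nucleus)))

    def save(self, entity):
        # Match the containing object to its registered nucleus, then
        # read the nucleus's properties to find the values to transfer.
        for ent_ref, nuc_ref in self._registry:
            if ent_ref() is entity:
                self.external[id(entity)] = dict(nuc_ref().data)


class Factory:
    def __init__(self, store):
        self._store = store

    def create(self, owner):
        nucleus = Nucleus()              # defaults: a brand-new entity
        nucleus.data["owner"] = owner
        entity = Entity(nucleus)
        self._store.register(entity, nucleus)  # weakref pair registration
        return entity


store = Store()
factory = Factory(store)
acct = factory.create("Ada")
store.save(acct)  # store.external now holds acct's data, read via the nucleus
```

The weak references let the registry avoid keeping otherwise-dead entities alive; the entity itself holds the only strong reference to its nucleus.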
Pro:
- The containing object doesn't contain public methods to get or set any private data of its responsibility.
- The containing object has no responsibility for interactions with external storage. It only handles the nucleus object.
- Since the nucleus object's responsibility is a bridge between external storage and the containing object, design compromises for the sake of the external storage implementation (e.g. a specific superclass?) are easier to accommodate without muddying the design and publicly-accessible "face" of the containing object.
Con:
- The nucleus object is one additional object/class for each persistent original object/class that uses the pattern. It's closely tied to the containing object, its factory object, and its storage-handling object.
- The original object must replace persistent data variable members with a private nucleus object member, and the containing object's methods must instead access persistent data values through the nucleus object's properties.
- The containing object's constructors must have a nucleus object parameter.
- The factory must construct the nucleus object, pass it to the containing object's constructor, and pass along weak references to the storage-handling object.
- The storage-handling object must maintain one or more lists of pairs of weak references to containing objects and nucleus objects. It also must use these lists whenever any code requests a storage task.
- The code in the storage-handling object must change to handle the nucleus object instead of the original object.
Wednesday, July 07, 2010
explicit is better than implicit: in favor of static typing
Right now, static typing, rather than dynamic, more closely fits my aesthetic preferences and intellectual biases. And the best expression of the reason is "explicit is better than implicit". The primary problem I have with dynamic typing, i.e. checking types solely as the program executes, is that the type must be left implicit/unknown despite its vital importance to correct reasoning about the code's operation. The crux is whether the upsides of a type being mutable and/or loosely-checked outweigh the downside of it being implicit.
Dynamism. I'm inclined to guess that most of the time most developers don't in fact require or exploit the admittedly-vast possibilities that dynamic typing enables. The effectiveness of tracing compilers and run-time call-site optimizations confirms this. My experiences with C#'s "var" have demonstrated that, for mere avoidance of type declarations for strictly-local variables, type inference almost always works as well as a pure dynamic type. Stylistically speaking, rebinding a name to multiple data types probably has few fans. The binding of a name to different data types is more handy for parameters and returns...
Ultimate API Generality. As for the undeniable generality of dynamically-typed APIs, I'm convinced that most of the time utterly dynamic parameters are less accurate and precise than simple parameters of high abstraction. This is seen in LINQ's impressive range of applicability to generic "IEnumerable<T>" and in how rarely everyday Objective-C code needs to use the dynamic type id. With few exceptions, application code needs to implicitly or explicitly assume something, albeit very little, about a variable's contents in order to meaningfully manipulate it. In languages with dynamic typing, this truth can be concealed by high-level data types built into the syntax, which may share many operators and have implicit coercion rules. Of course, in actuality the API may not react reasonably to every data type passed to it...
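The same idea can be sketched outside LINQ; in Python (a hypothetical example), a function can ask for nothing more than an iterable of numbers — a simple parameter of high abstraction, rather than an utterly dynamic one:

```python
from typing import Iterable


def total(xs: Iterable[float]) -> float:
    """Assumes only 'something iterable yielding numbers' -- very little,
    but not nothing, about the argument's contents."""
    return sum(xs)


# Works across many concrete types without any dynamic tricks:
print(total([1.0, 2.5]))                  # list     -> 3.5
print(total((3, 4)))                      # tuple    -> 7
print(total(x * 0.5 for x in range(4)))   # generator -> 3.0
```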
"Informal Interfaces". According to this design pattern, as long as a group of objects happen to support the same set of methods, the same code can function on the entire group. In essence, the code's actions define an expected interface. The set of required methods might differ by the code's execution paths! This pattern is plainly superior for adapting code and objects in ways that cut across inheritance hierarchies. Yet once more I question whether, most of the time, the benefits are worth the downside in transparency. Every time the code changes, its informal interface could change. If someone wants to pass a new type to the code, the informal interface must either be inferred by perusing the source or by consulting documentation that may be incomplete or obsolete. If an object passed to the code changes, the object could in effect violate the code's informal interface and lead to a bug that surprises users and developers alike. "I replaced a method on the object over here, why did code over there abruptly stop working?" I sympathize with complaints about possible exponential quantities of static interface types, but to me it still seems preferable to the effort that's required to manually track lots of informal interfaces. But in cases of high code churn, developers must expend effort just to update static interface types as requirements and objects iterate...
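The failure mode described above can be sketched in Python (names are hypothetical): the informal interface lives only in the function body, so renaming a method "over here" breaks code "over there" at run time:

```python
class Logger:
    def __init__(self):
        self.lines = []

    def write(self, msg):
        self.lines.append(msg)


def audit(sink, events):
    # The informal interface: 'sink' must have a write(msg) method.
    # Nothing declares this; it's only visible by reading the body.
    for e in events:
        sink.write("AUDIT: " + e)


log = Logger()
audit(log, ["login"])  # works: Logger happens to fit the informal interface


class BrokenLogger:
    def write_line(self, msg):  # method renamed during a refactor...
        pass


try:
    audit(BrokenLogger(), ["login"])
except AttributeError as err:
    # ...and distant code breaks only when this path actually executes
    print("informal interface violated:", err)
```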
Evolutionary Design. There's something appealing about the plea to stop scribbling UML and prod the project forward by pushing out working code regardless of an anemic model of the problem domain. In the earliest phases, the presentation of functioning prototypes is a prime tool for provoking the responses that inform the model. As the model evolves, types and type members come and go at a parallel pace. Isn't it bothersome to explicitly record all these modifications? Well, sure, but there are no shortcuts around the mess of broken abstractions. When the ground underneath drops away, the stuff on top should complain as soon as possible, rather than levitating like a cartoon figure until it looks down, notices the missing ground, and dramatically plummets. Part of the value of explicit types lies precisely in making code dependencies not only discoverable but obvious. This is still more important whenever separate individuals or teams develop the cooperating codebases. The other codebase has a stake in the ongoing evolution of its partner. A "handshake" agreement could be enough to keep everyone carefully synchronized, but it's more error-prone compared to an enforced interface to which everyone can refer. During rapid evolution, automated type-checking is an aid (although not a panacea!) to the task of reconciling and integrating small transforming chunks of data and code to the overall design. Types that match offer at least a minimum of assurance that contradictory interpretations of the domain model haven't slipped in. On the other hand, unrestricted typing allows for a wider range of modeling approaches...
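The "complain as soon as possible" point can be sketched with an explicit interface. Here a Python abstract base class stands in for a static interface type (class names are hypothetical): when a required method disappears during a refactor, the breakage surfaces at object creation rather than at some later call:

```python
from abc import ABC, abstractmethod


class Repository(ABC):
    """An enforced interface to which everyone can refer."""

    @abstractmethod
    def fetch(self, key): ...


class MemoryRepository(Repository):
    def fetch(self, key):
        return {"a": 1}.get(key)


class StaleRepository(Repository):
    pass  # 'fetch' was removed as the model evolved


MemoryRepository()  # fine: the interface is satisfied

try:
    StaleRepository()  # complains immediately, not when fetch is called later
except TypeError as err:
    print("broken abstraction caught early:", err)
```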
Traits/Advanced Object Construction. No disagreement from me. I wish static type schemes would represent more exotic ideas about objects. Still, most of the time, applied ingenuity, e.g. OO design patterns, can accomplish a lot through existing features like composition, delegation, generics, and inheritance.
I want to emphasize that my lean toward static typing for the sake of explicitness isn't my ideal. I direct my top level of respect at languages and platforms that leave the strictness of the types up to the developer. I like a functioning "escape hatch" from static types, to be employed in dire situations. Or the option to mix up languages as I choose for each layer of the project. I judge types to be helpful more often than not, but I reserve the right to toss 'em out when needs demand.
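As one sketch of such an escape hatch, Python's gradual typing offers `typing.Any`, which tells a static checker such as mypy to stand down for a particular value while the rest of the code stays typed (function names here are hypothetical):

```python
from typing import Any


def shout(msg: str) -> str:  # ordinary, fully typed code
    return msg.upper() + "!"


def describe(x: Any) -> str:
    # 'Any' is the escape hatch: a static checker accepts any use of x,
    # deferring all checks to run time for this one value.
    return f"{type(x).__name__}: {x!r}"


print(shout("typed"))
print(describe([1, 2]))
```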
foolish usage of static and dynamic typing
Whenever I read the eternal sniping between the cheerleaders of static and dynamic typing, I'm often struck by the similarity of the arguments for each side (including identical words like "maintainable"; let's not even get started on the multifarious definitions of "scalability"). Today's example is "I admit that your complaints about [static, dynamic] typing are valid...but only if someone is doing it incorrectly. Don't do that." Assuming that sometimes the criticisms of a typing strategy are really targeted at specimens of its foolish usage, I've therefore paired common criticisms with preventive usage guidelines. The point is to neither tie yourself to the mast when you're static-typing nor wander overboard when you're dynamic-typing.
Static
- "I detest subjecting my fingers to the burden of type declarations." Use an IDE with type-completion and/or a language with type inference.
- "The types aren't helping to document the code." Stop reflexively using the "root object" type or "magic-value" strings as parameter and return types. Y'know, these days the space/speed/development cost of a short semantically-named class is not that much, and the benefit in flexibility and code centralization might pay off big later on - when all code that uses an object is doing the same "boilerplate" tasks on it, do you suppose that just maybe those tasks belong in methods? If a class isn't appropriate, consider an enumeration (or even a struct in C#).
- "Branches on run-time types (and the corresponding casts in each branch) are bloating my code." Usually, subtypes should be directly substitutable without changing code, so the type hierarchy probably isn't right. Other possibilities include switching to an interface type or breaking the type-dependent code into separate methods of identical name and distinct signatures so that the language can handle the type-dispatch.
- "I can't reuse code when it's tightly coupled to different types than mine." Write code to a minimal interface type, perhaps a type that's in the standard library already. Use factories, factory patterns, and dependency injection to obtain fresh instances of objects.
- "I need to change an interface." Consider an interface subtype or phasing in a separate interface altogether. Interfaces should be small and largely unchanging.
- "When the types in the code all match, people don't realize that the code can still be wrong." Write and run automated unit tests to verify behavior. Mocks and proxies are easier than ever.
- "It's impossible to figure out a function's expected parameters, return values, and error signals without scanning the source." In comments and documentation, be clear about what the function needs and how it responds to the range of possible data. In lieu of this, furnish abundant examples for code monkeys to ape.
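Several of these guidelines ("write code to a minimal interface type", "stop reflexively using the root object type") can be illustrated with Python's structural `Protocol`, standing in here for a small static interface type (names are hypothetical):

```python
import io
from typing import Protocol


class Readable(Protocol):
    """A minimal interface: declare only what the code actually needs."""

    def read(self) -> str: ...


def first_line(src: Readable) -> str:
    # Coupled to the minimal interface, not to any concrete type.
    return src.read().splitlines()[0]


# io.StringIO already satisfies the interface, so the code is reusable
# with standard-library types and with any future type that fits:
print(first_line(io.StringIO("hello\nworld")))
```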
Dynamic
- "Sometimes I don't reread stuff after I type it. What if a typo leads to the allocation of a new variable in place of a reassignment?" This error is so well-known and typical that the language runtime likely can warn of or optionally forbid assignments to undeclared/uninitialized variables.
- "Without types, no one can automatically verify that all the pieces of code fit together right." Write and run automated unit tests that match the specific scenarios of object interactions.
- "Beyond a particular threshold scripts suck, so why would I wish to implement a large-scale project in that manner?" A language might allow the absence of namespaces/modules and OO, but developers who long to remain sane will nevertheless divide their code into comprehensible sections. Long-lived proverbs like "don't repeat yourself" and "beware global state" belong as much on the cutting-edge as in the "enterprise".
- "My head spins in response to the capability for multiple inheritance and for classes to modify one another and to redirect messages, etc." Too much of this can indeed produce a bewildering tangle of dependencies and overrides-of-overrides, which is why the prudent path is often boring and normal rather than intricate and exceptional. Mixin classes should be well-defined and not overreach.
- "In order to read the code I must mentally track the current implied types of variables' values and the available methods on objects." Descriptive names and comments greatly reduce the odds that the code reader would misinterpret the intended variable contents. Meanwhile, in normal circumstances a developer should avoid changes to an object's significant behavior after the time of creation/class definition, thus ensuring that the relevant class definitions (and dynamic initialization code) are sufficient clues for deducing the object's abilities.
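The recurring advice above, to verify behavior with automated tests, applies to both camps. As a sketch, a mock object can confirm that duck-typed pieces fit together in a specific interaction scenario (names are hypothetical):

```python
from unittest import mock


def notify(mailer, user):
    # Dynamically typed: 'mailer' only needs a send(address, body) method.
    mailer.send(user["email"], "Welcome, " + user["name"])


# A mock stands in for any object honoring the informal interface,
# and records the interaction so the test can verify it:
mailer = mock.Mock()
notify(mailer, {"email": "ada@example.com", "name": "Ada"})
mailer.send.assert_called_once_with("ada@example.com", "Welcome, Ada")
print("interaction verified")
```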