Wednesday, June 30, 2010

peeve no. 260 is factional prejudice in IT

Faction: a group or clique within a larger group, party, government, organization, or the like
I don't know what it is about IT that results in professionals forming factions. Use text editor X! Align your punctuation thusly! Avoid all software from company Q! Programming language R is your cool friend that all the popular kids like! Every problem should be flattened via solution U! Your VCS is DOA!

I understand that people develop attachments to their favorite tools and passion for their work; that's fine. What drives me nuts is the all-too-common exaggeration of these feelings. Acting as a cheerleader for your chosen IT factions can be fun, but if you ground your personal identity in it then you've officially gone mental. An IT professional should make technological decisions based on actual requirements and well-understood trade-offs, not based on which option fits his or her personal "style". Of course, usability and even nebulous subjective appeal are legitimate factors in the decision simply because a difficult and hated option is detrimental to productivity and morale. The difference is a decision-maker who critically weighs all applicable criteria instead of just always "voting with the faction". Emotional investment is present without overpowering every other factor. Someone doesn't say "I'm a BLUB programmer". Rather, "I know the most pertinent portions of the syntax and semantics of BLUB, and I like BLUB because of BLURB". (I've complained before about grouping programmers by language.)

Although I'm ranting against factional prejudice, I caution readers not to interpret my opinion as a case of "binary" thinking (i.e. thinking characterized by a simplistic division into only two categories). Some factional prejudices are quite beneficial and practical when kept within reasonable and specific bounds. For example, the past behavior of a company or open source project is a good reason to be wary of its current behavior. A platform's reputation for lackluster efficiency is a worthwhile justification to avoid it unless the platform demonstrates improvement (or the efficiency is sufficient for the task). Evaluate factional prejudice according to its motivation, relevance, and realism.

For businesses at least, technology isn't an accessory or an avenue of personal expression. It serves a purpose beyond the satisfaction of one's ego. The One True Way might not be what's right for the situation and the client.

Friday, June 25, 2010

watch out for a rebase of a merge

So, a while back I tried to publish some detailed foolproof instructions for using named branches and hgsubversion. I've been using Mercurial as a Subversion client for a little while now, and I figured that if I posted my discoveries and practices then others with an Internet connection could benefit as well. After revising the post multiple times, I finally decided to pull it (HA!) from the blog because it clearly wasn't as foolproof as anticipated - as proven by the mishaps of this fool, anyway. At this point I redirect you to the Stack Overflow question "How to handle merges with hgsubversion?" and parrot the advice there: don't bother trying to reconcile named branches to Subversion!  To fork development, clone the repository instead.

(Normally, my personal inclination is toward branches since clones feel like a hack solution. Given that I'm doing the right thing by using a VCS, why should I need to have separate directories for separate lines of development? It's true that environment variables or symbolic links make switching between the two directories fairly easy, but having to do those manipulations outside the VCS then feels like a hack of a hack. More and more frequently I wonder whether I'll settle on git, which has its own well-known set of warts, in the end.)

One of the primary shatter-points of my confidence was a rebase of a merge. This is the situation in which the default branch was merged into the other branch in the past, and the time arrives to rebase the other branch onto the default branch. (During work on a branch that will eventually be ended by a rebase, the better course of action is to keep it updated by rebasing it onto default with --keepbranches rather than by merging in the default branch.) The root problem is that a merge duplicates changes on the default branch by re-applying them to the other branch, so to rebase this merge is to attempt to apply the default branch's past state to its present state, even though the default branch may have changed in numerous ways after that merge!

As you might expect, a rebase of a merge often produces conflicts, and the likely desired resolution is to choose the version in the tip of the destination branch, "local" or "parent1". I can only write "likely" because, as usual with conflict resolution, only the developer's judgment can determine the right mixture whenever significant feature changes are involved in the conflict too.

While all these conflicts can be irritating, hair-pulling may be a more proper response to the instances of no conflict: Mercurial dutifully sends one or more files in the default branch backward in time without comment. I know of two causes of this behavior.
  • After the merge from default into the other branch, a later change in default undoes one of the merged changes, and the rebase of the merge undoes the undoing (reviving the original mistaken change).
  • The merge from default into the other branch adds a new file, a later change in default modifies that file, and the rebase of the merge makes the file's later modifications vanish.

Therefore, after a rebase, it's a good idea to run a diff --stat between default's tip as it was before the rebase and default's tip after the rebase (hg diff --stat -r <old default tip> -r <new default tip>). For any unexpected changes that have no relevance to the feature just rebased, revert those files to the version at the old default tip (hg revert -r <old default tip> <files>) and check in. Constant vigilance!

Monday, June 21, 2010

my turn to explain type covariance and contravariance

The Web has a wealth of resources about type covariance and contravariance, but I often find the approach unsatisfying: either code snippets or abstract-ish theoretical axioms. I prefer an additional, intermediate level of understanding that may be called pragmatic. Pragmatically understood, any pattern or concept isn't solely "standalone" snippets or logical axioms. A learner should be able to connect the concept to his or her prior knowledge and also to the question "When would I use this?" I'll attempt to remedy this shortcoming.

For code reuse, a pivotal issue is which data type substitutions in the code will work. Can code written for type Y also function properly for data of type Z? Well, if the definition of data type Z is strictly "Y's definition plus some other stuff", then yes, since the code can treat Z data as if it were Y data. Z is said to be derived from Y. Z is a stricter data type relative to Y. We then say that Z is covariant to Y but Y is contravariant to Z. Code that acts on Y can also act on Z because Z encompasses Y, but code that acts on Z may not be able to act on Y because the code may rely on the "other stuff" in Z beyond Y's definition. In languages that support it, the creation of a data type that's covariant to an existing type can be easily accomplished through inheritance of the existing type's language artifacts. (When someone uses the language feature of inheritance to create a type whose data can't be treated by code as if it were of the original type, confident code reuse is lost, and confident code reuse is the whole goal behind type covariance/contravariance!)
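
Here's a minimal Java sketch of that substitution (the classes Y, Z, and Reuse are throwaway names of my own, not anything from a real library). Code written against Y accepts Z precisely because Z's definition is Y's definition plus extra members:

    // Z's definition is strictly "Y's definition plus some other stuff".
    class Y {
        String describe() { return "I satisfy Y's definition"; }
    }

    class Z extends Y {
        String extra() { return "other stuff that only Z has"; }
    }

    public class Reuse {
        // Code written for Y relies only on Y's definition...
        static void actOnY(Y data) {
            System.out.println(data.describe());
        }

        public static void main(String[] args) {
            actOnY(new Y());   // works, as written
            actOnY(new Z());   // also works: a Z can stand in for a Y
            // The reverse is unsafe: code written for Z may call extra(),
            // which a plain Y lacks, so a Y cannot stand in for a Z.
        }
    }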

Thus far I've only mentioned the reuse of one piece of code over any data type covariant to the code's original type. The data varies but the code doesn't. A different yet still eminently practical situation is the need for the code to vary and nevertheless join up with other pieces of code without any of the code failing to work (e.g. a callback). What relationship must there be between the data types of the code pieces in order for the whole to work?

All that's necessary to approach this question is to reapply the same principle: a piece of code that works with data type Y will also work with data type Z provided that Z is covariant to Y. Assume an example gap to be filled by varying candidates of code. In this gap, the incoming data is of original type G and the outgoing data is of original type P.

Consider the code that receives the outgoing data. If the candidate code sends data of type P, the receiving code will work simply because it's written for that original type. If the candidate code sends data of a type covariant to (derived or inherited from) P, the receiving code will work because, as stated before, covariant types can "stand in" for the original type. If the candidate code sends data of a type contravariant to P, the receiving code won't work because it may rely on the parts of P's definition that are included in P alone. (In fact, the receiving code should rely on parts that are included in P alone; whenever it doesn't, it should have targeted either a less-strict type or even just an interface instead, because a "snug" data type yields greater flexibility.) So the rule for the candidate code is that it must be written to send data of type P or covariant to P.

Meanwhile, the code that sends the gap's incoming data has committed to sending it as type G. If G is the original type that the candidate code is written to receive, it will work simply because it's written for that type. If a type covariant to G is the original type that the candidate code is written to receive, it won't work because it may rely on the parts of that type that aren't in G. If a type contravariant to G is the original type that the candidate code is written to receive, it will work because G is covariant to it, and once again covariant types are substitutable. So the rule for the candidate code is that it must be written to receive data of type G or contravariant to G.
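
To make the gap concrete, here's a hedged Java sketch. The types G, P, and SubP and the Gap class are illustrative inventions of mine; only java.util.function.Function is a real library type. The wildcard bounds encode exactly the two rules above: a candidate may send P or something covariant to P, and may receive G or something contravariant to G.

    import java.util.function.Function;

    class G { }
    class P { }
    class SubP extends P { }    // covariant to P

    public class Gap {
        // The gap: incoming data of type G goes in, outgoing data of type P comes out.
        // "? super G"  : candidates may be written to receive G or any type contravariant to G.
        // "? extends P": candidates may be written to send P or any type covariant to P.
        static P runGap(Function<? super G, ? extends P> candidate) {
            G incoming = new G();               // the sender has committed to type G
            return candidate.apply(incoming);   // the receiver treats the result as a P
        }

        // A candidate written exactly for the original types.
        static P exact(G g) { return new P(); }

        // A candidate written for a type contravariant to G and a type covariant to P.
        static SubP generous(Object anything) { return new SubP(); }

        public static void main(String[] args) {
            runGap(Gap::exact);      // fine: the original types
            runGap(Gap::generous);   // fine: accepts more than promised, returns no less than promised
            // A candidate that demanded a subtype of G, or that returned a supertype of P,
            // would be rejected by the compiler, matching the two "won't work" cases above.
        }
    }

The same wildcard pattern appears throughout the standard library; Stream.map, for instance, declares its mapper as Function<? super T, ? extends R> for exactly this reason.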

Hence, the rules for covariance and contravariance in various programming languages (delegates in C#, generics in Java or C#) are neither arbitrarily set nor needlessly complex. The point is to pursue greater code generality through careful variance among related data types. Static data types are "promises" that quite literally bind code together. Pieces of code may exceed those promises (legal covariance of return); however, the code may not violate those promises (illegal contravariance of return). Pieces of code may gladly accept what is seen as excessively-generous promises (legal contravariance of parameters); however, no code should expect anything that hasn't been promised (illegal covariance of parameters).
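
One concrete place these promises surface is Java method overriding, where an override may strengthen the return promise but never weaken it. A small hedged illustration, with class names of my own choosing:

    class Animal { }
    class Cat extends Animal { }

    class Shelter {
        // The promise: callers get an Animal back.
        Animal adopt() { return new Animal(); }
    }

    class CatShelter extends Shelter {
        // Legal covariance of return: the override exceeds the promise by returning a Cat,
        // which callers written against Shelter can still treat as an Animal.
        @Override
        Cat adopt() { return new Cat(); }

        // An override declared as "Object adopt()" would not compile:
        // it would violate the promise that callers receive an Animal.
    }

C# expresses the same ideas for delegate compatibility and, as of C# 4, for generic interfaces and delegates marked with the out (covariant) and in (contravariant) keywords.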

Thursday, June 10, 2010

agile's effect on productivity

Someone needs to state the obvious for those who miss it. Agile software development is not a silver bullet for productivity. Agile is not something you "mix in" to yield the same result faster. Applying agile practices to a humongous project won't result in massive software at lightning speed. That's not the aim of agile.

Agile's main effect is a decrease in project latency, not an increase in throughput. Agile development means delivering minimal but user-confirmed business value sooner, rather than delivering maximal but doubtful business value once at the end. The hope is that by going to the trouble of more frequent releases, the software will grow gradually but surely, as actual usage, not guesswork, motivates what the software needs to be. The economy of agile doesn't result in a greater quantity of software features per development time period, but it's certainly intended to result in fewer wasteful software features per development time period.

This shift in perspective affects everybody's expectations, so agile development is more than a set of new habits for developers. Project managers need to break their world-changing ambitions into much smaller chunks. Users need to become more involved. Analysts need to laser-focus their requirement-gathering. Architects and modelers (may) need to recalibrate the balance between resource costs and the pursuit of perfection.

If a plane flight is like a software project, then agile development won't decrease the time it takes to go from Seattle to Boston non-stop. But it will change the non-stop flight into a series of short connecting flights, and by the time you reach Chicago you might realize that you didn't really need to go to Boston in the first place.

Wednesday, June 09, 2010

DDD and intrinsic vs. extrinsic identity

Ask someone for an introduction to domain-driven design, and it will likely include descriptions of "the suggested building blocks". Two of the blocks are known as entities and value objects. The distinction between these two may appear to be pointless and obscure at first, but it's invaluable to a clearheaded and thoughtful model.

The difference turns on the meaning of identity, which philosophers have long pondered and debated, especially in regard to individual humans (i.e. the question "Who am I really?"). The answers fall into two rough categories: intrinsic and extrinsic. Unsurprisingly, the dividing-line can be fuzzy.
  • In the extrinsic category, a thing's identity is tied to sensations and ideas about that thing. This would be like identifying my phone by its shape, color, model, and so on. Based on extrinsic identity, if any of these "extrinsic" propositions about my phone ceased to be true, my phone would therefore no longer have the same identity. The phone would not only appear but be a different phone. Its identity is dependent. When someone meets a new acquaintance, his or her identity is generally rather extrinsic. "I don't know this person, but his name is Fred and he works in Human Resources."
  • The opposite of extrinsic identity is intrinsic identity, which is independent of sensations and ideas about a thing. A thing with intrinsic identity "wears" sensations and ideas but is "itself" regardless. This would be like disassembling my precious phone piece by piece and then continuing to call the pile of pieces my phone. Transformations of all kinds don't change intrinsic identity. When someone has maintained a long-term connection to another person, his or her identity is generally rather intrinsic. "She's changed her residence, weight, politics, and career over the last several years, but she's still Betty to me."
Neither category of philosophical answers is "right", of course, but merely more or less appropriate depending on situational factors like the thing to identify and the goal to meet by distinguishing identity. In DDD, the choice takes on added importance when modeling a domain object's identity.
  • If the object's identity consists of a particular combination of attributes, then its identity is extrinsic. Two objects of the class with identical attribute values are indistinguishable (for the purpose of the domain). These are known as value objects. Each instance is a "value" similar to a number or a string of characters. Since extrinsic identity is by definition fully dependent on a thing's parts, a value object never changes. But someone often creates a new value object using the attribute values of preexisting value objects. This is just like adding two numbers to yield a third number. Modelers and developers can exploit the unalterable nature of value objects for various advantages, such as avoiding the need for storage. Possibly, there could be a highly reusable "mathematics" of the value object in which all methods are "operations" that 1) operate on and return value objects and 2) are completely stateless (free of side effects).
  • Alternatively, a domain object's identity could persist throughout arbitrary transformations, which indicates intrinsic identity. Instances of the class probably have vital "history" and might not represent the same physical domain item despite identical or near-identical attribute values. These are known as entities. Normally a nontrivial entity will aggregate one or more value objects along with a mechanism of unique identification. That mechanism could be a GUID or something else, as long as no two instances can contain the same identifier by accident. Unlike value objects, (long-lived) entities change and correspond more directly to physical domain items, so a storage strategy for the entity and its associated value objects is necessary. (The advice of DDD is to keep the entities' code as purely domain-driven as possible by abstracting the storage details in a repository object.) A brief sketch of both building blocks follows this list.
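
Here is a hedged Java sketch of the two building blocks. The Money and Customer classes, their fields, and their methods are my own illustrative inventions, not anything prescribed by DDD: the value object's identity is exactly its attributes and it never changes, while the entity carries an identifier that survives changes to its attributes.

    import java.math.BigDecimal;
    import java.util.Objects;
    import java.util.UUID;

    // Value object: extrinsic identity. Two instances with equal attributes are interchangeable.
    final class Money {
        private final BigDecimal amount;
        private final String currency;

        Money(BigDecimal amount, String currency) {
            this.amount = amount;
            this.currency = currency;
        }

        // An "operation": it returns a new value object instead of mutating anything.
        Money add(Money other) {
            return new Money(amount.add(other.amount), currency);
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Money)) return false;
            Money m = (Money) o;
            return amount.equals(m.amount) && currency.equals(m.currency);
        }

        @Override
        public int hashCode() { return Objects.hash(amount, currency); }
    }

    // Entity: intrinsic identity. Its attributes may change, but its identifier does not.
    class Customer {
        private final UUID id = UUID.randomUUID();   // or an identifier issued elsewhere
        private String name;
        private Money creditLimit;

        Customer(String name, Money creditLimit) {
            this.name = name;
            this.creditLimit = creditLimit;
        }

        void rename(String newName) { this.name = newName; }   // still the same customer

        @Override
        public boolean equals(Object o) {
            return (o instanceof Customer) && id.equals(((Customer) o).id);
        }

        @Override
        public int hashCode() { return id.hashCode(); }
    }

In storage terms, only Customer would need its own repository and lookup; Money instances can simply be embedded wherever they're used.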

Tuesday, June 08, 2010

a dialogue about abstract meaning through isomorphism

This dialogue discusses some possible objections to meaning through isomorphism for abstractions. It follows a prior dialogue that delved deeper into the counter-claim that understanding of meaning requires more than symbols and syntax and algorithms, which is the point of the Chinese Room argument.

Soulum: When we last met, you alleged that the reduction of meaning to isomorphism is sufficient for the abstract parts of human mental life. Some examples of these parts are logic and mathematics, ethics and religion, introspection and philosophy. How can meaning of that level of abstraction arise from trifling "isomorphisms" with human experience?
Isolder: Would you admit that the words, and in some cases formalized symbols, that communicate abstract meanings have definitions? A definition is an excellent specimen of isomorphism. It expresses relations between the defined word and other words. Since the listener then knows which parts of his or her knowledge match up to the words in the definition, the listener can apply the corresponding relations to those parts of knowledge and thereby match up the defined word to another part of his or her knowledge. And at that point the definition has succeeded in its aim of enlarging the language-knowledge isomorphism. As I tried to say before, all that's necessary is that the isomorphisms be cumulative. The level of abstraction is incidental.
Soulum: Aren't you mistaking a symbol's definition in words for the symbol's actual and original meaning? It's undeniable that a symbol's definition is often communicated in terms of other symbols, but you're not addressing the fundamental difference of abstract meaning, which is the abstract symbol's creation. Non-abstract symbol creation is nothing more in theory than slapping a name on something in plain view. Making new combinations of existing abstractions is not much harder; tell me two numbers and I can mechanically produce a third. But from where do these numbers come? What is the "isomorphism" for the number one? The abstractions I speak of are more or less essential to human lifestyles, so the abstractions must be meaningful and the meaning must go beyond mere definitions. Your answer "Anything with a definition therefore has an isomorphism-based meaning" is far too broad for my taste. I could play with words all day long, assigning new definitions right and left, but my newly-defined words could still be self-evidently fictional and useless.
Isolder: Of course. I asked about definitions in order to show that abstract meaning is communicated via isomorphisms. That doesn't imply that every possible definition communicates a valid isomorphism to reality. Nor does it imply that abstractions are created by purely chaotic symbol combinations. Rather, a human forms a new abstraction by isomorphic mental operations: comparisons, extensions, generalizations, analyses. Each of these mental operations might result or not in a new abstraction of significant meaning. Isomorphisms also function as the verification of the new abstraction's significant meaning.
Soulum: It's abundantly clear that you have a one-track mind and its track is isomorphism. Instead of verbally treating every issue as a nail for your isomorphism hammer, why not put it to a specific test by returning to my question about the isomorphisms for the number one?
Isolder: Fine. The number one is isomorphic to a generalization of human experiences, in the category of quantities. In their lives, probably motivated by social interactions such as trades, humans developed a generalized way to specify quantities. It would've been greatly inefficient and bothersome to use individual symbols for "one stone", "two stones", "three stones". And once the quantity symbol had been broken apart from the object symbol, it would've been similarly tedious to use individualized sets of quantity symbols for each object symbol; hence the symbol for "one" stone could be reused as the symbol for "one" feather. The mental isomorphism of quantity between these situations thus became a linguistic isomorphism between the situations. The number one is abstract and mathematicians throughout the centuries have assigned it logically rigorous definitions in terms of other numbers and functions, but its isomorphic connection to real scenarios ensures that its meaning is much more relevant than the free wordplay you mentioned a moment ago.
Soulum: You're describing hypothetical historical events, and you give the impression that the number one is dependent on language usage. I believe that you continue to be purposely obtuse about the essential difference of my point of view. While one is useful in many contexts, its existence is independent and verifiable without reference to fleeting bits of matter. Humans discovered one. Isn't it an astounding coincidence that so many cultures happened to include symbols for one?
Isolder: Is it also an astounding coincidence that so many cultures happened to include symbols for any other concept? You may as well be impressed by the preponderance of circular wheels. We shouldn't be surprised by the independent invention of useful abstractions; think of how many times separate mathematicians have simultaneously reached the same conclusion but stated it using distinct words and symbols that are isomorphic to one another. Moreover, note the history of the number zero, the number "e", the number "i". Cultures got along fine without these luxuries for centuries (although practical "pi" has been around for a while). The pace of mathematical invention sped up when it started to become an end in itself. Yet even "pure" mathematical work is somewhat mythical. Mathematics has always been motivated, whether to prove abstruse conjectures or solve engineering problems. One of the spurs for growth was the need for more sophisticated tools just to describe scientific discoveries. Can a student know physics without knowing calculus, tensors, and vector spaces? You could say that a physics student with a "conceptual" understanding only has approximate mental isomorphisms for the movements of reality.
Soulum: Again, the specific human history of the abstractions is a digression. No matter the history, no matter the degree of usefulness, no matter the isomorphic resemblance to anything whatsoever, these abstractions are provably true forever in an intellectually appealing fashion. A quintessential blank-slate hermit could conceive of it all; some artists and writers and mathematicians and moralists have in fact created their masterpieces in almost-total isolation. Any time someone communicates one of these abstractions to me, I don't need to "just accept it" in the same way I must when someone communicates the result of an experiment that I don't have the resources to duplicate. I can prove or disprove the abstraction using the capabilities of my mind, guided perhaps by the sort of finely-honed intuition that escapes artificial imitation.
Isolder: The all-knowing hermit you speak of certainly has supernatural intelligence and creativity, not to mention long life! Setting the hermit aside, I readily acknowledge that individuals can figure out the validity of abstractions for themselves. And you may groan when I assert how they do it: isomorphism. The processes of logic and rigorous proof are repeated forms of isomorphism. "All men are mortal" is obviously a relation between the two ideas "all men" and "mortal". "Aristotle is a man" matches "Aristotle" to "all men". If the match is a true isomorphism then the relation is preserved and "Aristotle is mortal". However, I seriously question the amount of emphasis you place on personal testing of abstractions. Wouldn't you concede that for the most part humans really do "just accept it" in the case of abstractions? A teacher can flatly preach that zero multiplied by another number is always zero, or the teacher can list the other multiples of a number to show that according to the pattern the zeroth multiple "should" be zero. What is the likelihood of the teacher using "algebraic ring structure" to prove that, due to distributivity, the product of the additive identity and any other element is the additive identity?
Soulum: I don't maintain that all abstractions are taught with proofs but only that whoever wishes to perform a check on the abstraction will always get the same answer. Call the written justifications "isomorphisms" if you insist, but in any case the abstractions have sublime logical certainty.
Isolder: Indeed, some abstract questions always get the same answer, and some others like the "purpose of existence" seldom do. I'd go so far as to say that the long-term success of an abstraction stems directly from its degrees of determinism, verifiability, and self-consistency. Without these properties, the feeling of sublime logical certainty isn't worth a lot. Whenever someone learns of an abstraction, the abstraction should come with an isomorphism-based checking/consistency procedure to assure that the abstraction is applied correctly. In short, once the abstraction is set loose from isomorphisms with reality, its meaningfulness can only reside in isomorphisms to yet other abstractions. It's no accident that there's one "right" answer for many abstractions; chances are, the abstractions were designed to be that way. The best abstractions contain few gaps between the internal isomorphisms.
Soulum: I disagree. "Best" is subjective and depends on the particular abstraction's goal. Subtlety and difficulty, interpretation and symbolism, are hallmarks of good fiction, for instance. Where are the isomorphisms there?
Isolder: The isomorphisms are whatever the fiction draws out of its audience. Besides ambiguity, good fiction has relatability, which is why outside of its intended audience it can be unpopular or mystifying. An isomorphism doesn't need to be verbalized to be active. Emotional "resonance" is in that category. I'd argue that some of the most behaviorally-powerful isomorphisms are either genetic or unintentionally imprinted from one generation of a culture to the next. Given the level of instinctive reactions seen in animals, ranging all the way from fish to primates, the presence of passionate and involuntary impulses in humans is an evolutionary cliché. Human brain complexity can eclipse the simplicity of the impulses, but humans continue to sense the primitive isomorphisms behind those impulses. For example, facial expressions and body movements are hugely important to the overall effect of a conversational "connection" because the isomorphisms are largely subconscious.
Soulum: To define "isomorphism" that broadly is a few steps too far, in my opinion. It'd be futile to advance other counterexamples, which you would duly treat as fuel for further contrived isomorphisms. It seems that I can't convince you that the soul fathoms meanings too richly intricate for your crude isomorphism-engine.
Isolder: And I can't convince you that, no matter how deep a perceived meaning is, isomorphism is ultimately the manner in which matter exhibits it. No soul is required, but simply a highly-suitable medium for adaptable isomorphisms such as the human brain.

Friday, June 04, 2010

a dialogue about the Chinese Room and meaning through isomorphism

I could easily envision objections to the ideas laid out in the longish post about meaning through isomorphism. To address them, I'll rip a page from Hofstadter's playbook and present a dialogue. (Yes, I'm fully aware that philosophical dialogues are not in any sense original to Hofstadter.)

Soulum: The concept of meaning through isomorphism is inadequate. For instance, you miss the whole point of the Chinese Room argument when you straightforwardly conclude that the "Chinese Turing Test algorithm" understands Chinese rather than the person! You're supposed to realize that since the algorithm can be executed without understanding Chinese, it's nonsense to equate the algorithm with understanding Chinese. It's a reductio ad absurdum of the whole idea of a valid Turing Test, because understanding is more than a symbol-shuffling algorithm.
Isolder: You're right, perhaps I miss the whole point. Instead, let's consider a different situation in order to illuminate precisely in what ways understanding is more than a symbol-shuffling algorithm. Say we have two intelligences trying to learn Chinese for the first time, one of whom is human and the other is non-human. As you just said, the human is capable of understanding but the non-human is only capable of "symbol manipulation". The Chinese teacher is talented enough to communicate to the human in the natural language he or she knows already and to the non-human via all the symbols it knows already. To the human, the teacher says something like "X is the Chinese symbol for Y". To the non-human, the teacher types something like "X := Y". After each lesson, how should the teacher test for the students' understanding of Chinese?
Soulum: Isn't it obvious to anyone who's taken a foreign language course? The test could include a wide variety of strategies. The test could ask questions in the known language to prompt for answers in Chinese; it could have statements in the known language to be translated into Chinese; it could have Chinese symbols to be translated into the known language; it could have an open-ended question in the known language but to be answered using only Chinese.
Isolder: OK. Assume both students pass with perfect scores. Now since they both passed the test, given that the human learned Chinese through the bridge of a natural language while the non-human learned Chinese through the bridge of "mere symbols", is it fair to assert that after the lesson, the human understands Chinese but the non-human doesn't? If not, how is the human's learning different from the non-human's learning?
Soulum: Simply put, the human has awareness of the world and the human condition. If the teacher asks "What is the symbol for the substance in which people swim?" then the human can reply with the symbol for "water" but the non-human can only shuffle around its symbols for "people" and "swim" and then output "insufficient data". The non-human can't run some parsing rules to start with "people" and "swim" and produce "water".
Isolder: But by asking about swimming, you've started assuming knowledge that the non-human doesn't have. You're inserting your bias into the test questions. It's like asking a typical American what sound a kookaburra makes. You may as well flip the bias around and ask both a human and a tax-return program what the cutoff is for the alternative minimum tax.
Soulum: Point taken. However, you're still missing the thrust of my reasoning. The human's understanding is always superior to that of the non-human because the human has experiences. The human can know the meanings of the symbols for "yellow" and "gold" and thereby synthesize the statement "gold is yellow" even though he or she might never have been told that gold is yellow.
Isolder: I'll gladly concede that experiences give added meaning to symbols. Yet the task you describe is no different than the others. I happen to have a handy program on my computer for working with images. One of its functions is the ability to select all regions of an image that have a specified color. If I load an image of gold into my program and instruct it to select all yellow regions of the image, won't its selections include all the gold pieces? By doing so, hasn't the program figured out that gold is yellow?
Soulum: Regardless, it won't pass a Turing Test to that effect. Try typing the sentence "What substance in the image have you selected?" and see if the program can do what a preschooler can.
Isolder: You're confusing the issue. The program's designed to be neither an image recognizer nor a natural-language processing program. All the experiences you describe that yield greater meaning to human words are in principle just more information. As information, the experiences could in principle be fed into a non-human like so many punch-cards. Human brains happen to have input devices in the form of senses, but the sensations' same information could be conveyed via any number of isomorphisms. And there's no obstacle to the isomorphism being digital rather than analog and binary rather than decimal. Granted, one second of total human experience is truly a fire-hose deluge of information, but like a human the program could use heuristics to discard much of it.
Soulum: I'm not confusing anything. Fundamentally speaking, humans don't discard but creatively distill their many experiences, what you called the fire-hose of information, into relevant knowledge. In contrast, your non-human symbol-shufflers, your programs, cannot accomplish any task other than the original, which was laid out by the programmer. And some smart guys have proven that the programs are even unable to check their own work!
Isolder: You're on pretty shaky ground when you claim that humans are mentally flexible and therefore aren't "designed". On what basis do humans distill the information into knowledge? According to common sense, the distilled information is the information that matters, but why does some information matter and some doesn't? In relatively primitive terms, information matters to humans if it aids in the avoidance of pain and the pursuit of sustenance and shelter and companionship. In short, survival. Evolution is the inherently-pragmatic "programmer of humans". Humans process information differently than non-humans due to different "goals" and different "programmers". Thus abstract symbol manipulation is natural to non-humans and unnatural to humans. The effort to invent non-humans that can mimic activities like vision and autonomous movement continues to achieve greater success over time, but frankly evolution has had much more time and chances. It's difficult to catch up.
Soulum: Statements like those frustrate me during conversations with people like you. It's too simplistic to analyze human behavior as if it were animal behavior. From the time of prehistory, humans developed enormously complex cultures of societies, beliefs, languages, artworks, and technologies. Sure, the young aren't fully educated and integrated until after many years, but by the time they've grown they're able to contribute through countless means. At its peak, human intellect far surpasses that of other animals.
Isolder: Indeed, humans have done incredibly diverse things. The capture of meaning through isomorphism is a prime ingredient. Humans can experience the emotional ups and downs within a fictional story. They have a "suspension of disbelief" by which the story triggers isomorphic reactions. An unreal protagonist yields mental meanings that are real. The isomorphisms of complex culture extend "humanity" to surprising symbolic ends. At the same time, humans perform their isomorphic feats of understanding by stacking one isomorphism on another - turtles all the way down. When someone comments that "no one understands quantum mechanics", he or she probably means that quantum mechanics has an "isomorphism gap" of insufficient metaphors. There's a turtle missing, and as a result humans experience a distressing cognitive void.
Soulum: You speak as if knowledge were constructed from the bottom-up. Isn't it self-evident that some truths are correct despite a lack of proof? I haven't inferred my morality. I haven't needed to experiment to confirm that I exist. The compulsion of an airtight logical deduction surely isn't a physical force. Your isomorphisms notwithstanding, I know I have a soul of reason and through it I can make contact with a solid realm that's independent of whatever happens.
Isolder: Hmm. I believe there are isomorphisms behind those meanings, too, but I'll leave that discussion for later.