Wednesday, August 31, 2011

the absent mind

Recently I watched Memento again. I've never owned it, but I can watch it via Netflix. No later viewing compares to the first, because the surprise is gone. Nevertheless, the progression of scenes in reverse chronological order is as appropriately disorienting as ever.

This time, though, I reacted in a new way. Like an untrained reader who self-diagnoses every disease in the book, I identified with the main character. In my case, the similarity has nothing to do with the inability to form new memories. My memory isn't problematic; judging by academic accomplishment, it's above average. No, the similarity is the need to leave reminders for myself. While watching the movie, I distinctly recall thinking, "Doesn't everyone do that, but on a smaller scale?"

Age can't be the cause. I've been "forgetful" in this way since childhood. The usual driver seems to be having an "absent mind". Some people mock the truism, "Wherever you go, there you are." Those must be the people for whom it isn't a challenge. My thoughts are noisy. Emptying my mind is difficult because sometimes it churns and overflows without prodding. It's as if I'm walking through dreamlike surroundings that are far more engrossing than my real environment. External occurrences can't be guaranteed to leave much of an impact during a deep swim in the brainwaves. Of all activities done to excess, theorizing has to be one of the most unobtrusive to onlookers. On occasion, when the train of thought jumps tracks, I'm shocked by the time that's passed. The questions "Where am I?" and "What is the time and date?" are generally unimportant. Depending on which ideas are consuming you, the location of your body and the continual ticking of seconds are totally irrelevant.

Calling it attention deficit disorder would be inaccurate. It's closer to having too much focus or fixation than having too little. And it's also not autism. I tend to have trouble interacting naturally in casual conversation, yet the obstacle is distraction rather than disinterest (I make eye contact, although some have commented that I appear to be looking "through" the speaker). My sense is that an absent mind of this low severity doesn't qualify at all as a disability. It's a disposition, a personality quirk. As a child, it manifested in toothbrushes left behind after a sleepover or gloves left behind after winter travel from one building's interior into another. The tips for coping aren't too complicated:
  • If something needs to be remembered, but at some point after the immediate moment, make a reminder soon. Otherwise there's the possibility that an intriguing stream of thought will resurface before too long and flood everything else. Don't presume to remember to make a reminder...
  • Reminders don't need to be lengthy, detailed, or time-consuming. Unlike Memento's main character, the hang-up isn't a completely missing memory but the high probability that the intact memory won't return to mind at the proper time. All that's necessary is an unambiguous trigger. For instance, the infamous trick of tying a string around a finger is too uninformative, while a note consisting of one significant word could work. One mustn't then misplace or ignore the note, of course. It should be located within normal view.
  • Memento emphasizes routine and the ease of learning repeated subconscious reactions. That fits with my experience. Habits demand less concentration, which is a perpetually scarce resource. Breaking a habit is healthy and invigorating, but special effort is called for. To succeed, novel actions require greater levels of deliberation and care. More present, less absent.
  • As in all things, simple strategies shouldn't be overlooked. Startling sensations sharply disrupt introspection. Increased alertness to the world is an automatic side effect of fresh perceptions. Simply keeping your eyes moving might be adequate. Administering pinches is a second tactic, as long as it isn't overdone.
  • Reminders work well for single tasks. Habits work well for routine or periodic tasks. A large list of irregular tasks, perhaps to be completed in the upcoming week, is better attacked by outlining a comprehensive schedule. Without explicit scheduling, there's considerable risk that awareness of the tasks will weaken, only to revive at an inopportune time. Or that one prominent task will monopolize available energy and mask the rest. Either way, an all-encompassing arrangement of what to do and when is, strangely enough, less of a chore than trying to repeatedly rein in a stampeding brain and urge it toward one item at a time. I compose the whole schedule as a draft in progress. Then I think of it whenever I'm pondering or visualizing the time periods in the schedule. Writing or typing it is purely optional (memory isn't the problem; engulfment by my own reflections is the problem). As for an event more than two weeks away, those of us with absent minds act like everyone else. We enter it on calendars.
  • As illustrated by Memento, anticipation is a highly sophisticated approach. Self-knowledge enables self-manipulation. Small adjustments, like temporarily moving objects to atypical positions, can prompt corresponding adjustments in behavior. "Why is that there?...oh, to ensure I don't leave it behind." Unsubtle clues either illuminate the desired path or erect a barrier for the inadvisable path. My future self is likely to be all right with or without the hints, but the precaution would help in the eventuality of unforeseen internal distractions (e.g. trying to recollect the name of the hidden Elvish kingdom in The Silmarillion, which was one of the last to fall to Morgoth).
  • Selected undertakings force full engagement, regardless of contemplation's pull. With practice, it's easier to adapt. For example, I've gradually developed my skills at driving and athletics. At work, I'm more conscientious about not drifting off during meetings, and I ensure that all my assignments are written down.

I don't want to give the wrong impression. I'm not overly burdened by this. A bent toward abstract preoccupation is part of who I am. I realize it's not comparable to serious conditions like dementia, which I wouldn't dare to minimize.

I also recognize that a sound albeit absent mind comes in varying degrees. I've forgotten my keys when leaving the house only once. A while back someone notified me that I was wearing two mismatched shoes. I have a desk drawer that contains a stack of papers with the cryptic markings that result from pursuing a particular flash of inspiration over a few days. It could be worse. Apparently, others have been known to go out in public in partial undress, and their mysterious notes fill filing cabinets...

Tuesday, August 30, 2011

sure, I'll use your free storage

I just noticed that my years-old, unused Hotmail account has the side effect of a "SkyDrive": 25GB of storage. The per-file limit of 50MB is adequate for most personal data files. Uh, thanks, I guess.

Saturday, August 27, 2011

truth contingent on its effects


It is difficult to get a man to understand something when his salary depends upon his not understanding it. -Upton Sinclair


Depressing though it may be, the above quote is consistent with many people's experiences in attempting to coax someone to a different opinion or just to convince someone to admit unwelcome facts. If humans reached truth solely by accumulating and cross-checking knowledge obtained from trustworthy sources, as some philosophies presume, then the truth of a statement couldn't be so contingent on the effects of believing the statement. Philosophical Pragmatism is singularly unsurprised, however. Since judgment is an ingredient in perception, biased judgment clouds perception. Seeing things "as they are" in practice means seeing things "as I see them". An isolated thing has no meaning. Brains compute meaning by laying isolated things side by side.

In the Pragmatist model, humans observe, reach conclusions, form plans based on those conclusions to reach goals, and then execute the plans. But this indivisible process can surely operate backward as well. Meaning, I don't want to act, therefore I doubt the conclusions that would force me to act, therefore I doubt the observations that would force those conclusions! The longer that a human clings to a "truth", the longer that a human selectively collects evidence in favor of it, and the stronger it becomes, by the human's design.

I'll end by noting a corollary of Sinclair's quote. It's difficult to get a politician to understand something when their election depends on their not understanding it. Basic accounting, biology, and climatology are notable examples.

Friday, August 26, 2011

I wish

Part of acting as a mature adult in a complicated human society is to express statements that are half-true at most. Sincerity isn't a prerequisite. To avoid offense and thereby smooth social interaction, signaling an effort to be considerate is more important than unconditional agreement with the other's ideas on a divisive topic.

Nevertheless, I bristle at one kind of diplomacy that irreligious believers offer: "I wish." The words vary, but the constant is an empathetic disconnect in which emotional preferences are for religious rather than natural notions. "I wish your god existed." "I wish human souls lived forever." "I wish the universe were simple."

My irritation comes from a personal distaste for these sweet wishes. Yes, it may be true that I wish either that reality were more congenial to me or that I could more easily manipulate it to be so. The first is utopia and the second is magic. Of course these wishes are pleasing to compute; that's the whole rationale for a wish. How reassuring is it for the irreligious to admit the obvious, that they too would like human existence to not be so difficult?

A further problem is that courageous advocacy for fictional paradises will fall apart under follow-up questions. What if the hypothetical god acted like _____? What if living forever included harsh judgment for _____?  What if a simple universe implied the impossibility of ______? Fully outlined proposals of alternatives might not be as enticing as the one-paragraph summary...

Tuesday, August 23, 2011

git's index is more than a scratchpad for new commits

Someone relatively inexperienced with git could develop a mistaken impression about the index. After referring to the isolated commands on a quick-reference guide or in a "phrasebook" that shows git equivalents of other VCS commands, the learner might, with good reason, start to think of the index as a scratchpad for the next commit. The most common tasks are consistent with that concept.

However, this impression is limiting. More accurately viewed, the index is git's entire "current" view of the filesystem. Commits are just saved git views of the filesystem. Files that the user has added, removed, modified, renamed, etc. aren't included in git's view of the filesystem until the user says so, with "git add" for example. Except before the very first commit, the index is unlikely to ever be empty. It isn't truly a scratchpad, then. When checking out a commit, git changes its current view of the filesystem to match that commit; therefore it changes the index. Through checkouts, history can be used to populate git's view of the filesystem. Through adds, the actual filesystem can be used to populate git's view of the filesystem. Through commits, git's view of the filesystem can be stored for future reference as a descendant of HEAD.
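
Here's a minimal sketch of those three routes into the index (the branch and file names are hypothetical):

  git checkout topic         # index now matches the tip commit of "topic"
  git add notes.txt          # index now includes notes.txt as it exists in the working tree
  git commit -m "add notes"  # the index snapshot is saved as a new commit, a child of HEAD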

Without this understanding, usage of "git reset" is infamous for causing confusion. With it, the confusion is lessened. A reset command that changes the index, which happens by default (--mixed) or with option --hard, is like a checkout in that it changes git's view to match the passed commit. (Of course the reset also moves the branch ref and HEAD, i.e. the future parent of the next commit.) A reset command that doesn't change the index, which happens with option --soft, keeps git's "view" the same as if it remained at the old commit. A user who wanted to collapse all of the changes on a branch into a single commit could checkout that branch, git reset --soft to the branch ancestor, and then commit. Depending on the desired effect, merge --squash or rebase --interactive might be more appropriate, though.
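
To make that squash recipe concrete, here's a minimal sketch (assuming a branch named "feature" that grew out of "master"):

  git checkout feature
  git reset --soft master      # move the feature ref and HEAD to master's commit;
                               # the index still holds the old feature tip's snapshot
  git commit -m "feature, squashed into one commit"

After the reset, "git status" reports every accumulated change as staged, because git's view (the index) still reflects the old tip while HEAD points at the ancestor.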

Post-Script: Since this is aimed at git newcomers, I should mention: before trying to be too fancy with resets, become close friends with "reflog" and "stash".
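
For example, here's a minimal reflog rescue after an overzealous reset (the commit counts are hypothetical):

  git reset --hard HEAD~3      # oops: threw away three commits
  git reflog                   # lists every recent position of HEAD
  git reset --hard HEAD@{1}    # return HEAD to where it was one move ago

And "git stash" shelves uncommitted changes before any risky maneuver, to be reapplied later with "git stash pop".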

Post-Script The Second: Drat. The Pro Git blog addressed the same general topic, but organized more directly around "reset". And with attractive pictures. And a great reference table at the bottom.

Tuesday, August 16, 2011

why it isn't done yet

Modifying decrepit C code ain't like dusting crops, boy! Without precise calculations we could fly right through a star or bounce too close to a supernova, and that'd end your trip real quick, wouldn't it.

Saturday, August 13, 2011

omniduction breakdown II

To lampoon the use of omniduction...

Inviolable Proposition 1: The Constitution is a sublime and superb document, and if interpreted in a particular way (not necessarily the same way that judges have), it would cure society's ills.
Inviolable Proposition 2: Political compromise is a despicable, cowardly act that yields terrible results. Staring contests are better.
Historical Fact 1: The Constitution is packed with political compromises.

SYSTEM ERROR!

the dash hole principle

The cigarette lighter receptacle has an amusing name. In my automobile and many others that I've seen, the present form isn't actually usable for lighting cigarettes. Now it's a round hole in the dashboard with a cover that's labeled as a power outlet. Over time, cigarette lighter receptacles turned into dash holes. The users of an object emphasized its secondary applications until the object itself dropped its primary application. It changed meaning through inventive usage.

Software users can be expected to act the same. Software developers should accept that the users, acting like humans, will adapt by introducing their own concepts and assumptions to a "finished" project. As DDD (domain-driven design) advises, the key is their language. When they speak about the software, and therefore the underlying design or data model, their words throw attention onto their interpretation of the "problem domain". They might describe data groups/categories and encode their evolving understanding in rigid entries, like attaching "special" semantics to product identifiers that start with "Q". They might take several hours to run a series of preexisting automated reports, stuff the conglomerated figures into a spreadsheet, and then generate a chart - additional work which a computer could accomplish in a tenth of the time.

The point is, software in the hands (and brains) of users can easily become a dash hole: an original design that came to be viewed much differently in practice. Developers who don't meet the needs of users will find their software bypassed by manual workarounds as time goes on. In some cases, this may be a good approach. Some changes in usage just don't justify substantial software modifications. However, to state the obvious, not everyone is a good software analyst. Ad hoc solutions, enforced not by the software but by scores of unwritten rules, are prone to causing data duplication due to lack of normalization, chaos due to employee turnover or mere human frailty, and tediousness due to not thinking thoroughly about the whole process.

Dash holes function as adequate power outlets. But imagine if irritating dash holes could've been replaced with something designed to serve that purpose.

Thursday, August 11, 2011

irreligiosity does not imply apathy

A typical response to anything said or done by the irreligious is, "Why? If by your own admission religious ideas are irrelevant to you, then what are you accomplishing?" Let me list a few...
  • Public expression and inclusion of irreligious beliefs deserve as much protection and accommodation as religious beliefs. Irreligiosity is of course really a broad classification, so the specific form and content of it varies greatly. Just as there cannot be a single universally recognized symbol for religiosity, there cannot be one for irreligiosity. I like a stylized atomic diagram as a symbol to indicate philosophical materialism ("disciples of Democritus!"), but presumably others of less strict views might prefer alternative irreligious symbols. (A critic of irreligiosity would probably make the facetious suggestion of a "No symbol".) In any case, public exposure of irreligious beliefs does matter. Acknowledgment and tolerance of presentations of a belief reinforce the freedom for it to exist in a society. Disqualifying the mere exposure of a belief has a corresponding intimidation factor against potential believers. Removing a belief from discussion also gives the false impression that it doesn't exist or barely exists. Religious believers may expect to receive divine rewards for their publicity efforts, but secondarily they feel mundane emotional satisfaction simply for displaying the "truth" to society. Irreligious believers have purely the latter as motivation; nevertheless, their self-esteem too is boosted by feelings of belief "validation" regardless of how perfunctory it may be.
  • Irreligious encouragement of anti-conversions may prompt the taunt, "What are you trying to do, save the lost from going to heaven?" The taunt is well-aimed in that the self-consistent irreligious definitely can't be motivated by impossible consequences in a non-existent afterlife. And it's also true that they can't be logically motivated by spiteful envy of a religious individual's bus ticket to heaven. Still, they may wish sincerely and unselfishly for anti-conversions in order that more humans spend smaller proportions of their limited lifespans in accordance with fallacies. Unlike the concept of humans who continue on in some shape for all time, humans composed of atoms have finite time which shouldn't be wasted or predicated on false hopes. Pascal's Wager contains the idea that a religious life ultimately disproved is no loss relative to an irreligious life ultimately proved. Wrong. Religiosity exacts non-zero tangible and intangible costs, which vary widely by belief system, and a singular life implies that the decision to pay those costs can never be recouped. Anti-conversions enable someone to expend the rest of their time as they choose. No guarantees apply to the amount of time left, either.
  • Some may opine that irreligious notions are too ill-defined, or too defined by negatives, to have effects like religious notions do: "Acting on behalf of 'no religion' makes no sense; there's nothing of substance to debate or achieve!" In some situations, that could be correct. Apart from the definition "uncommitted to a religion", general irreligiosity doesn't have unifying creeds and ideals. It does have a unifying cause, though: blunting or reversing the detrimental outcomes of religiosity on society. I don't intend to claim that religiosity is uniformly awful. I mean that the irreligious are likely to try to counteract destructive repercussions that come to their attention. Assuming that religious notions provide plans of action, irreligious notions could possibly provide corresponding plans of opposition. For instance, attempting to insinuate dogma will provoke strong irreligious reaction. Attempting to relieve poverty won't. (To the contrary, the irreligious who care just as much might successfully ally against this common enemy.)

Wednesday, August 10, 2011

loopier and loopier

Aware that truth doesn't arrive gift-wrapped on the doorstep, a pragmatic thinker should be willing to consider ideas from any source. In particular, sources of false ideas can be more useful than anticipated. The source could have true ideas intermixed, even if infrequently. False ideas themselves could be stepping-stones or flawed clues to the real truth. Still more subtly, ideas could be literally false and yet "true" through hidden correspondences to nuggets of truth.

For instance, the concept of the human mind, of a nebulous decision-maker that mysteriously controls the brain/body (not vice-versa), is a handy fiction. The self's "mind" provides the setting for an otherwise gaping hole in the categorization of human experience: the locale of subjective mental phenomena. These are actually real in the sense of being real effects of the real activity of the self's real brain. But from the self's perspective, which naturally doesn't include moment-by-moment observation of the originating brain, orienting ethereal thoughts in a dual mind domain is a tidy solution; when one "sees" things without eyes or "hears" sounds without ears, some explanation is warranted, no matter how far-fetched!

At a more advanced level of abstraction, a deepest ghostly "soul", which is distinct from the mind, scrutinizes and directs the gamut of human existence. The mind contemplates and manipulates objective items, but the theorized soul in turn contemplates and manipulates the mind. It thinks about thoughts. Events in the mind, as well as objective reality, are the raw material for the soul. Just as the idea of a mind is constructed to answer the fundamental question, "What and where are subjective mental phenomena?", the idea of the soul is constructed to answer the fundamental question, "What observes subjective mental phenomena?"

Besides having explanatory value, souls are instrumental in behavior modification. Specifically, all the many procedures for "transformation" recommend the triumph of the soul over the mind, although "soul" may not be explicit in the procedures' language. The sign is the treatment of the mind as an object rather than the subject: "calm your mind", "analyze your motivations", "recognize your negatively-charged aura". According to their accounts, transformed humans purposely change their minds. Afterward they cease their past actions and habits, because their new minds are incompatible. They will say that the old inclinations return occasionally, but each one is rejected and ignored not long after inception.

Such tales pose no problem for someone who believes minds and souls to be factual (my position less than six years ago). That explanation isn't an option within the context of the belief that minds and souls are convenient human inventions for meeting some needs. Regardless of the deceptiveness of the impressions described in the stories, something must be behind the honest storytellers' impressions. Since everything occurs in the confines of one brain, the clear indication is that the illusion of hypothetical entity Q observing hypothetical entity M arises out of an internal brain loop. That is, brain network Q interacts with brain network M. Activation of network M somewhat indirectly activates network Q.

Therefore, in effect human culture's encouragement of the dominance of the "soul" is encouragement of more sophisticated brain loops. Deliberate introspection ("meditation") is a loop in which thoughts appear, enter short-term memory, and then undergo processing. It requires training precisely due to its essential parallelism. Paying too much attention to momentary thoughts interrupts the arrival of new thoughts. Paying too little attention to momentary thoughts fails to accomplish the overall goal of "detachment", of treating thoughts subsequently as independent mental objects. Unsurprisingly, practitioners who are working to set up the brain loop benefit from periodically returning their attention to their breathing. Breathing is a sensation whose inherent rhythm aids in coordinating the brain network that acts as the "observer soul" and the brain networks that act as the "observed mind". (Reminiscent of the synchronizing "clock ticks" of a computer processor.)

Similar loops constitute the enhanced impulse control of the "transformed" human. Someone whose consciousness frequently returns to a moral standard (and/or an omniscient judge) will incidentally discover whatever shortcomings were in progress immediately before the return. Or, after spending extended time associating unpleasant reactions with "evil" impulses, a conditioned response of hesitation starts to accompany the impulses in the future. Alternatively an association could be built up between a strong desire for a positive aim and impulses consistent with that aim. At the extreme, loops could be so ingrained, sensitive, and efficient that some parts of the brain virtually prevent other parts from intruding into decisions under normal circumstances. In any case, the human brain is amazing in its loopy capacity to react to nerve signals that originate from "inside" as well as "outside".

Friday, August 05, 2011

source code comments and evolution

In judging whether the source code of a computer program has been copied, similarity in the two programs' effects isn't conclusive evidence. As long as the programs were both written to accomplish similar purposes or solve similar problems, it's reasonable for the two to behave similarly. The base strategies used by the programs could very well turn out to be not that different at all. When a particular mathematical formula yields the right quantity, other formulas would probably need to be alike in order to also yield the right quantity. (This is why some argue that patents on software, as opposed to copyright of the source code, are questionable and/or unenforceable.)

Although it's not hard to believe that two programs could be created without copying and yet have a resemblance, it stretches credibility to allege that the source code of the two independently ended up the same. Human-friendly source code allows for a variety of choices that fall under the amorphous category of "style". Names, extra spacing, units of organization, and so forth distinguish the source code and therefore the authorship of two independently-written programs. Still more idiosyncratic are the comments in the source code, which are completely extraneous lines of text that explain the code to readers - including the writer, who will undoubtedly need to return to the code later. It's improbable to see two sets of source code with identical details, especially when those details serve no function in the operation of the actual program! One program writer could have used the name "item_count" and the other "number_of_items". One could have included the source code comment, "prevent division by zero", and the other, "avoid a zero denominator". The great freedom that's possible for nonfunctional features of the source code also makes coincidences too unlikely to consider. If these features match, the confident conclusion is that the source code is a copy.
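
As an illustration, here are two hypothetical shell fragments (invented for this post, assuming an input file named items.txt) that behave identically yet could never be mistaken for copies:

  # writer A
  total=100
  item_count=$(wc -l < items.txt)
  # prevent division by zero
  [ "$item_count" -eq 0 ] && item_count=1
  echo $((total / item_count))

  # writer B
  sum=100
  number_of_items=`grep -c '' items.txt`
  # avoid a zero denominator
  if test "$number_of_items" = 0; then number_of_items=1; fi
  echo `expr $sum / $number_of_items`

The names, the comments, and even the idioms for counting and arithmetic all differ. A match on every one of these free choices would be damning.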

DNA is the metaphorical source code of organisms. And like its metaphor, DNA has well-known nonfunctional ("noncoding", perhaps "junk") elements, too. Observations of such elements match across a huge range of organisms. Just as these matches strongly indicate common origins of program source code, the DNA must have common origins. Evolution, in which species spring from other species, is consistent with the observations. Copying is rampant, but at least there aren't any violations of copyright. There can't be; the material is copying itself!

Thursday, August 04, 2011

descriptive or prescriptive Pragmatism

As I understand Pragmatist philosophy (and repeat endlessly), truth is a holistic enterprise which a human performs using the available mental tools: sensation, experimentation, imagination, reason, assumption, intuition, inclination, and so on. If a truth survives this gauntlet, it "works".

The all-too-obvious problem (for philosophers) with this rather realistic definition of truth is that it doesn't take a theoretical "position". Essentially, the complaint is that this Pragmatist account of truth is descriptive, not prescriptive. It seems patently evasive to answer the fundamental question, "Is X true?", with the reply, "X is true when the human evaluating X determines it to be." The questioner can hardly be blamed for making the rejoinder, "By thrusting the evaluation of truth onto individual humans, you can't avoid the conclusion that 'truth' differs depending on who's evaluating. I'm so sorry, but in my estimation Pragmatist 'truth' doesn't work, therefore it's false. In your estimation, it may work and be true, but my results happen to diverge from yours, unfortunately."

Sly remarks aside, acknowledgment of the pivotal role played by the perceiver/interpreter/thinker is both inescapable and valuable. Two uncontroversial corollaries follow. First, there's no excuse for a single human to pretend to have an egocentric total grasp of all truth. Since humans have differing actions and abilities and outlooks, the truth they discover can be different in highly illuminating ways. Pooling the truth among a group is the best strategy. Advanced cooperation and language are two of the strengths that enabled the predominance of the human species. Second, conceding the importance of the subjective contribution leads to a personal "time-based humility". Humans change and adapt. An unrelenting and honest seeker notes that the present collection of truth isn't identical to the collection five years ago. Considered separately, the present self and the prior self are a pair who don't agree about everything. "They" could be equally certain of having superior knowledge of the truth. At any one time, the self could bloviate and/or blog about many things undeniable, and then deny the same at other times. If truth has no element of subjectivity, then how can you say that you know the truth better now than before, or be sure that you will never know the truth better than now? Even declarations of timeless truth happen in time.

No matter the downsides or upsides of the descriptive aspect of the Pragmatist concept, I believe it's prescriptive too. Given that no standalone idea can be inherently true, we are spurred to carry out human verification, i.e. whichever physical or mental actions are sufficient to produce proof. We aren't obligated to categorize an idea as true when it lacks proof convincing to us. Alternative claims on truth aren't satisfactory. Explanations and justifications, of why a statement is concretely true, defeat unfounded assertions.

Furthermore, I believe the Pragmatist "prescription" is to judge the strength of a proof in proportion to the strengths of the proofs on which it depends. As much as possible, proofs and truths shouldn't be treated as independent. Large chunks of evidence are mutual supporters. Data substantiates other data. Chains or networks of proof are the Pragmatist reality. Unconnected pillars or islands of proof are prone to suspicion: "Why is the truth value of this one statement excluded from the standards and substance of the rest?"