Tuesday, December 27, 2011
hail dpigs
"dpigs" is a shell script for apt-based Linux distributions. It produces a list of the software packages which consume the most disk space. I've upgraded my installation in-place, time and time again. Hence my root partition (/) has been running out of room, especially after long uptime. Hacking away at the usual culprits was effective but still insufficient. After removing some of the unneeded yet greedy packages that showed up in dpigs, the problem is gone.
In any case, if you're in a similar situation, my first suggestion is linux-image*. Any desktop kernel version that you haven't booted in the last six months is probably not worth keeping.
Tuesday, November 29, 2011
this is an undefined entry
Standardization is indispensable. Thanks to independent standards, products from competing vendors can work well together. On the other hand, successful standards have appropriate limits. Too much constraint impairs vital product flexibility1. To balance these concerns while remaining unambiguous, a standard might literally delineate topics which it declares "undefined". It identifies them and then ignores them. Hence, a well-known technological gaffe is to assume too much about what a standard leaves undefined.
I appreciate such indistinct confessions. In fact, one of my cynical unfulfilled wishes is to exercise that option in other contexts. For instance:
- I'll finish the project by the agreed deadline. On the condition that everyone else finishes the preceding tasks promptly. Else the finish date is undefined.
- I'll contribute a proportional share to the joint cost of supper. On the condition that I stopped by an ATM recently. Else I should warn you that the amount that I will find in my wallet is undefined.
- I'll advocate the replacement of the software package. On the condition that the result is an unqualified improvement. Else my original level of support for the replacement is undefined.
- I'll follow the specifications. On the condition that the specifications are exact matches for what I was already planning. Else my fidelity to the specifications is undefined.
- I'll write a program that handles the input correctly. On the condition that all the input is correctly entered. Else the program's actions are undefined.
- I'll stay alert at all times in long meetings. On the condition that I slept soundly the previous night. Else my attentiveness is undefined.
- I'll continue adding more examples to the list. On the condition that I don't start to feel bored. Else the number of more examples is undefined.
1The product can exploit the flexibility in myriad ways. It could do nothing or fail. Or it could provide additional benefit. It could violate unstated expectations. Or it could guarantee a result which the standard doesn't. It could optimize for greater efficiency by performing fewer checks. Or it could optimize for greater infallibility by performing innumerable checks.
Tuesday, November 22, 2011
the rational course for tiresome dollar pessimists
I Am Not A Financial Advisor. Nevertheless, here's some advice for the most tiresome dollar pessimists.
I'm not referring to lowly savers who are understandably anxious about the negative real interest rates of their risk-free deposit accounts. I'm referring to the strident soothsayers who remark repeatedly that the dollar is primed to become worthless in the near future, i.e., within the next two years. They know just enough about economics to support the opinions that they hold from the beginning to the end of time, because changing one's thoughts in response to objective data is considered a sign of weakness. They're eager to spread their discoveries of the various factors that are poised to sink the dollar.
If any of them are reading this, please accept the criticism that you're missing a once-in-a-lifetime opportunity to exploit your special knowledge about the dollar's imminent path. First, you need to keep quiet. The more that the rest of the market shares your information, the less advantage you have. They don't know that their dollars are attached to little imaginary fuses that have almost run down. If they did, they would compete directly with your financial strategies; in any case they'd refuse to be the ignorant suckers whom you need to carry out the transactions.
Second, you need to act immediately. Your prediction is time-limited by its very nature. Once the shocking collapse springs into action, no more profit is possible. The longer you delay, the more your unique prescience decreases in value. As with a stock, it's best to jump in before everyone else, at the earliest time, on the "ground floor".
Third, now that you've established the supreme unreliability and undesirability of the US dollar, you should think of a suitable alternative store of value. Since other economic agents will foolishly continue to expect that you honor their dollars and demand that you exchange dollars for their goods/services, the alternative must be relatively easy to convert. You could opt for a number of wild choices, but I suggest acting conservatively in this instance: gold pieces. Find a trustworthy local dealer who's not afraid of high volume and low margins. Don't resort to one of those huge gold piece dealers who run TV commercials; they're far too inconvenient for everyday use.
After buying gold pieces with your volatile dollars, treat the local dealer as your replacement bank. That is, periodically trade some of your gold pieces for intermediate dollars so that you can make your next batch of purchases. And when someone tries to hand you dollars, hurry to your dealer to trade those "hot potatoes" for comforting hard gold pieces. Throughout your dealings, keep in mind that for you, dollars are like credit card balances. You use dollars purely for convenience and never hold the dollars long-term. That way, the horrible inflation rate that's right around the corner won't affect you too deeply. Going back to the stock analogy, you're closer to a day trader than a value investor. True, your earnings will be affected by the unfortunate fluctuations of the dollar, but only over very short time spans. You can also expect to lose some value due to the constant churn of conversion between dollar and gold piece, because the dealer probably expects to be compensated for his or her service and the general overhead of running the business. Think of these transaction costs as a reasonable price to pay in return for your peace of mind.
All this dependency on the gold market may make you queasy. Apart from the projected inflation of those irritating dollars, what if there are significant swings in that individual part of the economy? Those swings could erode the value of your gold pieces. To avoid that risk, you might diversify. In addition to the gold dealer, you might pursue other nonperishable assets for storing value. eBay is packed with smooth open markets for a wide variety of options. As one market goes down, you could buy using that market rather than the rest. As a second goes up, you could sell using that market rather than the rest. Call your diversified collection of markets your "basket of goods". Perhaps, at that point, you could publish your forecast of massive inflation far and wide, and then your audience would scurry to buy from your basket of goods. In this brave new world, where market participants cease to hoard dollars, you will be king.
Or you could avoid both gold pieces and diversified baskets of goods. Consider the Canadian dollar. The Loonie could be quite apropos.
Monday, November 14, 2011
Reverse Causality Illusion
I debated whether to mention the Reverse Causality Illusion in my response to the book Willpower, but it was too tangential to merit space in the text and too obscure to describe in five or fewer sentences in a footnote. It's a plain idea with a short definition and lengthy implications. The Reverse Causality Illusion is a proposition about the future that "retroactively" causes changes in the present. It's much more common than it sounds. It underpins the driving narratives that motivate many human actions that are otherwise inexplicable. In a word, it's destiny1. It's willpower's "precommitment" taken to the nth degree.
The strength of the Reverse Causality Illusion depends directly on how realistically the proposition about the future is treated: adherence to its every facet. From a proponent's standpoint, the proposition is firmly true. Its part of the future is as irrevocable as the past2. Contemplation of alternatives is fruitless. It's so certain that it repels emotional or intellectual fixation on its likelihood. Anxiety is unnecessary. Dread serves no purpose. Evidence that supports other outcomes has to be inconclusive. Decisions can't contradict it. Impulses that disagree are temporary delusions.
Is this a fatalistic outlook? Definitely. Is it restrictive? Sure, as restrictive as tying Odysseus to the ship's mast so he's unable to follow the Sirens. Is it double-think? Yep, since the brain is fighting its thoughts with its thoughts (who gave me the right to order me around?). Is it nonsensical? Perhaps, unless and until someone's actions confirm its accuracy.
Some specimens achieve a still-higher notch of curious self-reinforcement. Whereas lower illusions enforce boundaries on actions and feelings, superior illusions enforce boundaries on doubts. Someone reasons, "The future is fixed but my belief in it is wavering; I should try harder to reassure myself about these facts that are yet-to-come." Essentially the "reality" of a proposition about the future takes precedence over each doubt of it. Thoughts that contradict the proposition are proof that the thinker is mistaken, not proof that the proposition is less likely! Thereby the supposed future can constrict just as completely as the past.
Nevertheless, a judicious Reverse Causality Illusion has its uses, especially for inducing calmness3 or pursuing an appropriate goal. To reiterate, in order for the illusory proposal to push through time in reverse and cause differences immediately, it cannot be weak. A working illusion is vivid and plausible. If it's not feasible or believable, it's too easily interpreted as an ignorable wish or passing whim. Necessary mental revisions or physical adjustments will increase its perceived probability4.
A sturdy illusion isn't enough. Perverse commitment is the second requisite. An internal mind game needs willing players. At all costs, the existence of the illusion must seem independent and indestructible. Regardless of its authorship, it transforms into an objective occurrence that happens to not be tangible. First the brain cultivates the illusion, then the brain obeys the illusion thereafter despite remembering the creative origin. Illogical or not, this strategy is actually consistent with well-known observations. Isolated parts of the brain aren't able to unilaterally sort reality from fantasy or true from false5. Uncritical acceptance is the default effortless response. Sections of the brain can deceive other sections by issuing prophecies.
Of course, the prophecy might be correct. Days ago, I foretold that yesterday afternoon would be the time for tidying the exterior of my house. I dislike miscellaneous humdrum tasks, but what else could I do? I'd been expecting that event. My sense of annoyance about it had preemptively risen and subsided. I'd released my misgivings about it and shifted my angst to uncertain outcomes. Why should I continue to expend my limited animus on an impersonal and unavoidable circumstance?6
1Reverse Causality Illusion is distinct from normal predictions. Predictions are far less pretentious, bossy, and insistent. Predictions are reasonable guesses which are tied to bounded confidence levels.
2Someone who's learned about Minkowski spacetime may ponder the notion that their whole worldline exists. Past, present, and future are merely coordinates. The future is "there" like the room at the end of the hallway is there.
3With the caveat that strong phobias, persistent depression, or pervasive anxiety require more drastic techniques and/or chemicals. Your mileage may vary.
4For example, in my communication courses I found that the cheery proposition "My next speech will go well" only felt tenable after rehearsing the speech and laying out my phrasing and gestures beforehand. Rehearsal may bother procrastinators precisely because it reminds them of the impending unpleasantness, but why not use a "safe" environment to confront and eliminate the worrisome unknowns of how the speech will go? It doesn't prohibit spontaneity at delivery time. To the contrary, close familiarity with the speech's overall delivery allows for better-informed choices of how to tweak its details freely as the situation warrants or as inspiration strikes.
5The same gullibility affects conscious memory. Oft-repeated or pleasing lies tend to be later recalled as truths, including when the hearer is aware of falsity at the original time of hearing. As the popular saying goes, never let the facts ruin a good story (or conventional wisdom, or common sense).
6I'm dramatizing. My hatred of that category of work is real but not deep.
Sunday, November 13, 2011
belated comments on the Tron Legacy iso concept
Far be it from me to pretend that the movie Tron Legacy is realistic or self-consistent. In an otherworldly setting, ridiculous details of many varieties are normal, if not expected. Why must the heroes do this or that? Why do objects have those peculiar shapes and colors? Just because1.
However, the concept of the "iso" is an exception. It's as interesting to me now, when I stream the movie from Netflix, as it was when I first saw the movie in the theater. An iso is an incredibly complex intelligence that evolved out of the movements of information in the Tron "digital world", a highly unusual computer system that apparently is POSIX-compatible. The isoes are "bio-digital jazz, man." Supposedly they can achieve the answers to every problem. No human or program can entirely comprehend them. They're judged too chaotic to be part of an unchanging perfection.
In my interpretation, the isoes are programs with a mind-blowing aptitude for modeling and understanding. A model is a duplication of a thing's information. Useful models enable prediction, analysis, and manipulation. To understand is to employ a working model. Of course, models may be imprecise, nonverbal, incomplete, etc.
We humans constantly brush up against limits in our ability to make models. It could be difficult to invent the right metaphor or to deduce the right mathematical expression. In any case, our brains, and therefore our mental resources for such tasks, are obviously finite. We can work around these hard realities in any number of ways, and we often do.
But imagine that the iso is a program whose computational equivalent is a brain as large as an average adult human body. With that magnitude of potential representations devoted to a model, it could be much more comprehensive and true-to-life. Although there would still be limiting factors (cf. combinatorial explosion), the model's simulations would more likely reflect the original's full subtlety2. It would catch the higher-order, peripheral, and mutually-interacting effects, which are emergent3.
Furthermore, creative modifications of the model would have more room for exploration, so the iso could more easily compute how to respond to the modeled thing in order to achieve a benefit. In this sense, an iso is indeed jazz-like in its methods. It imitates, then it adds to or subtracts from the original by informed trial and error. It doesn't need to resort to the human simplification of forcing something into a preconceived pattern for the sake of comprehension. Instead, it "absorbs" the thing into a mirror model, and travels the depths of that model.
An iso learns to defeat you at chess by discerning the reasoning process which you use for chess. Sooner or later it uses that knowledge to figure out which moves will circumvent the effectiveness of that process. After that point there's no real competition or game remaining; you can no longer "surprise" the iso with your decisions. You're the iso's pawn. What if the iso could similarly anticipate cancer's "moves"?
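The chess scenario is beyond a blog post, but the underlying tactic - build a model of the opponent's decision process, then pick the counter - fits in a few lines. Here's a toy sketch that substitutes rock-paper-scissors for chess and a crude frequency table for an iso's deep understanding; everything in it is invented for illustration.

```python
# Toy "opponent modeling": learn the other player's habits, then exploit them.
# Rock-paper-scissors stands in for chess; a frequency table stands in for a rich model.
from collections import Counter
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

class TinyIso:
    def __init__(self):
        self.history = Counter()

    def choose(self):
        if not self.history:
            return random.choice(list(BEATS))
        # Predict the opponent's most habitual move and play its counter.
        predicted = self.history.most_common(1)[0][0]
        return BEATS[predicted]

    def observe(self, opponent_move):
        self.history[opponent_move] += 1

# A biased opponent who leans on rock soon stops being able to "surprise" the iso.
iso, wins, rounds = TinyIso(), 0, 1000
for _ in range(rounds):
    human = random.choices(list(BEATS), weights=[6, 2, 2])[0]  # favors rock
    if iso.choose() == BEATS[human]:
        wins += 1
    iso.observe(human)
print(f"iso win rate: {wins / rounds:.0%}")
```

Swap in a richer model of the opponent and a deeper search for counters, and the same loop points toward the movie's conceit.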
I'll grant that my interpretation might not be close to what the movie meant, assuming the movie had a specific idea in mind. Once someone presumes that artificial intelligence is possible, it's no great leap to speculate that it could be made superior to human intelligence, at least in restricted domains or according to superficial measurements.
1To take just one example of the missed opportunities embedded in the movie's premises, the conversion of a human to and from digital data is essentially a teleporter. Or perhaps Wonkavision, when it happens over a wireless connection. Completing orders for merchandise "over the Internet" would be literal. Sharing a digitization of a person via BitTorrent would violate laws against human cloning, I suppose.
2Philosophically or theoretically, no one can ever say with absolute certainty that a sufficiently complicated simulation is perfectly accurate to an original. Yet most humans would opine that a simulation "works" (and is true) when it consistently matches all actual/empirical tests within a reasonable error threshold.
3I purposefully compared the iso to a brain rather than a supercomputer. It wouldn't be a massive array of motionless memory acted upon by unvarying procedures. The movie states that an iso is artificial life. The expectation is software layout that's inseparable from hardware layout, innumerable adjustable junctions between parts, and feedback loops galore.
Wednesday, November 09, 2011
Willpower and narratives
I read Willpower by Roy F. Baumeister and John Tierney. According to the described research, energy is integral to self-control. Resistance to the temptation of short-term impulses is "real" work. These systematic study findings agree with anecdotal experience: tired or hungry humans are more likely to act irritable and self-indulgent, even when the fatigue is mental. Thus good sleeping and eating routines are causes or fuel for self-control as well as effects of it.
Philosophically speaking, these overall facts on willpower add to the formidable pile of compelling evidence against the mythical "disembodied mind" (soul). Humans have startling potential for decision-making and self-denial. But barring abnormal genes and/or painstaking training, they have limits1. They can't actually force their bodies to do whatever they wish, no matter how much perseverance stirs within their alleged supernatural "hearts". Most obviously, the underlying physicality, i.e. the brain, will shut down at some point due to simple exhaustion. Though before that happens, involuntary reactions will probably wrest the weary body away from the tyranny of conscious direction2.
Whether or not someone believes in a mystical basis for willpower, the book has practical suggestions. I imagine that my initial retort is like that of other know-it-all readers: "That's it? If lasting behavior modification were that straightforward, why is failure so prevalent?" However, a few of the book's propositions fit snugly into my favorite personal schema for human willpower: narratives. The more effectively someone constructs and follows a narrative of a desired plan of action to reach a goal, the greater the chance of success. Examples:
- The book's preference of specific details over broad intentions is a factor in a narrative's perceived level of "reality". A vague narrative is prone to treatment as a fantasy. Mushy rules or to-dos are difficult narratives to interpret as true - or as false in the case of failure.
- One of the book's foremost points is the need for frequent monitoring and short-term milestones, combined with the willingness to be flexible and forgiving in the immediate-term when inevitable complications occur. The same applies to a narrative, which must inhabit attention for it to be a tangible guide. Narratives must often enter everyday awareness in order to be measurements and signposts for actual deeds.
- Long-term rewards and consequences require extra emphasis in thoughts, because the competing short-term incentives are naturally louder. Good narratives are clear on the desirability of the far-off benefit, so attentiveness to a narrative substitutes a different, imagined item for the present temptation. The narrative projects the decision-maker outside the influence of the current time and place. The taste of future victory compensates for withdrawal now. Beyond abstract reinforcement, this mental operation is a helpful distraction3.
- Orderly environments and personal habits aid willpower. Messes consume mental resources, leaving less room for considering proposed narratives. In contrast, tidiness of oneself is a subconscious increase in the narrative's plausibility. Little triumphs and confidence boosters bring it within reach. "I see that I can exert control. Maybe I can carry out a hard narrative too."
- "Precommitment" could be the willpower technique that aligns closest with narratives. What else is the essence of a narrative, if not a vision of what future-me will do (and be)?
Entrenched narratives are more than ideas, too, because humans act in response5. Unlike other organisms, which are dominated by fairly simple inborn drives and brains, humans incorporate surprising complexity in their decisions. Viewing themselves as part of larger narratives, their acts and roles need not exhibit complete biological/evolutionary reasonableness. The narrative mechanism enables a vertiginous third-person perspective beyond the self: "What I do here today will echo across history and inspire other trite expressions..." Angst-heavy humans are capable of envisioning the implications of a choice on the trajectory of the chooser's life story. They can be embodiments of principles. They can feel the coercion of idolizing an ideal version of themselves6. The grip of relentless narratives yields levels of human willpower which shouldn't be underestimated.
1My father once commented that part of the fun of watching Survivor is to see how long the contestants manage to act normally. As normal as the typical Survivor contestant, anyway. The strain breaks/hardens people in different ways.
2Recall the common remarks, "I don't remember giving in. Eventually my attention wandered for a moment, and it happened automatically." Some commentators have said that humans more accurately have free-won't rather than free-will. Alcohol doesn't introduce strange motives. All it needs to do is suspend rational judgment. Conscious courage is the ability to override impulsive fear. Mere fearlessness could come from deficient perception or comprehension of risks.
3Distraction is underrated. Illusionists and experimenters have demonstrated repeatedly that a sufficient distraction virtually eliminates other stimuli. Stopping to ponder a questionable option is less advantageous than minimizing it by moving on. Simply put, more time spent simultaneously contemplating and yet "fighting" a motive corresponds to more moment-by-moment opportunities to surrender.
4Specifically, the "theory of mind" presupposes that the self's mind is an adequate model for others' minds. Their point of view is obtained by putting the self's mind in their narratives. Carl's eyes are shifty. If I were Carl, shifty eyes would indicate that I was hiding something. So by matching the real narrative of Carl with a hypothetical narrative about myself, I assume that Carl is hiding something.
5And according to what I've previously mentioned about philosophical Pragmatism, entrenched narratives play a still deeper role. The narratives that a human has judged to "work", by whatever standard, are the tools for constructing human truth from confusing/ambiguous raw data. Moreover, numerous confrontations with reality may prompt complicated revisions and additions to patchwork narratives. Otherwise the narratives no longer would "work". ("My paranoia can accommodate the new facts quite neatly...")
6The parenting section of the book doubts the effectiveness of training self-esteem versus training willpower. I don't dispute that. In terms of an aspirational narrative about the self, the distinction is how demanding the narrative is: not only the size of the relative gap between the ideal and the "real" self but also the extremity of the ideal. Humans who think "I've reached my apex just the way I already am" are aiming at a lower target than humans who think "I'm going to try to become elite".
Thursday, November 03, 2011
to be agile is to adapt
Not too long ago, I read Adapt by Tim Harford. It's an engrossing presentation of a profound idea: beyond a particular bound of complexity, logical or top-down analysis and planning is inferior to creative or bottom-up variations and feedback. Adaptation can be indispensable. Often, humans don't know enough for other approaches to really work. They oversimplify, refuse to abandon failing plans, and force the unique aspects of "fluid" situations into obsolete or inapplicable generalizations. They're too eager to disregard the possible impact of "local" conditions. Biological evolution is the prime example of adaptation, but Harford effectively explores adaptation, or non-adaptation, in economies, armies, companies, environmental regulations, research funding, and more. Although the case studies benefit from adept narration, some go on for longer than I prefer.
Software developers have their own example. Adapting is the quintessence of Agile project management1. As explained in the book, adaptive solutions exploit (1) variation, (2) selection, and (3) survivability. Roughly speaking, variation is attempting differing answers, selection is evaluating and ranking the answers, and survivability is preventing wrong answers from inflicting fatal damage.
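As a toy illustration of those three ingredients (my own sketch, nothing from the book), here's a bit of Python hill-climbing: random variation proposes candidates, selection keeps whichever scores best, and survivability caps how far any single failed experiment can stray from the current incumbent. The function being maximized is made up purely for demonstration.

```python
# Toy adaptive search: variation, selection, survivability.
import math
import random

def fitness(x):
    # An arbitrary bumpy function standing in for a messy real-world problem.
    return math.sin(3 * x) + 0.5 * math.cos(7 * x) - 0.1 * x * x

best_x, best_score = 0.0, fitness(0.0)
step_budget = 0.5  # survivability: no single experiment wanders far from what already works

for _ in range(2000):
    candidate = best_x + random.uniform(-step_budget, step_budget)  # variation
    score = fitness(candidate)                                      # selection: evaluate...
    if score > best_score:                                          # ...and keep only improvements
        best_x, best_score = candidate, score

print(f"best x = {best_x:.3f}, fitness = {best_score:.3f}")
```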
Agile projects have variation through refactoring and redesign while iterations proceed. Agile code is rewritten appropriately when the weaknesses of past implementations show up in real usage. Agile developers aren't "wedded" to their initial naive thoughts; they try and try again.
Agile projects have selection through frequent and raw user feedback. Unlike competing methodologies with excessive separation between developers and users, information flows freely. Directly expressed needs drive the direction of the software. The number of irrelevant or confusing features is reduced. Developers don't code whatever they wish or whatever they inaccurately guess about the users.
Agile projects have survivability through small and focused cycles. The software can't result in massive failure or waste because the cost and risk are broken up into manageable sections. Agile coaches repeat a refrain that resembles the book's statements: your analysis and design is probably at least a little bit wrong, so it's better to find out sooner and recover than to compound those inevitable flaws.
1Of course, the priority of people over process is also quintessential.
Wednesday, October 26, 2011
shared links are different than RSS
The point of RSS is automation. RSS is an established standard1 so that no human needs to keep revisiting a long list of vetted topical sources that regularly publish worthwhile content. With RSS, a computer can rapidly and flawlessly determine whether another computer offers new content. If available, the content itself can be automatically retrieved, too.
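Here's a minimal polling sketch in Python using the third-party feedparser library; the feed URLs are placeholders, and a real reader would persist the set of seen items between runs.

```python
# Minimal RSS/Atom polling: a program, not a human, checks each source for new items.
# Requires the third-party "feedparser" package; the URLs below are placeholders.
import feedparser

FEEDS = [
    "https://example.com/feed.rss",
    "https://example.org/atom.xml",
]

seen = set()  # a real reader would persist this between runs

def poll():
    for url in FEEDS:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            key = entry.get("id") or entry.get("link")
            if key and key not in seen:
                seen.add(key)
                print(entry.get("title", "(untitled)"), "->", entry.get("link"))

poll()
```

The entire human role is reduced to curating the FEEDS list.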
Human link sharing is different. When humans share links, those links are likely intermixed with irrelevant items like unsolicited opinions. The links may be about a wide range of uninteresting topics. Although link sharing distributes the work of revisiting valuable websites and flagging fresh information, the flaws of a manual procedure continue to apply. How reliable is the crowd at performing this task? As a unit, they're more reliable than one person, but I daresay that a program is better still.
RSS is for mechanizing the syndication of "publishing sources". I'd say that humans who share links are more accurately classified as additional publishing sources than as "John Henry competitors" to efficient RSS software. I'll concede that link sharing can be a more convenient answer than RSS to the question, "How do I obtain a simple list of interesting links to visit?" But RSS is superior if the question is, "How do I, a unique individual, follow the ongoing activity of every source in a list that I control?"
Shared links won't replace RSS for my needs. To suggest otherwise is to misunderstand the purpose of RSS. When a website shuts down its feeds due to "lack of interest", I'll gladly turn my attention elsewhere.
1Ha! RSS is a standard in the sense that you pick whichever one you like.
Tuesday, October 04, 2011
model dependent realism
I read The Grand Design. I'm long acquainted with much of the history and physics therein, albeit at a conceptual not mathematical level. However, I was fascinated by the description of the entire universe as a Feynman path. I can't make any knowledgeable comments on that or the M-theory stuff, of course. I couldn't help wondering if the sections on renormalization and "negative energy" would've been easier to understand with the careful and hand-held inclusion of some undergraduate-level math. That's a hard balance to strike, though. Maybe I'll try some cross-referencing with the tome that's "heavy" in several senses of the word, The Road To Reality by Penrose. I doubt the two books share the same general opinions.
Since I'm monotonous, I'm obligated to compare the book's "model dependent realism" with my interpretation of philosophical Pragmatism. I noticed many similarities. In model dependent realism, humans perceive reality through the lens of a model. In Pragmatism, humans perceive reality through the lens of subjective elements like desire, focus, analysis, synthesis, theory-building, etc. In model dependent realism, humans select models for the sake of "convenience". In Pragmatism, the convenience of thoughts about reality is explicitly tied to how well the thoughts "work" for purposes. In model dependent realism, humans replace models as they compare the accuracy by experiment. In Pragmatism, humans adjust their knowledge of truth as they actively determine which individual truths are confirmed "in practice". Most infamously, in model dependent realism, an ultimate universal model of reality might simply be impossible, except as a quilted combination of an array of limited models. Just as infamously, in Pragmatism, truth isn't a standalone all-encompassing entity, except as an evolving collection of ideas whose two coauthors are the human and their whole environment.
Wednesday, September 28, 2011
evolution and DRY
Reality may be unintuitive. One instance among many is the evolutionary march that yields complexity from chaos, harmony from cacophony, solution from error. At the timescale of a human life, and applied to everyday objects, such a progression is nonsense. In information-theoretic thinking, the introduction of greater randomness to existent information only degrades it with greater uncertainty or "noise", necessitating a communication code that can compensate by adding greater message redundancy. In thermodynamic thinking, the much larger number of disordered states overwhelms the small number of ordered states. Two gases in one box will mix simply because that's far more probable on average than all the undirected gas particles staying apart in the original groupings. In parental thinking, miscellaneous household items won't land in specific designated positions if children drop the items at randomly-chosen times and locations (a four-dimensional vector governed by a stochastic variable).
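The gas example is just counting. As a back-of-the-envelope sketch: if each of N particles independently ends up in either half of the box, the chance that every one of them happens to stay in the original half collapses exponentially.

```python
# The thermodynamic point as arithmetic: ordered arrangements are vanishingly rare.
# Probability that all N independent particles remain in the original half of the box.
for n in (10, 50, 100):
    print(f"N = {n:3d}: probability all stay on the original side = {0.5 ** n:.3g}")
```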
Clearly, accurate metaphors for biological evolution are lacking. Humans are justifiably amazed by the notion of entire populations of intricate organisms changing over many millions of years. And the changes are related to numerous shifts in myriad factors, including habitat and competition. It's problematic to transplant prior assumptions into that expansively complicated picture.
But a metaphor from software development might be helpful for contemplating the stunning results achieved by the unpremeditated mechanisms of evolution. No matter the particular need served by the software, a solemn rule of development is "don't repeat yourself", which is abbreviated to "DRY". The intent of the rule certainly isn't the naive claim that no repetition ever happens. To the contrary, the rule is about handling repetition correctly. Following DRY is typing a "chunk" of software just once and then somehow rerunning that solitary chunk on different data whenever necessary. The alternative to DRY is duplication. Duplication is often cheaper at the time of initial development, although it's costlier thereafter: two or more chunks are naturally more laborious to maintain than one.
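A contrived sketch of the two styles:

```python
# Duplication: the same chunk typed twice, so every future change must be made twice.
def report_total(orders):
    total = 0
    for order in orders:
        total += order["price"] * order["quantity"]
    return total

def invoice_total(orders):
    total = 0
    for order in orders:
        total += order["price"] * order["quantity"]
    return total

# DRY: the chunk is typed once and rerun on different data whenever necessary.
def order_total(orders):
    return sum(order["price"] * order["quantity"] for order in orders)

report_orders = [{"price": 9.99, "quantity": 3}]
invoice_orders = [{"price": 120.00, "quantity": 1}, {"price": 4.50, "quantity": 2}]
print(order_total(report_orders), order_total(invoice_orders))
```

When the pricing rule changes, the DRY version changes in exactly one place.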
Besides the long-term savings in maintenance work, aggressive DRY has a second effect. The software is divided into chunks. These divisions and subdivisions are easier to read, understand, and analyze. Organization and interconnections take the place of a flat sequence. Appearances suggest conscientious craftsmanship, independent of any knowledge of the software's developers.
Hence, a DRY-compliant outcome has a tendency to look artificially arranged. Evolution could fall in that category. Obviously, unlike in software development, DRY can't be a conscious guideline here. Instead, inherent constraints "encourage" DRY to occur. Since normally DNA is strongly resistant to massive change, perhaps outright duplication of a gene across separate strand locations is improbable. Reuse of the original gene in a new context accomplishes an identical adjustment in the organism. The DRY-like modifications of the DNA trigger matching DRY-like modifications to the creature. It appears to be the product of a history that's less chaotic than it really is. Thus phenomena that human brains find comprehensible or beautiful, like symmetry or hierarchical containment, arise from frugal genes. So do displeasing phenomena, like unsightly remnants of a contemporary body part's surprising past. DRY pushes for tweaking existing genes and transforming an existing appendage of little value, rather than partially duplicating existing genes and adding more appendages.
Insistent objectors could aver that the DRY metaphor glosses over the conceptual chasm between thoughtful human software developers and thoughtless genetic mutations and transcriptions. The computer programming languages of typical software are specifically designed to enable smaller chunks. Languages incorporate syntax and semantics. How can that be comparable to the simple streams of codons in DNA? Reuse can't happen without a structure to support it. Pronouns are literally meaningless without basic grammar.
Odd as it seems, the genetic code indeed has a grammar. The fundamental building blocks of grammar are symbols that modify the interpretation of other symbols. Here, "interpretation" is the translation of nucleic acids into working cell proteins. Over time, discoveries have shown how subtle the translation can be. It's potentially affected by a host of other activity. Genes definitely can adjust the expression of other genes, which is why geneticists hesitate to assign premature importance to single genes. Some of the "words" in this haphazard language are likely grammatical in their impact on protein production, akin to "and", "or", "not". Some might serve both independent and regulatory functions. Incomplete human understanding doesn't cast doubt on the existence or the capacity of evolution's code. It could very well be able to encode the reuse of sections in accordance with DRY-like conservatism. Just as replacing one word in a sentence might have drastic overall implications, replacing a minimal quantity of genes might have drastic overall consequences that give off the impression of evolution acting smart or informed or experienced.
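To make the "grammar" notion concrete, here's a deliberately cartoonish Python sketch in which some genes merely gate the expression of others, like "and"/"or"/"not". The gene names and rules are invented for illustration; real regulatory networks are vastly messier.

```python
# Cartoon gene-regulatory "grammar": some genes only modify how other genes are expressed.
# Every gene name and rule here is invented purely to illustrate the and/or/not idea.
def expressed(genes):
    # A structural gene yields its product only when its regulators permit it.
    return {
        "pigment": genes["pigment"] and genes["activator"] and not genes["repressor"],
        "enzyme": genes["enzyme"] and (genes["activator"] or genes["heat_shock"]),
    }

genome = {"pigment": True, "enzyme": True, "activator": True,
          "repressor": False, "heat_shock": False}

print(expressed(genome))                         # both products made
print(expressed({**genome, "repressor": True}))  # flipping one regulatory "word" silences pigment
```

Small edits to the regulatory words, large shifts in the resulting organism - which is the kind of frugal, DRY-like economy described above.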
Clearly, accurate metaphors for biological evolution are lacking. Humans are justifiably amazed by the notion of entire populations of intricate organisms changing for many millions of years. And the changes are related to numerous shifts in myriad factors, including habitat and competition. It's problematic to transplant prior assumptions into that expansively complicated picture.
But a metaphor from software development might be helpful for contemplating the stunning results achieved by the unpremeditated mechanisms of evolution. No matter the particular need served by the software, a solemn rule of development is "don't repeat yourself", abbreviated to "DRY". The intent of the rule certainly isn't the naive claim that no repetition ever happens. To the contrary, the rule is about handling repetition correctly. Following DRY means typing a "chunk" of software just once and then somehow rerunning that solitary chunk on different data whenever necessary. The alternative to DRY is duplication. Duplication is often cheaper at the time of initial development, although it's costlier thereafter: two or more chunks are naturally more laborious to maintain than one.
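As a minimal sketch in Python (the quantities, prices, and function name are invented for illustration, not taken from any real project), the duplicated version types the same discount arithmetic twice, while the DRY version types the chunk once and reruns it on different data.
    # Duplication: the same discount arithmetic typed out twice.
    book_total = 3 * 12.50 - (3 * 12.50) * 0.10
    pen_total = 10 * 1.25 - (10 * 1.25) * 0.10

    # DRY: one solitary chunk, rerun on different data whenever necessary.
    def discounted_total(quantity, unit_price, discount=0.10):
        subtotal = quantity * unit_price
        return subtotal - subtotal * discount

    book_total = discounted_total(3, 12.50)
    pen_total = discounted_total(10, 1.25)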
Besides the long-term savings in maintenance work, aggressive DRY has a second effect. The software is divided into chunks. These divisions and subdivisions are easier to read, understand, and analyze. Organization and interconnections take the place of a flat sequence. Appearances suggest conscientious craftsmanship, independent of any knowledge of the software's developers.
Hence, a DRY-compliant outcome has a tendency to look artificially arranged. Evolution could fall in that category. Obviously, unlike in software development, DRY can't be a conscious guideline. Instead, inherent constraints "encourage" DRY to occur. Since DNA is normally strongly resistant to massive change, outright duplication of a gene across separate strand locations is perhaps improbable. Reusing the original gene in a new context accomplishes the same adjustment in the organism that a duplicate would. The DRY-like modifications of the DNA trigger matching DRY-like modifications to the creature. The creature appears to be the product of a history that's less chaotic than it really is. Thus phenomena that human brains find comprehensible or beautiful, like symmetry or hierarchical containment, arise from frugal genes. So do displeasing phenomena, like unsightly remnants of a contemporary body part's surprising past. DRY pushes for tweaking existing genes and transforming an existing appendage of little value, rather than partially duplicating existing genes and adding more appendages.
Insistent objectors could aver that the DRY metaphor ignores the commonsense conceptual chasm between thoughtful human software developers and thoughtless genetic mutations and transcriptions. The computer programming languages of typical software are specifically designed to enable smaller chunks. Languages incorporate syntax and semantics. How can that be comparable to the simple streams of codons in DNA? Reuse can't happen without a structure to support it. Pronouns are literally meaningless without basic grammar.
Odd as it seems, the genetic code indeed has a grammar. The fundamental building blocks of grammar are symbols that modify the interpretation of other symbols. Here, "interpretation" is the translation of nucleic acids into working cell proteins. Over time, discoveries have shown how subtle the translation can be. It's potentially affected by a host of other activity. Genes definitely can adjust the expression of other genes, which is why geneticists hesitate to assign premature importance to single genes. Some of the "words" in this haphazard language are likely grammatical in their impact on protein production, akin to "and", "or", "not". Some might serve both independent and regulatory functions. Incomplete human understanding doesn't cast doubt on the existence or the capacity of evolution's code. It could very well be able to encode the reuse of sections in accordance with DRY-like conservatism. Just as replacing one word in a sentence might have drastic overall implications, replacing a minimal quantity of genes might have drastic overall consequences that give off the impression of evolution acting smart or informed or experienced.
Sunday, September 25, 2011
MythTV archiving nowadays
I haven't mentioned MythTV in a long, long time, largely because I stopped messing around with either the hardware or software in that machine. Some months ago I attempted to upgrade the software but it kept failing before completion. Fortunately, the backup/restore feature allowed me to recover fairly easily each time. Between that difficulty and the extent to which the rest of my devices have since left the machine's hardware in the dust, I'd need to restart the entire project to do it properly. I'm not eager to expend the time, money, or energy for that (plus, the MythTV competitors are so much better now than back then...).
Regardless, it keeps working year after year, so I keep using it. On the rare occasions when I overrode the auto-delete of old recordings, my customary procedure for "archiving" was to run nuvexport to convert the source MPEG-2 file, captured/encoded by the PVR-350, into an XviD AVI. The result sometimes contained bursts of unfortunate artifacts, like small to medium "blocking", yet I thought this was a reasonable compromise for the sharply reduced storage requirements. Watching too closely yielded the impression of a video being portrayed by many tiny crawling ants. But quick and well-organized ants, I must admit, especially when seen from twice the usual viewing distance.
Recently, as I kicked off a nuvexport job and felt annoyed once again by the estimated time, I finally recognized the antiquated absurdity. The MythTV machine was put together using rather cheap parts that were current a few generations previously. My main Ubuntu desktop is a more modern computer with the corresponding increases in capability and performance. Moreover, my experiences with streaming services like Netflix or Amazon have reminded me of the advances in video compression. Time to rethink.
Instead, I've switched to transferring the source MPEG-2 from MythTV using the MythWeb interface's Direct Download, so the archiving work can exploit the newer hardware and software of the desktop. I run the h264enc script without many custom answers. The H.264 MP4 files look pretty good at around the same bitrate. And probably due to both the higher clock rate and additional specialized CPU instructions, the process really doesn't take that long to run: the stated output fps rate is faster than playback. This is despite the "nice" priority, which keeps the job from interfering with other tasks.
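For what it's worth, the invocation is nothing elaborate; a rough sketch (the niceness value here is arbitrary, and h264enc asks its own questions interactively once started):
    nice -n 19 h264enc    # low-priority run of the interactive encoding script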
Of course, one "pre-processing" step remains in MythTV; I continue to employ the extremely easy interactive "Edit Recordings" feature (I've never trusted the automatic commercial detection). With the rapid "loss-less" option, the chosen edits produce just a shorter MPEG-2 file, ready for further manipulation elsewhere.
NOTE (Oct 1): Another side effect of switching to a video compression format that's conquered most of the world is that the Roku 2 supports it. But given that I have a working MythTV installation, this hardly matters...
Sunday, September 11, 2011
peeve no. 265 is users blaming the computer
No, user of the line-of-business program, the computer isn't the trouble-maker. It could be from time to time, if its parts are old or poorly-treated, but problems at that level tend to be much more noticeable than what you're describing. Generally, computers don't make occasional mistakes at random times. Despite what you may think, computers are dogged rather than smart. Computers do as instructed, and by "instructed" I mean nothing more than configuring the electricity to move through integrated circuits in a particular way. Computers can't reject or misunderstand instructions. No "inner presence" exists that could possibly do so.
I understand that placing blame on "the computer" can be a useful metaphor for our communication. But the distinction I'm drawing this time is substantive. To identify the precise cause of the issue that you've reported, a more complete picture is necessary. Your stated complaints about the computer's misdeeds really are complaints about something else. The reason to assign blame properly isn't to offer apologies or excuses. Figuring out the blame is the first step in correcting the issue and also in preventing similar issues.
- Possibility one is a faulty discussion of the needed behavior for the program, way back before any computer played a role. Maybe the right set of people weren't consulted. Maybe the right people were involved, but they forgot to mention many important details. Maybe the analyst missed asking the relevant questions. Now, since the program was built with this blind spot, the issue that you reported is the eventual result.
- Possibility two is a faulty translation of the needed behavior into ideas for the program. Maybe the analyst assumed too much instead of asking enough questions. Maybe the analyst underestimated the wide scope of one or more factors. Maybe the analyst was too reluctant to abandon an initial idea and overextended it. Maybe the analyst neglected to consider rare events that are not so rare.
- Possibility three is faulty writing of the program itself. Maybe the coders overestimated their understanding of their tools and their work. Maybe the coders had comprehensive knowledge and didn't correctly or fully express what they intended. Maybe a fix had unfortunate side effects. Maybe the tests weren't adequate.
- Possibility four is faulty data. Like blaming the computer, blaming the data is a symptom. Maybe something automated quit abruptly. Maybe manual entry was sloppy. Maybe the data is accurate and nevertheless unexpected. Maybe someone tried to force shortcuts. Maybe management is neither training nor enforcing quality control.
- Possibility five is faulty usability, which faulty data might accompany. "Usable" programs ease information processing from the standpoint of the user. Maybe the program isn't clear about what the user can do next. Maybe unknown terminology is everywhere. Maybe needless repetition encourages boredom and mistakes. Maybe, in the worst cases, staff decide to replace or supplement the program with pen marks on papers or fragile spreadsheets containing baroque formulae. Shortfalls in usability may disconnect excellent users from excellent programs.
- Possibility six is the dreaded faulty organization, in which various units disagree or the decision-makers are ignorant. Maybe definitions are interpreted differently. Maybe the "innovators" are trying to push changes informally. Maybe the boundaries of each unit's authority are murky and negotiable at best. Maybe units are intentionally pulling in opposite directions. Regardless, the program probably will fail to reconcile the inherent contradictions across the organization.
Thursday, September 08, 2011
software developers like punctuation
Comparison of equivalent snippets in various programming languages leads to a stable conclusion about what developers like: punctuation. Namespaces/packages, object hierarchies, composing reusable pieces into the desired aggregate, and so on are relegated to the despicable category of "ceremony". Better to use built-in punctuation syntax than to type letter sequences that signify items in the standard libraries. Developers don't hate objects. They hate typing names. Hand them a method and they'll moan. Hand them a new operator that accomplishes the same purpose and they'll grin. What's the most noticeable difference between Groovy and Java syntax, in many cases? Punctuation as shortcuts. Why do some of them have trouble following Lisp-y programming languages? Containment in place of punctuation.
Oddly enough, some of the same developers also immediately switch opinions when they encounter operators overloaded with new meanings by code. Those punctuation marks are confusing, unlike the good punctuation marks that are built in to the language. Exceedingly common behavior can remain unambiguous when its method calls are replaced by punctuation, but the behavior of user modules is comparatively rare, so long names are less ambiguous than overloaded operators. "By this module's definition of plus '+', the first argument is modified? Aaaaaghhhh! I wish the module writer had just used a call named 'append' instead!"
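As an illustration (a hypothetical Python class, not anything from a real library), here's the kind of overloaded '+' that triggers the complaint: it quietly mutates its left operand, a surprise that a plainly named method would have telegraphed.
    class Playlist:
        def __init__(self, tracks):
            self.tracks = list(tracks)

        # Overloaded '+' that quietly modifies the left operand.
        def __add__(self, other):
            self.tracks.extend(other.tracks)
            return self

    a = Playlist(["intro"])
    b = Playlist(["outro"])
    c = a + b
    print(a.tracks)  # ['intro', 'outro'] -- 'a' was changed by a mere '+'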
Wednesday, September 07, 2011
deadlocks in the economy
The affliction of a specialist is the compulsion to recast every discipline in terms of that specialty. As someone in a software job, I see the economy exhibiting mishaps of coordination. Programs executing simultaneously, or one program whose several parts execute at once, might interfere and cause confusion. In an economy, legions of economic agents engage in transactions. Given the difference in scale, coordination issues probably are more, not less, applicable to the economy than to a computer. Some of the names, like producer-consumer, even invite the comparison.
A fundamental topic in program coordination is the "deadlock". Put simply, a deadlock occurs whenever all collaborators end up waiting on counterparts to act. Say that there are two programs, each of which needs exclusive access to a pair of files to do some work (e.g. the second file might be a "summary" which needs to be updated to stay consistent with the first file after it changes). 1) The first program opens the first file. 2) The second program sees that the first file is already opened, so it naively opens the second file before the first program can. Voila! The first program waits for the second program to finish up and relinquish the second file, while the second program waits for the first program to finish up and relinquish the first file. Everything is "locked" without any way to proceed.
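A minimal sketch in Python, with locks standing in for exclusive file access (the names are invented), shows how easily the two-program scenario above can wedge itself:
    import threading

    first_file = threading.Lock()    # stands in for the first file
    summary_file = threading.Lock()  # stands in for the "summary" file

    def first_program():
        with first_file:             # 1) opens the first file
            with summary_file:       # ...then waits for the summary file
                pass                 # update both files

    def second_program():
        with summary_file:           # 2) grabs the summary file instead
            with first_file:         # ...then waits for the first file
                pass                 # update both files

    # Run together, each thread can end up holding one lock while waiting
    # forever for the other -- everything is "locked" with no way to proceed.
    threading.Thread(target=first_program).start()
    threading.Thread(target=second_program).start()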
Back to economics. An economy is a massive set of roughly circular flows. Buyers send money (or liquid credit) to a seller, and the seller sends the desired item back. The seller then (possibly) reuses the money as a buyer, and the buyer then (possibly) reuses the item as a seller. If the buyer obtains money in the labor market, i.e. working a job to earn wages, then that's another flow which connects up to this one. These flows continually recirculate during normal functioning.
However, clearly a stoppage (or slippage) in one flow will also affect other flows. This is the economic form of a deadlock: economic agents that halt and in so doing motivate additional agents to halt. Until flows restart, or a newly created substitute flow starts, nobody progresses. No money or items are moving, so each is facing a shortage. Moreover, without the assurance of complementary flows in action, it's in an agent's selfish interest to wait rather than take risks. Therefore everyone waits for everyone to take the first step. Sounds like a deadlock condition to me.
Examined from a high level, deadlocks are clearer to spot. For instance, if the interest paid on a loan for a house is linked to a shifting rate and the rate and the interest both increase, then there could be a lack of funding to cover it. Unpaid interest implies a reduced flow in the money earned by the loan, as well as a corresponding reduced value for the loan itself (a loan without paid interest isn't worth much!). The current and projected reduction in the flow of interest disrupts "downstream" flows that otherwise would've relied on that interest. So the owner of the loan must reallocate money. That reallocated money isn't available for other lending flows. The intended recipients of the other lending flows are left unable to follow their own economic plans. And so forth. Eventually, the original cutoff may come full circle; due to the propagation of effects, larger numbers of loan-payers don't have the flow to pay their interest. The payers can't fulfill the interest payments when lenders have ceased usual risk-taking, and the lenders continue to cease usual risk-taking when payers can't fulfill interest payments. Money is in deadlock. Thus the economy's assorted flows of items (including jobs), which require the central flow of money (or liquid credit) to temporarily store and exchange value within transactions, are in deadlock too. The trillion-dollar question is which technique is most beneficial to dislodge specific deadlocks of money or to cajole activity in general.
Unlike software, humans are improvisational. Confronted with deadlock, they don't wait forever for the deadlock to break. Instead, they adjust, although it could be uncomfortable. Economic flows that were formerly wide rivers might become brooks. Flows that started out as trickles might become streams. Over a long time period, deadlocks in an evolving economy are temporary. Circumstances change, forcing humans and their trading to change.
Tuesday, September 06, 2011
local minima and maxima
I'm sure it counts as trite to mention that humans aren't great at coping with complexity. (Computers can cope, but only if the complexity doesn't require adaptability or comprehension.) One example is the oversimplified dichotomy between systems: 1) few pieces with highly organized interconnections and controlled variances among the pieces, 2) numerous similar pieces with little oversight that nevertheless mostly act the same and have few decisions to make. An engine is in #1. An ant colony is in #2. A projectile is in #1. A contained cloud of gas particles is in #2. In #1 systems, analysis is rather easy because all the pieces are ordered to accomplish parts or stages of a defined objective. In #2 systems, analysis is rather easy because the actions of all the pieces are generalizable into "overall/average forces". In #1 systems, statistics consist of a series of well-determined numbers. In #2 systems, variances and aggregates are tameable by modeling the population distribution.
The problem is that as useful as these two categories are, reality can often be more complicated. Loosely-connected systems could consist of many unlike pieces. Or the pieces could be alike yet affected in a nonuniform manner by an external disturbance. Or each piece could individually respond to five factors in its immediate neighbor pieces. Or pieces might have ephemeral subgroups which act in ways that loners don't. The possibilities are abundant and stymie attempts to classify the system as #1 or #2.
Consider a minimum or maximum quantity that represents a system. In a #1, that quantity is calculable directly by finding the corresponding quantities for each piece. In a #2, that quantity is an equilibrium that all the pieces yield through collective activity. Either way, the system has one minimum or maximum, and it's reached predictably.
However, this conclusion breaks down when a system is of a "more complicated" kind. Those systems contain pieces that, taken one at a time, are easily understood, but whose final effect is difficult to fathom. As a representation of that system, the minima and maxima could be messy. For instance, one constraint might dominate at the lower end of a range while a second constraint dominates at the higher end. Under such circumstances, the system has more than one minimum or maximum. To the extent that the description works, the "forces" of the system then push toward the local minimum or maximum, whichever is closest.
From the viewpoint of an uninformed observer trying to cram a complex system into #1 or #2, the apparent failure to reach the absolute furthest (global) maximum could be mystifying. If it's caught in the grip of a local maximum, then the failure is more intelligible. The system "rejects" small changes that result in an immediate "worse" outcome regardless of whether or not it's on the path to an ultimately "better" outcome. In short, a wildly intricate system occasionally gets stuck in a pothole of inferiority. And for that system, that state is as natural as anything else.
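A toy sketch of that behavior (the landscape function and numbers are invented): greedy hill-climbing from the wrong starting point settles on the lower peak, because every individual step toward the higher peak looks immediately worse.
    # Invented landscape: a local peak near x=2 (height 4) and a higher peak near x=8 (height 9).
    def landscape(x):
        return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

    def hill_climb(x, step=0.1):
        # Keep taking whichever neighboring step immediately improves the outcome.
        while landscape(x + step) > landscape(x) or landscape(x - step) > landscape(x):
            x = x + step if landscape(x + step) > landscape(x) else x - step
        return x

    print(hill_climb(0.0))  # stops near 2, the local maximum; never reaches 8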
Hence knowledge of local minima and maxima provides greater nuance to human interpretation. Reasoning about the national economy is a ripe area. The temptation is to reduce discussion into the relative merits of a #1 system, in which the economy is like a tightly-directed machine operated by government, compared to a #2 system, in which the economy is like a spontaneous clump of microscopic participants. This is a discussion about nonexistent options. The economy isn't solely a dangerous beast that needs strict supervision. It isn't solely a genie that showers gifts on anyone who sets it free. It's beyond these metaphors altogether.
An economy that has run aground on local minima or maxima can't be adjusted successfully by treating it as a #1 or a #2 system. "Freeing" it won't erase the slope toward the local minimum. "Ordering" it to stop misbehaving also won't. The economy doesn't always accomplish every desired purpose. On the other hand, government can't completely override the economic transactions of the entire populace (nor should it try). What government can do, potentially, is help nudge the economy out of a local minimum by bending the system. Of course, the attendant risk is that excessive bending by the government might set up a new local minimum in the economic system...
Friday, September 02, 2011
git-svn on Windows and Robocopy
So...git clones of git-svn repositories aren't recommended. (Neither are fetches between git-svn clones. All collaboration should happen through the Subversion server.) Clones don't include the metadata that links the git and Subversion histories. However, unless commits in local branches are backed up elsewhere, work could be lost when catastrophe strikes the lone copy of those commits.
As decentralized version control, git's information is self-contained in the ".git" subdirectory of the repository. Thus creating a backup is straightforward: duplicate that subdirectory. But the common copy commands are wasteful and unintelligent. Must one overwrite everything every time? What about data that's no longer necessary and should be deleted in the backup location as well?
Fortunately, a ready-made command has long been available in Windows: Robocopy. In this case, it's executed with the /MIR switch. Between git's filesystem-based design (i.e. no database dependencies or complicated binary encoding) and Robocopy's smarts, incremental changes to the git-svn repository usually result in minimal work performed during subsequent calls to Robocopy.
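As a concrete sketch with invented paths (the /MIR switch is the point; it makes the destination an exact mirror, deletions included):
    robocopy C:\work\myproject\.git D:\backups\myproject-git /MIR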
A developer could also mirror the entire contents of the working directory, but the pattern of making many small local commits in the course of a workday means that at any time there are few uncommitted changes in the working directory. Before the commits are pushed to Subversion, an interactive rebase ensures that the "noise" of the numerous commits won't be preserved in the permanent version control record of the project.
Thursday, September 01, 2011
rule of 8...
...If someone discusses the budget of the federal government of the U.S.A. with fewer than 8 independent numbers or factors or categories, then chances are that the discussion is oversimplified. Figures have been skipped to present a "condensed" viewpoint that's more consistent with a partisan narrative. The 8 or more numbers are "independent" in a statistical (information-theoretical) sense: none of the numbers is a complete mathematical function of the others, such as the third always equaling the difference of the first and second.
Wednesday, August 31, 2011
the absent mind
Recently I watched Memento again. I've never owned it but I can watch it via Netflix. No later viewing of it compares to the first, because the surprise is gone. Nevertheless, the progression of scenes in reverse chronological order is as appropriately disorienting as ever.
This time, though, I reacted in a new way. Like an untrained reader who self-diagnoses every disease in the book, I identified with the main character. In my case, the similarity has nothing to do with the inability to form new memories. My memory isn't problematic; based on academic accomplishment, it's above average. No, the similarity is the need to leave reminders for myself. As I watched the movie, I distinctly recall thinking, "Doesn't everyone do that, but on a smaller scale?"
Age can't be the cause. I've been "forgetful" in this way since childhood. The usual driver seems to be having an "absent mind". Some people mock the truism, "Wherever you go, there you are." Those must be the people for whom it isn't a challenge. My thoughts are noisy. Emptying my mind is difficult because sometimes it churns and overflows without prodding. It's as if I'm walking through dreamlike surroundings that are far more engrossing than my real environment. External occurrences can't be guaranteed to leave a lot of impact, during a deep swim in the brainwaves. Of all activities done to excess, theorizing has to be one of the most unobtrusive to onlookers. On occasion, when the train of thought jumps tracks, I'm shocked by the time that's passed. The questions "Where am I?" and "What is the time and date?" are generally unimportant. Depending on which ideas are consuming you, the location of your body and the continual ticking of seconds are totally irrelevant.
Calling it attention deficit disorder would be inaccurate. It's closer to having too much focus or fixation than having too little. And it's also not autism. I tend to have trouble interacting naturally in casual conversation, yet the obstacle is distraction rather than disinterest (I make eye contact although some have commented that I appear to be looking "through" the speaker). My sense is that the low severity of absent mind doesn't qualify at all as a disability. It's a disposition, a personality quirk. In my childhood, it manifested as toothbrushes left behind after a sleepover or gloves left behind after winter travel from one building's interior into another. The tips for coping aren't too complicated:
- If something needs to be remembered, but at some point after the immediate moment, make a reminder soon. Otherwise there's the possibility that an intriguing stream of thought will resurface before too long and flood everything else. Don't presume to remember to make a reminder...
- Reminders don't need to be lengthy, detailed, or time-consuming. Unlike Memento's main character, the hang-up isn't that the memory is completely missing but the high probability that the intact memory won't return to mind at the proper time. All that's necessary is an unambiguous trigger. For instance, the infamous trick of tying a string around a finger is too uninformative, while a note consisting of one significant word could work. One mustn't then misplace or ignore the note, of course. It should be located within normal view.
- Memento emphasizes routine and the ease of learning repeated subconscious reactions. That fits with my experience. Habits demand less concentration, which is a perpetually scarce resource. Breaking habit is healthy and invigorating, but special effort is called for. To succeed, novel actions require greater levels of deliberation and care. More present, less absent.
- As in all things, simple strategies shouldn't be overlooked. Startling sensations sharply disrupt introspection. Increased alertness to the world is an automatic side effect of fresh perceptions. Simply keeping your eyes moving might be adequate. Administering pinches is a second tactic, as long as it isn't overdone.
- Reminders work well for single tasks. Habits work well for routine or periodic tasks. A large list of irregular tasks, perhaps to be completed in the upcoming week, is better attacked by outlining a comprehensive schedule. Without explicit scheduling, there's considerable risk that awareness of the tasks will weaken, only to finally revitalize at an inopportune time. Or that one prominent task will monopolize available energy and mask the rest. Either way, an all-encompassing arrangement of what to do and when is, strangely enough, less of a chore than trying to repeatedly rein in a stampeding brain and urge it toward item by item. I compose the whole schedule, as a draft in progress. Then I think of it whenever I'm pondering or visualizing the time periods in the schedule. Writing or typing it is purely optional (memory isn't the problem, engulfment by my own reflections is the problem). As for an event more than two weeks away, those of us with absent minds act like everyone else. We enter it on calendars.
- As illustrated by Memento, anticipation is a highly sophisticated approach. Self-knowledge enables self-manipulation. Small adjustments, like temporarily moving objects to atypical positions, can prompt corresponding adjustments in behavior. "Why is that there?...oh, to ensure I don't leave it behind." Unsubtle clues either illuminate the desired path or erect a barrier for the inadvisable path. I in the future am likely to be all right with or without the hints, but the prevention would help in the eventuality of unforeseen internal distractions (e.g. trying to recollect the name of the hidden Elvish kingdom in The Silmarillion, which was one of the last to fall to Morgoth).
- Selected undertakings force full engagement, regardless of contemplation's pull. With practice, it's easier to adapt. For example, I've gradually developed my skills at driving and athletics. At work, I'm more conscientious about not drifting off during meetings, and I ensure that all my assignments are written down.
I also recognize that a sound albeit absent mind comes in varying degrees. I've only forgotten my keys when leaving the house once. A while back someone notified me that I was wearing two unmatched shoes. I have a desk drawer that contains a stack of papers with the cryptic markings that result from pursuing a particular flash of inspiration over a few days. It could be worse. Apparently, others have been known to go out in public in partial undress, and their mysterious notes fill filing cabinets...
Tuesday, August 30, 2011
sure, I'll use your free storage
I just noticed that my years-old, unused Hotmail account includes a "Sky Drive" as a side benefit. 25GB. The per-file limit of 50MB is adequate for most personal data files. Uh, thanks, I guess.
Saturday, August 27, 2011
truth contingent on its effects
It is difficult to get a man to understand something when his salary depends upon his not understanding it. -Upton Sinclair
Depressing though it may be, the above quote is consistent with many people's experiences in attempting to coax someone to a different opinion or just to convince someone to admit unlikable facts. If humans reached truth solely by accumulating and cross-checking knowledge obtained from trustworthy sources, as some philosophies presume, then the truth of a statement couldn't be so contingent on the effects of believing in the statement. Philosophical Pragmatism is singularly unsurprised, however. Since judgment is an ingredient in perception, biased judgment clouds perception. Seeing things "as they are", in practice means seeing things "as I see them". An isolated thing has no meaning. Brains compute meaning by laying isolated things side by side.
In the Pragmatist model, humans observe, reach conclusions, form plans based on those conclusions to reach goals, and then execute the plans. But this indivisible process can surely operate backward as well. Meaning, I don't want to act, therefore I doubt the conclusions that would force me to act, therefore I doubt the observations that would force those conclusions! The longer that a human clings to a "truth", the longer that a human selectively collects evidence in favor of it, and the stronger it becomes, by the human's design.
I'll end by noting a corollary of Sinclair's quote. It's difficult to get a politician to understand something when their election depends on their not understanding it. Basic accounting, biology, and climatology are notable examples.
Friday, August 26, 2011
I wish
Part of acting as a mature adult in a complicated human society is to express statements that are half-true at most. Sincerity isn't a prerequisite. To avoid offense and thereby smooth social interaction, signaling an effort to be considerate is more important than unconditional agreement with the other's ideas on a divisive topic.
Nevertheless, I bristle at one kind of diplomacy that irreligious believers offer: "I wish." The words vary, but the constant is an empathetic disconnect in which emotional preferences are for religious rather than natural notions. "I wish your god existed." "I wish human souls lived forever." "I wish the universe were simple."
My irritation comes from a personal distaste for these sweet wishes. Yes, it may be true that I wish either that reality were more congenial to me or that I could more easily manipulate it to be so. The first is utopia and the second is magic. Of course these wishes are pleasing to compute; that's the whole rationale for a wish. How reassuring is it for the irreligious to admit the obvious, that they too would like human existence to not be so difficult?
A further problem is that courageous advocacy for fictional paradises will fall apart under follow-up questions. What if the hypothetical god acted like _____? What if living forever included harsh judgment for _____? What if a simple universe implied the impossibility of ______? Fully outlined proposals of alternatives might not be as enticing as the one-paragraph summary...
Tuesday, August 23, 2011
git's index is more than a scratchpad for new commits
Someone relatively inexperienced with git could develop a mistaken impression about the index. After referring to the isolated commands on a quick-reference guide or on a "phrasebook" that shows git equivalents to other VCS commands, the learner might, with good reason, start to consider the index as a scratchpad for the next commit. The most common tasks are consistent with that concept.
However, this impression is limiting. More accurately viewed, the index is git's entire "current" view of the filesystem. Commits are just saved git views of the filesystem. Files that the user has added, removed, modified, renamed, etc. aren't included in git's view of the filesystem until the user says so, with "git add" for example. Except before the very first commit, the index is unlikely ever to be empty. It isn't truly a scratchpad, then. When checking out a commit, git changes its current view of the filesystem to match that commit; therefore it changes the index. Through checkouts, history can be used to populate git's view of the filesystem. Through adds, the actual filesystem can be used to populate git's view of the filesystem. Through commits, git's view of the filesystem can be stored for future reference as a descendant of the HEAD.
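A tiny hedged illustration of those three directions of flow, using placeholder branch and file names:
    git checkout some-branch       # history -> git's view (the index) and the working files
    echo "note to self" >> todo.txt
    git add todo.txt               # actual filesystem -> git's view (the index)
    git commit -m "add a note"     # git's view (the index) -> history, as a child of HEAD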
Without this understanding, usage of "git reset" is infamous for causing confusion. With it, the confusion is lessened. A reset command that changes the index, which happens by default or with option --hard, is like a checkout in that it changes git's view to the passed commit. (Of course the reset also moves the branch ref and HEAD, i.e. the future parent of the next commit.) A reset command that doesn't change the index, which happens with option --soft, keeps git's "view" the same as if it remained at the old commit. A user who wanted to collapse all of the changes on a branch into a single commit could check out that branch, git reset --soft to the branch ancestor, and then commit. Depending on the desired effect, merge --squash or rebase --interactive might be more appropriate, though.
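A hedged sketch of that collapse, with placeholder names for the branch and its ancestor:
    git checkout feature-branch        # the branch whose changes will be collapsed
    git reset --soft branch-ancestor   # move the branch ref back; the index still matches the old tip
    git commit -m "all of the branch's changes as one commit"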
Post-Script: Since this is aimed at git newcomers, I should mention that before trying to be too fancy with resets, become close friends with "reflog" and "stash".
Post-Script The Second: Drat. The Pro Git blog addressed the same general topic, but based more directly around "reset". And with attractive pictures. And a great reference table at the bottom.
Tuesday, August 16, 2011
why it isn't done yet
Modifying decrepit C code ain't like dusting crops, boy! Without precise calculations we could fly right through a star or bounce too close to a supernova, and that'd end your trip real quick, wouldn't it.
Saturday, August 13, 2011
omniduction break down II
To lampoon the use of omniduction...
Inviolable Proposition 1: The Constitution is a sublime and superb document, and if interpreted in a particular way (not necessarily the same way that judges have), it would cure society's ills.
Inviolable Proposition 2: Political compromise is a despicable, cowardly act that yields terrible results. Staring contests are better.
Historical Fact 1: The Constitution is packed with political compromises.
SYSTEM ERROR!
the dash hole principle
The cigarette lighter receptacle has an amusing name. In my automobile and many others that I've seen, the present form isn't actually usable for lighting cigarettes. Now it's a round hole in the dashboard with a cover that's labeled as a power outlet. Over time, cigarette lighter receptacles turned into dash holes. The users of an object emphasized the secondary applications of it until the object itself dropped its primary application. It changed meaning through inventive usage.
Software users can be expected to act the same. Software developers should accept that the users, acting like humans, will adapt by introducing their own concepts and assumptions to a "finished" project. As DDD advises, the key is their language. When they speak about the software, and therefore the underlying design or data model, their words throw attention onto their interpretation of the "problem domain". They might describe data groups/categories and store their evolving understanding with rigid entries, like attaching "special" semantics to product identifiers that start with "Q". They might take several hours to run a series of automated preexisting reports, stuff the conglomerated figures into a spreadsheet, and then generate a chart - additional work which could all be accomplished by a computer in a tenth of the time.
The point is, software in the hands (and brains) of users can easily become a dash hole: an original design that came to be viewed much differently in practice. Software that doesn't meet the needs of users will be manually bypassed as time goes on. In some cases, this may be a good approach. Some changes in usage just don't justify substantial software modifications. However, to state the obvious, not everyone is a good software analyst. Ad hoc solutions, enforced not by the software but by scores of unwritten rules, are prone to causing data duplication due to a lack of normalization, chaos due to employee turnover or mere human frailty, and tediousness due to not thinking thoroughly about the whole process.
Dash holes function as adequate power outlets. But imagine if irritating dash holes could've been replaced with something designed to serve that purpose.
Thursday, August 11, 2011
irreligiosity does not imply apathy
A typical response to anything said or done by the irreligious is, "Why? If by your own admission religious ideas are irrelevant to you, then what are you accomplishing?" Let me list a few...
- Public expression and inclusion of irreligious beliefs deserve as much protection and accommodation as religious beliefs. Irreligiosity is of course really a broad classification, so the specific form and content of it varies greatly. Just as there cannot be a single universally recognized symbol for religiosity, there cannot be one for irreligiosity. I like a stylized atomic diagram as a symbol to indicate philosophical materialism ("disciples of Democritus!"), but presumably others of less strict views might prefer alternative irreligious symbols. (A critic of irreligiosity would probably make the facetious suggestion of a "No symbol".) In any case, public exposure of irreligious beliefs does matter. Acknowledgment and tolerance of presentations of a belief reinforce the freedom for it to exist in a society. Disqualifying the mere exposure of a belief has a corresponding intimidation factor against potential believers. Removing a belief from discussion also gives the false impression that it doesn't exist or barely exists. Religious believers may expect to receive divine rewards for their publicity efforts, but secondarily they feel mundane emotional satisfaction simply for displaying the "truth" to society. Irreligious believers have purely the latter as motivation; nevertheless, their self-esteem too is boosted by feelings of belief "validation" regardless of how perfunctory it may be.
- Irreligious encouragement of anti-conversions may prompt the taunt, "What are you trying to do, save the lost from going to heaven?" The taunt is well-aimed in that the self-consistently irreligious definitely can't be motivated by impossible consequences in a non-existent afterlife. And it's also true that they can't be logically motivated by spiteful envy of a religious individual's bus ticket to heaven. Still, they may wish sincerely and unselfishly for anti-conversions in order that more humans spend smaller proportions of their limited lifespans in accordance with fallacies. Unlike the concept of humans who continue on in some shape for all time, humans composed of atoms have finite time which shouldn't be wasted or predicated on false hopes. Pascal's Wager contains the idea that a religious life ultimately disproved is no loss relative to an irreligious life ultimately proved. Wrong. Religiosity exacts non-zero tangible and intangible costs, which vary widely by belief system, and a singular life implies that the decision to pay those costs can never be recouped. Anti-conversions enable someone to expend the rest of their time as they choose. No guarantees apply to the amount of time left, either.
- Some may opine that irreligious notions are too ill-defined, or too defined by negatives, to have effects like religious notions do: "Acting on behalf of 'no religion' makes no sense; there's nothing of substance to debate or achieve!" In some situations, that could be correct. Apart from the definition "uncommitted to a religion", general irreligiosity doesn't have unifying creeds and ideals. It does have a unifying cause, though: blunting or reversing the detrimental outcomes of religiosity on society. I don't intend to claim that religiosity is uniformly awful. I mean that the irreligious are likely to try to counteract destructive repercussions that come to their attention. Assuming that religious notions provide plans of action, irreligious notions could possibly provide corresponding plans of opposition. For instance, attempting to insinuate dogma will provoke strong irreligious reaction. Attempting to relieve poverty won't. (To the contrary, the irreligious who care just as much might successfully ally against this common enemy.)
Wednesday, August 10, 2011
loopier and loopier
Aware that truth doesn't arrive gift-wrapped on the doorstep, a pragmatic thinker should be willing to consider ideas from any source. In particular, sources of false ideas can be more useful than anticipated. The source could have true ideas intermixed, even if only occasionally. False ideas themselves could be stepping-stones or flawed clues to the real truth. Still more subtly, ideas could possibly be literally false and yet "true" through hidden correspondences to nuggets of truth.
For instance, the concept of the human mind, of a nebulous decision-maker that mysteriously controls the brain/body (not vice-versa), is a handy fiction. The self's "mind" provides the setting for an otherwise gaping hole in the categorization of human experience: the locale of subjective mental phenomena. These are actually real in the sense of being real effects of the real activity of the self's real brain. But from the self's perspective, which naturally doesn't include moment-by-moment observation of the originating brain, orienting ethereal thoughts in a dual mind domain is a tidy solution; when one "sees" things without eyes or "hears" sounds without ears, some explanation is warranted, no matter how far-fetched!
At a more advanced level of abstraction, a deepest ghostly "soul", which is distinct from the mind, scrutinizes and directs the gamut of human existence. The mind contemplates and manipulates objective items, but the theorized soul in turn contemplates and manipulates the mind. It thinks about thoughts. Events in the mind, as well as objective reality, are the raw material for the soul. Just as the idea of a mind is constructed to answer the fundamental question, "What and where are subjective mental phenomena?", the idea of the soul is constructed to answer the fundamental question, "What observes subjective mental phenomena?"
Besides having explanatory value, souls are instrumental in behavior modification. Specifically, all the many procedures for "transformation" recommend the triumph of the soul over the mind, although "soul" may not be explicit in the procedures' language. The sign is the treatment of the mind as an object rather than the subject: "calm your mind", "analyze your motivations", "recognize your negatively-charged aura". According to their accounts, transformed humans purposely change their minds. Afterward they cease their past actions and habits, because their new minds are incompatible. They will say that the old inclinations return occasionally, but each one is rejected and ignored not long after inception.
Such tales pose no problem for someone who believes minds and souls to be factual (my position less than six years ago). This isn't an option within the context of the belief that minds and souls are convenient human inventions for meeting some needs. Regardless of the deceptiveness of the impressions described in the stories, something must be behind the honest storytellers' impressions. Since everything occurs in the confines of one brain, the clear indication is that the illusion of hypothetical entity Q observing hypothetical entity M arises out of an internal brain loop. That is, brain network Q interacts with brain network M. Activation of network M somewhat indirectly activates network Q.
Therefore, in effect human culture's encouragement of the dominance of the "soul" is encouragement of more sophisticated brain loops. Deliberate introspection ("meditation") is a loop in which thoughts appear, enter short-term memory, and then undergo processing. It requires training precisely due to its essential parallelism. Paying too much attention to momentary thoughts interrupts the arrival of new thoughts. Paying too little attention to momentary thoughts fails to accomplish the overall goal of "detachment", of treating thoughts subsequently as independent mental objects. Unsurprisingly, practitioners who are working to set up the brain loop benefit from periodically returning their attention to their breathing. Breathing is a sensation whose inherent rhythm aids in coordinating the brain network that acts as the "observer soul" and the brain networks that act as the "observed mind". (Reminiscent of the synchronizing "clock ticks" of a computer processor.)
Similar loops constitute the enhanced impulse control of the "transformed" human. Someone whose consciousness frequently returns to a moral standard (and/or an omniscient judge) will incidentally discover whatever shortcomings were in progress immediately before the return. Or, after spending extended time associating unpleasant reactions with "evil" impulses, a conditioned response of hesitation starts to accompany the impulses in the future. Alternatively an association could be built up between a strong desire for a positive aim and impulses consistent with that aim. At the ultimate extent, loops could be so ingrained, sensitive, and efficient that some parts of the brain virtually prevent other parts from intruding into decisions under normal circumstances. In any case, the human brain is amazing in its loopy ability to employ its incredible capacity to react to nerve signals that originate from "inside" as well as "outside".
Friday, August 05, 2011
source code comments and evolution
In judging whether the source code of a computer program has been copied, similarity in the two programs' effects isn't a conclusive exhibit. As long as the programs were both written to accomplish similar purposes or solve similar problems, it's reasonable for the two to behave similarly. The base strategies used by the programs could very well turn out to be not that different at all. When a particular mathematical formula yields the right quantity, other formulas would probably need to be alike in order to also yield the right quantity. (This is why some argue that patents on software, as opposed to copyright of the source code, are questionable and/or unenforceable.)
Although it's not hard to believe that two programs could be created without copying and yet have a resemblance, it stretches credibility to allege that the two could independently end up with the same source code. Human-friendly source code allows for a variety of choices that fall under the amorphous category of "style". Names, extra spacing, units of organization, and so forth distinguish the source code and therefore authorship of two independently-written programs. Still more idiosyncratic are the comments in the source code, which are completely extraneous lines of text that explain the code to readers - including the writer, who will undoubtedly need to return to the code later. It's improbable that two independent programs would share identical details, especially when those details serve no function in the operation of the actual program! One program writer could have used the name "item_count" and the other "number_of_items". One could have included the source code comment, "prevent division by zero", and the other, "avoid a zero denominator". The great freedom that's possible for nonfunctional features of the source code also makes coincidences too unlikely to consider. If these features match, the confident conclusion is that the source code was copied.
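For illustration only (the function and every name in it are invented), here are two versions of the same trivial calculation. They behave identically, yet the nonfunctional details (names, comments, layout) diverge in exactly the ways described above:

    class AverageExamples {
        // Writer one
        // prevent division by zero
        static double average(double total, int item_count) {
            if (item_count == 0) {
                return 0.0;
            }
            return total / item_count;
        }

        // Writer two
        // avoid a zero denominator
        static double mean(double sum, int number_of_items) {
            return number_of_items == 0 ? 0.0 : sum / number_of_items;
        }
    }

Matching on any one of these incidental choices could be coincidence; matching on all of them, repeatedly, across a whole program, is the tell.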
DNA is the metaphorical source code of organisms. And like its metaphor, DNA has well-known nonfunctional ("noncoding", perhaps "junk") elements, too. Observations of such elements match across a huge range of organisms. Just as these matches strongly indicate common origins of program source code, the DNA must have common origins. Evolution, in which species spring from other species, is consistent with the observations. Copying is rampant, but at least there aren't any violations of copyright. There can't be; the material is copying itself!
Thursday, August 04, 2011
descriptive or prescriptive Pragmatism
As I understand Pragmatist philosophy (and repeat endlessly), truth is a holistic enterprise which a human performs using the available mental tools: sensation, experimentation, imagination, reason, assumption, intuition, inclination, and so on. If a truth survives this gauntlet, it "works".
The all-too-obvious problem (for philosophers) of this rather realistic definition of truth is that it doesn't take a theoretical "position". Essentially, the complaint is that this Pragmatist account of truth is descriptive, not prescriptive. It seems patently evasive to answer the fundamental question, "Is X true?", with the reply, "X is true when the human evaluating X determines it." The questioner can hardly be blamed for making the rejoinder, "By thrusting the evaluation of truth onto individual humans, you can't avoid the conclusion that 'truth' differs depending on who's evaluating. I'm so sorry, but in my estimation Pragmatist 'truth' doesn't work, therefore it's false. In your estimation, it may work and be true, but my results happen to diverge from yours, unfortunately."
Sly remarks aside, acknowledgment of the pivotal role played by the perceiver/interpreter/thinker is both inescapable and valuable. Two uncontroversial corollaries follow. First, there's no excuse for a single human to pretend to have an egocentric total grasp of all truth. Since humans have differing actions and abilities and outlooks, the truth they discover can be different in highly illuminating ways. Pooling the truth among a group is the best strategy. Advanced cooperation and language are two of the strengths that enabled the predominance of the human species. Second, conceding the importance of the subjective contribution leads to a personal "time-based humility". Humans change and adapt. An unrelenting and honest seeker notes that the present collection of truth isn't identical to the collection five years ago. Considered separately, the present self and the prior self are a pair who don't agree about everything. "They" could be equally certain of having superior knowledge of the truth. At any one time, the self could bloviate and/or blog about many things undeniable, and then deny the same at other times. If truth has no element of subjectivity, then how can you say that you know the truth better now than before, or be sure that you will never know the truth better than now? Even declarations of timeless truth happen in time.
No matter the downsides or upsides of the descriptive aspect of the Pragmatist concept, I believe it's prescriptive too. Given that no standalone idea can be inherently true, we are spurred to carry out human verification, i.e. whichever physical or mental actions are sufficient to produce proof. We aren't obligated to categorize an idea as true when it lacks proof convincing to us. Alternative claims on truth aren't satisfactory. Explanations and justifications, of why a statement is concretely true, defeat unfounded assertions.
Furthermore, I believe the Pragmatist "prescription" is to judge the strength of a proof in proportion to the strengths of the proofs on which it depends. As much as possible, proofs and truths shouldn't be treated as independent. Large chunks of evidence are mutual supporters. Data substantiates other data. Chains or networks of proof are the Pragmatist reality. Unconnected pillars or islands of proof are prone to suspicion; "Why is the truth value of this one statement excluded from the standards and substance of the rest?"
Sunday, July 31, 2011
a la carte TV has arrived
...but probably not in the form or at the price that consumers wish. A recurring complaint has always been, "Why must I pay for 'packages' full of channels that I don't want? I'd rather pay for only the stuff I want."
With iTunes or Amazon, that's possible. Customers truly pay for only stuff of their choosing: no subscription to a package, channel, or even a program. Rather, they buy episode by episode, like buying a periodical off the rack every week or month. This degree of granularity and inconvenience is the opposite extreme of a channel package. It might be just right for some, yet others likely would prefer a moderate option; they'd be willing to pay more in return for a larger conglomerate of programs. Already, specialized "channels" both produce and aggregate programs according to specific interests of the audience. Is there a reason why they couldn't fill similar roles in the domain of digital on-demand entertainment?
Some object to the episodic prices of two dollars or more for fresh episodes in high definition. Of course, the bytes that constitute the episodes are undeniably cheap to transmit/copy, so the "per-unit production" cost is far less than the retail price. But the per-unit production cost is always a baseline and never the entirety of the cost to produce and sell. Producers and sellers must cover miscellaneous other costs, e.g. fixed bills like the company payroll or investments like R&D. (Although some products sell at a "loss" in order to stimulate future purchases.) The final price is what the "market will bear"; sellers and buyers settle on an amount acceptable to both. It seems to me that the price level could have a few reasons:
- The most obvious is that the price should be greater than zero to compensate everyone involved in the creation and distribution of the episode. If professionals are to create episodes, then they can expect to trade their work for money. If professionals are to create and manage technology to store and send the bytes, then they can expect to trade their work and fixed-costs for money. Since advertisers aren't contributing (as they typically do during "free" Web streaming), the revenue must come directly from the audience. As with any product, the gross revenue is in effect spread throughout the full "supply chain".
- Prices for fresh, well-encoded digital musical works, the experience of which lasts between three and six minutes, range from $0.69 to $1.25. At its cheapest this is roughly $0.115 per minute of enjoyment ($0.69 divided by six minutes). At that rate, a 40-minute TV episode would be about $4.60 (see the rough calculation sketched after this list). This calculation ignores the fact that the TV episode is both aural and visual, and therefore each minute is really superior to the song, whether in terms of raw quantity of data or in terms of the subjective impression.
- The episode is bought, not rented. Presumably the price to see an episode only during a limited time frame would be cheaper, which is the rule for the movie "rentals" on these stores. What's being sold is the permission to stream (or download to a scrambled file) the episode for all time; in any case, certainly more than one viewing. While this is different than the traditional meaning of "ownership" (what happens when the seller/provider goes out of business?), it's not too far off. Generally speaking, the usual market alternative that's closer to "true" ownership, a DVD or BD stored on a home shelf, is relatively more cumbersome and expensive (per episode) and also not as timely.
- Massive bundling in TV achieves large-scale economic efficiencies that wouldn't be possible if everything were unbundled. Hence the actual price per channel or program or episode will tend to be correspondingly greater. The supply and demand price curves for each item will vary. Items with greater production costs will cost more. Highly-desired items could cost more (because buyers are willing to pay more). Market competition will be more important than ever, assuming programs can act as substitutes for one another.
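Here's the per-minute comparison from the second bullet spelled out as a tiny program; the prices and running times are the assumptions stated above, not quotes from any particular store:

    public class EpisodePriceSketch {
        public static void main(String[] args) {
            double cheapestSongPrice = 0.69;  // assumed cheapest single-track price
            double longestSongMinutes = 6.0;  // assumed longest track length

            double perMinute = cheapestSongPrice / longestSongMinutes;  // ~$0.115
            double episodeMinutes = 40.0;
            double impliedEpisodePrice = perMinute * episodeMinutes;    // ~$4.60

            System.out.printf("Per-minute rate: $%.3f%n", perMinute);
            System.out.printf("Implied price of a 40-minute episode: $%.2f%n",
                    impliedEpisodePrice);
        }
    }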
Saturday, July 30, 2011
omniduction formality
Formally, what I called "omniduction" seems to be universal instantiation running amok. The root cause is hastily asserting the universal quantifier over too large a set, thereby mistakenly asserting that one or more instances are inside the set (and applicable to universal instantiation). Cramming the world into too limited a logical domain.
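In rough quantifier notation (my own paraphrase, not a textbook treatment), the pattern looks like this:

    \[
      \underbrace{\forall x \in S_{\text{vast}},\; P(x)}_{\text{asserted, never established}}
      \qquad a \in S_{\text{vast}}
      \qquad \therefore\; P(a)
    \]
    % The instantiation step itself is formally valid; the damage is done
    % earlier, when P is hastily asserted to hold over far too large a set,
    % so that any convenient instance "must" satisfy it.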
omniduction breakdown
Apologies for flashing some political stripes, but here goes... Recently I read an angry online comment about the "threat of raising taxes by gradually reducing the mortgage interest deduction". 1) On one hand, the comment's writer believes in limited government that doesn't intervene in private markets. 2) On the other, the comment's writer believes that all rises in the effective tax rate are by definition terrible. So removing a tax cut, which is evil according to #2, would reduce government intervention in the housing market, which is good according to #1.
The attempt to derive an opinion by omniduction has produced a "SYSTEM ERROR". Perhaps policies in the real world have trade-offs and case-by-case factors to consider, and individual policies can't be attacked in isolation from the rest?
Friday, July 29, 2011
peeve no. 264 is omniduction
Two thinking processes generally receive attention: deduction and induction. Logical consequences and generalizations. Debaters construct their arguments from these basic blocks. Participants who rely on other tactics are subject to being systematically dismantled by opponents. (There's also abduction, but I'm ignoring it here.)
Au contraire! This is an incomplete account of actual reasoning in human society. Besides deduction and induction, I propose a third process: "omniduction". Omniduction is the creation of many small details from out of unassailable preconceptions. Its rallying cry is, "I don't need the data! I know what the data is already because I know that ___(preconception)___ could never ever be false!" The miracle of omniduction is akin to spinning gold from straw. Reality can be so simple just by producing facts through grandiose assumptions, not vice-versa.
Unfortunately, omniduction tends to be problematic when practitioners try to converse. Unless everyone happens to be applying omniduction identically, the produced universes could be in conflict. When facts follow directly from overall assumptions, discussion of evidence is futile.
- "My theory is correct. Your information must be wrong."
- "The complex specifics of this situation must be irrelevant. My flawless system of beliefs doesn't require such minutiae to render a verdict."
- "Truths cannot be complicated. If you'd only consider the issue from my perspective, you'd realize that a small set of self-contained opinions can explain anything."
Wednesday, July 27, 2011
drinking game for Choices of One by Timothy Zahn
Take a swig whenever you read:
- a form of the verb "grimace"
- "said grimly"
- "mentally _____"
- a form of the verb "growl"
- a form of the verb "twitch"
- the reply "Point", but that's actually very rare
- Mara Jade deflecting a blaster shot back at the shooter
- Luke thinking about the fact that he isn't a "real" Jedi or that he's not very good at doing _____
- Han thinking about the fact that Leia's interested
- "Firekiln"
- "Threepio" or "Artoo"...psych! Those two are missing!
Sunday, July 24, 2011
placing the Roku 2 upgrade expense in perspective
I belong to the group of Roku customers who bought a Roku device in 2011 before the arrival of the Roku 2. According to this statement, the better Netflix streams cannot be made available to "older" Roku, despite those Roku having 1080p/surround sound hardware support. Therefore we get to buy Roku 2 if we feel like viewing Netflix with the full capabilities of our entertainment devices. Since I detest negativity, I'm choosing to consider this situation through Happy Goggles, which to my knowledge do more than nothing...
- The Roku 2 is a better appliance. I'm someone who connects it by HDMI and wireless, so no problems there for me. The new gaming features have no appeal. Technology changes; I for one never expected Roku's product line to stay the same while its competitors exploited advances in hardware, or my Roku to have sufficient horsepower to handle every upcoming enhancement to the Roku "channels".
- Older Roku units will continue to work normally. In fact, future free software updates will ensure that they keep improving. If an older Roku was worth the cost before, then surely that remains the case. There's no loss. Nothing is being discontinued or crippled retroactively.
- Prices for Roku 2 are hardly huge burdens. For HD video with comparable diversity and DVR-like controls to pause/rewind/skip, the dominant companies charge more per month than buying two Roku units in a single year works out to: $80 + $80, divided by 12 months, is $13.33. Add in the $7.99 monthly cost of a Netflix subscription and it's $21.32. I'll grant that a complete comparison is more complicated than this and not really "fair" due to differences in offered content, e.g. ESPN 8 "The Ocho" and the side-boob pay channels. Internet cost affects the overall calculation for a household budget.
- My less-than-$400 netbook has HDMI-out. Yes, it sends audio along with the video. I merely mention this to remind everyone again that, technologically speaking, it's not too difficult to get Internet content or indeed any computerized content into a typical entertainment setup. (That doesn't change my opinion that, for several reasons, the Roku is worth buying.)
Friday, July 22, 2011
a story about a dynamic programming language
Every once in a while, I like to pretend that I can reload past mindsets like Smalltalk images. Today's story is about a dynamic programming language.
It loaded libraries at run-time. It sat at a level of abstraction from the actual computing hardware: there were no pointers. Memory management happened during execution in an automatic mechanism. Characters weren't single bytes. Checks of all kinds took place as code ran, affording a minimum of "protection". For smaller programs, start-up time itself was a performance constraint. Method dispatch wasn't fully resolved until the time of the call. Commentators often complained about the costs of all this dynamism.
Naturally, I'm describing Java as it was viewed a handful of years ago. This is why it's so amusing to hear that Java has somehow turned into the static language standard-bearer. At least where I was working, the original competitor to Java Servlets in the Web domain was Perl CGI, not C or C++. It was a battle between two dynamic options, in which greater simplicity of string manipulation and memory management trumped other concerns. Java was the company-pushed "compromise" solution that had better threading, Unicode, fairly easy C-like syntax, and so on, yet without some of the traditional downsides of the static languages. In retrospect, other languages could have filled that niche quite well (especially with a tweak or two), but lacked comparable levels of publicity, support, education, and engineering. Regardless of accuracy, the negative perceptions of Java's potential opposition were sufficient to leave an opening. As much as academic and research programmers would prefer it to be the case, programming languages aren't chosen based solely on the sophistication and self-consistency of syntax and semantics.
I'm not seriously proposing that the possibilities for dynamism in Java are comparable to the languages usually labeled as "dynamic". I'm only reiterating the truism that all the aspects of a language and/or its execution "platform", including and beyond the type system of its variables, are on a continuum between static and dynamic.
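A small sketch of the dynamism described above, using only standard reflection: the class is located and loaded by name at run time (the string could just as well come from a configuration file), and the method is resolved at the moment of the call rather than at compile time.

    import java.lang.reflect.Method;

    public class DynamicJavaSketch {
        public static void main(String[] args) throws Exception {
            // Load a class by name at run time.
            Class<?> clazz = Class.forName("java.util.ArrayList");

            // Instantiate it without the compiler knowing the concrete type.
            Object list = clazz.getDeclaredConstructor().newInstance();

            // Look up and dispatch a method at call time.
            Method add = clazz.getMethod("add", Object.class);
            add.invoke(list, "resolved at run time");

            System.out.println(list);  // prints: [resolved at run time]
        }
    }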