Friday, December 29, 2006

the ultimate end of personal growth

Serious disclaimer: As could be inferred from the title, the pretentiousness, preachiness, and all-around self-importance of this entry are far above average. Its content is grandiose. Its topic is a timeless one, as old as philosophy itself, so the mere fact that it attempts an answer is a possible sign of megalomania. Entire books have been written on the same or similar ideas, so this post will definitely be on the long side. Some may feel it to be too smug, although that is not the intent at all. Reader discretion is advised.

A conjecture that has jiggled in my noggin for a while, only recently coming into greater clarity with the reflections and resolutions of a new year, is that the ideal (or best) mind (or mental state) is a real, independent, objective concept, one which I will try to sketch out here. While none of the following will be truly original, this particular synthesis may be instructive.

What I mean by "ideal" is, believe it or not, basically pragmatic: that this mind functions well in a variety of situations. It considers its environment and goals, formulates decisions and plans, and puts into practice what it has decided. It is principled and rational, but also flexible. Simply put, this mind is ideal because it governs behavior as it should. To use more specific terms, this mind is likely to achieve success and (lasting) happiness or at the very least not be a hindrance.

The ideal mind exhibits these qualities:
  • Rationality. The ideal mind relates well to information. It is capable of recognizing the source, authority, and applicability of the information it considers. It realizes how powerful the right information can be, so it tries to maximize this power. For instance, it should be eager and able to learn, toss aside harmful preconceptions, and adopt new paradigms if necessary. Generally speaking, the too-closed mind cannot exploit information as much as the open mind, but the too-open mind fails to filter/distinguish bad information.
  • Self-Control. The ideal mind relates well to its impulses and values. At this point some might start saying "wait, you're talking about a soul attribute". To which my reply is "If you're able to precisely formulate distinct compartments of consciousness and diagram the interactions of those compartments, have at it. For my purpose here, I'm considering the ideal mind as a whole." Being rational and having self-control are separate qualities. An addict, almost by definition, may be rational, knowing his behavior is more self-destructive than not, yet a lack of self-control prevents this knowledge from having its expected effect. Self-control is much harder to achieve than rationality, though both are lifelong journeys. To refer back to my post on Freakonomics, someone with self-control can take the (economically-enumerated) incentives in front of him or her and "rig" them with different relative weights. The ideal mind probably doesn't practice what is traditionally known as asceticism; rather, the ideal mind chooses which desires to obey and which to ignore. The dog wags the tail, not vice versa. One way to ignore a desire is to trump it with another desire. I have heard of a smoker who didn't quit until he saw his young boy pretending to smoke one of his crayons. Various media try to paint unbridled desire as highly dramatic and noble; don't be fooled, they're only trying to incite pathos, not give you life lessons. The ideal mind can and should have plenty of fun. The point of self-control is that the ideal mind can avoid the fun that has devastating side effects.
  • Empathy. The ideal mind relates well to its environment. One informal way to sum up this quality is "giving a damn". Some other names might be compassion, love (which may be the most overused word ever), caring, tolerance. The crucial tipping point that marks this quality is the mind identifying with entities outside itself, hence the primary name "empathy". The ideal mind can appreciate goodness or Quality wherever it is, and strive to spread it through actions which vary wildly depending on context. However, since the ideal mind has needs and goals of its own, not least of all survival, it must maintain yet another balance in myriad situations. In the extreme case, the ideal mind's own existence is part of the equation, and the result may be reasoned self-sacrifice. If someone is never subjected to this kind of decision, he or she is lucky.
  • Timeliness. The ideal mind relates well to time. This quality is the hardest of the four to affix a one-word label to. The abstract nature of time is not helpful, either. Anyhow, timeliness is the quality of applying mental energy primarily to the present, as opposed to expending it on regrets or unimplemented dreams for the future. Some might say that it consists of "just living in" each moment. The timeliness of the ideal mind does not imply that the past is denied or the future ignored; real timeliness recognizes the connectedness of time, and in fact takes this connectedness quite seriously. Without future goals, present action is aimless. Without taking the past into account, no learning occurs. Yet neither the past nor the future has power over the ideal mind. The ideal mind uses other eras of time to serve and enhance the present. Part of forgiveness is refusing to permit another person's past actions to continue hurting you.
I'm going to stop here. There are many other fine mental ideals, I know, but I tried to choose four that were as uncontroversial and general as possible. Regardless of the many differences between the factions of the world, this concept of the ideal mind is common to all, even if it only takes concrete shape in a precious few individuals in a given society. It seems to me that any religion, belief system, morality, self-help regimen, or life principle worth listening to will foster development toward the ultimate end of personal growth outlined above: the ideal mind. Also keep in mind that in the vast majority of cases, strategies for the cultivation of the ideal mind fail not because the strategies are defective, but because the person involved chooses to fail. On the other hand, any specific strategy or system may overemphasize some attributes, leaving its acolytes to infer what counterbalances are necessary in the practice of the ideal mind. And sooner or later the practitioner will discover complex unanswered questions and situations that must be confronted individually. Like some of the other vitally important intangibles of the human condition, the ideal mind is easier to sense "in the wild" than through words. That being said, the human search for the answer to Life, the Universe, and Everything has been ongoing for a looooong time, so it's foolish to disregard what others have discovered before us.

Thursday, December 28, 2006

musings on Freakonomics

I admit to not reading a lot of nonfiction, especially if it is unrelated to my career. But recently, when I was dragged to a *-Mart, the cover, title, reviews, and introduction of Freakonomics pulled me in, hard. The book's approach alone, applying the data mining and incentive-based reasoning of economics, intrigued me, to say nothing of the colorfulness of the topics and conclusions. And I think it should go without saying that rethinking old assumptions through a high-level, objective analysis is a worthy goal, at least for anyone seeking the whole truth.

On the other hand, I can also see some weaknesses here. First of all, the reliance on statistics, while being the foundation of the book's claimed authority, can make for dubious evidence. There are three kinds of lies: lies, damned lies, and statistics. Dizzyingly intricate data manipulation can certainly illuminate patterns in a particular sample, but the more the data must be "massaged" and interpreted to reach an answer, the less convincing the evidence becomes. Second, the disagreements between economists when discussing even their own subject matter don't inspire confidence in bringing other topics under the same treatment. To be fair, their attempts to decipher the behavior of something as mind-bogglingly complex as the economy are admirable. It is unfortunate that they can't rely on direct experimentation the way that sciences, such as psychology, can.

Third, their picture of humanity is, of course, simplified. As others on the Web have commented, this book's subject matter is more like that of sociology and psychology than traditional economics. As I understand it, economics treats people as rational agents who balance their decisions in order to maximize profit and minimize cost. (I'm reminded of some comments that Yul, the last Survivor winner, made about one of his competitors, Jonathan. Yul said that he could predict Jonathan's actions simply because Jonathan was a rational player out to win the game for himself. In this sense, Yul was thinking like an economist.) This model assumes a competition is in place for the available payoffs, but there are other ways that people interact, such as sharing, collusion, or the extreme opposite: self-sacrifice. Sociologists would place greater emphasis on these other modes. The economic view would fit with the idea of government being a "social contract" in which free citizens cede some powers in order to have other needs filled.

The book acknowledges this third weakness by describing social and moral "incentives" (so it's more blessed to give than receive only because the giving act benefits the giver in some way?). Essentially, the economic model of human behavior is not falsifiable. If people act to gain positive incentives and avoid negative incentives, then an incomplete analysis of any given situation merely indicates that the analyst needs to consider more incentives. Funny thing is, I'm pretty sure I've encountered this idea before: radical behaviorism. Push a lever, get a fish biscuit. The novel Walden Two presented a utopia brought about with those methods. In case it's not clear, I find that philosophy highly distasteful. Higher brain functions are A Good Thing.

Freakonomics doesn't duck such objections. It invites them. In fact, one of the points made by the book is that the decision to cheat or steal, i.e. breaking rules in order to obtain a resource, is like any other (economic) decision: it simply has strongly negative social and moral incentives. Since everyone acts economically, most everyone cheats, but only when it makes (economic) sense: for instance, when the negative social incentive is infinitesimal because of an infinitesimal chance of being caught, or the negative moral incentive is puny because the otherwise rightful resource holder can "afford" to not have it. Truly, someone must have a firm, crunchy moral center indeed to turn down those opportunities. Or someone can try to engineer additional incentives against cheating, like the designers of Walden Two. It may be true that most people have moral limits, and "every man has his price", but that doesn't mean I must like it or condone it. A utopia would work well, if we could only manage to get around the people problem.

Monday, December 18, 2006

relational database as inference engine

A short while ago, I was reworking an SQL query to ensure that it would yield the precise results the rest of the application expected. (Side observation: my conscience kept hollering at me to make the SQL simpler and just crunch the returned data as needed. Shut up, Jiminy! Go spell encyclopedia and leave me alone.) I noticed that I was adding, taking away, and rearranging WHERE conditions to match particular rows. But matching to specific data based on a generalized set of characteristics is also an activity performed by inference engines! It's not that far of a leap to equate database rows to facts, SQL queries to rules, and newly-inserted database rows to concluded facts or assertions. I wouldn't be surprised if the academics in these two camps have been cross-pollinating ideas for some time. If I knew substantially more than jack about real relational database theory, I could offer some insightful comments. Instead, here's an example.

Imagine a set of widgets available for sale. A white widget has no additional cost, a black widget costs an additional 400, and a green widget costs an additional 200. The base cost for round widgets is 100, for square widgets 200, and for triangular widgets 150. How much does a round, black widget cost? An inference engine might have the set of rules (color white) => add 0, (color black) => add 400, (color green) => add 200, (shape round) => set 100, (shape square) => set 200, (shape triangle) => set 150. Then one could add the facts (color black) and (shape round) and have the answer. Or at least the necessary addends.
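To make that concrete, here's a toy sketch of those rules in Python. The rule tables and the widget_cost function are my own invention for illustration; a real inference engine would pattern-match facts against rules instead of doing dictionary lookups, but the firing behavior comes out the same.

    # Toy version of the rules above: the "set" rules become a base-cost table,
    # the "add" rules become a surcharge table. Names are made up for this sketch.
    BASE = {"round": 100, "square": 200, "triangle": 150}   # (shape X) => set N
    SURCHARGE = {"white": 0, "black": 400, "green": 200}    # (color X) => add N

    def widget_cost(facts):
        """facts is a dict such as {"shape": "round", "color": "black"}."""
        cost = BASE[facts["shape"]]         # the matching "set" rule fires
        cost += SURCHARGE[facts["color"]]   # the matching "add" rule fires
        return cost

    print(widget_cost({"shape": "round", "color": "black"}))   # 500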

A (SQL-compatible) database could do a similar operation. One particular combination (or should I say tuple?) of facts becomes a row in table "facts". The rules become: (select 0 from facts where color = "white"), (select 400 from facts where color = "black"), (select 200 from facts where color = "green"), (select 100 from facts where shape = "round"), (select 200 from facts where shape = "square"), (select 150 from facts where shape = "triangle"). Add a row to "facts" that has a "color" column of "black" and a "shape" column of "round", run the queries, and there's your answer. Or at least the necessary addends.
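For what it's worth, this version runs as written against SQLite (via Python's built-in sqlite3 module); it's just a sketch of the example above, with the "facts" table and the six rule queries spelled out literally.

    # The "facts" table and the six rule queries, run against an in-memory SQLite
    # database.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE facts (color TEXT, shape TEXT)")
    db.execute("INSERT INTO facts VALUES ('black', 'round')")

    rules = [
        "SELECT 0   FROM facts WHERE color = 'white'",
        "SELECT 400 FROM facts WHERE color = 'black'",
        "SELECT 200 FROM facts WHERE color = 'green'",
        "SELECT 100 FROM facts WHERE shape = 'round'",
        "SELECT 200 FROM facts WHERE shape = 'square'",
        "SELECT 150 FROM facts WHERE shape = 'triangle'",
    ]

    # Every rule query that matches the facts row contributes one addend.
    addends = [row[0] for q in rules for row in db.execute(q)]
    print(addends, "=>", sum(addends))   # [400, 100] => 500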

The similarities become more striking when you use a database table in the usual way, i.e., using a row to represent one member out of a collection of similar entities. To refer back to the previous example, this just means renaming the "facts" table to "widgets" and breaking non-widget facts out into separate tables (the usual normalization). SQL queries that match multiple entities at once will employ JOINs. In the inference engine, an entity would look like (widget (shape square) (color green)), and I assume that rules that match multiple entities would work about the same as rules that match multiple individual facts.
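Here's one guess at what the normalized version might look like, again using SQLite from Python. The widgets/shape_costs/color_costs layout is my own choice, picked only to show the JOINs standing in for rules that match several facts about one entity.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE widgets     (id INTEGER PRIMARY KEY, shape TEXT, color TEXT);
        CREATE TABLE shape_costs (shape TEXT, base INTEGER);
        CREATE TABLE color_costs (color TEXT, surcharge INTEGER);
        INSERT INTO shape_costs VALUES ('round', 100), ('square', 200), ('triangle', 150);
        INSERT INTO color_costs VALUES ('white', 0), ('black', 400), ('green', 200);
        INSERT INTO widgets (shape, color) VALUES ('round', 'black'), ('square', 'green');
    """)

    # The JOINs play the part of a rule matching multiple facts per entity.
    query = """
        SELECT w.id, s.base + c.surcharge AS cost
        FROM widgets w
        JOIN shape_costs s ON s.shape = w.shape
        JOIN color_costs c ON c.color = w.color
    """
    for widget_id, cost in db.execute(query):
        print(widget_id, cost)   # 1 500, then 2 400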

As to whether it makes practical sense to map a relational database to an inference engine or vice versa, I'm inclined to think not. If your problem domain is rigid enough to work in a database, then there's no gain in using an inference engine instead. If your problem domain is a classic example of AI, a rather fuzzy attempt to capture a variety of heuristics working on loosely-structured data, then the database details would bog you down. To say nothing of the limitations (and proper place) of SQL.

Thursday, December 14, 2006

arithmetic unplugged - the abacus

Somewhere in the intersection of the sets "unnecessary activities", "math-related stuff", "exotic devices", "cheap equipment", and "skills requiring long practice" is "the use of an abacus". How could a frugal ComSci-major/Math-minor with way too much time on his hands resist?

I bought a surprisingly small but quite usable 13-rod Japanese abacus or soroban on eBay for cheap, and also an old but lightly used copy of The Japanese Abacus: Its Use and Theory by Takashi Kojima (ISBN 0-8048-0278-5, although I later stumbled on a page with links to a verrrrry similar PDF). If I had the requisite desire and/or discipline I would probably be pretty good at using my soroban--I'm not. Part of the lack of motivation stems from the fact that I have no practical reason for improvement; after all, my cell phone (the same one that once made me feel like I had the power of the sun in the palm of my hand) has a calculator function. Nevertheless, I'm progressing slowly.

On the soroban, each rod has one 5-unit bead and four 1-unit beads, with a separator between the fiver and the rest. This means that the soroban is primarily for regular decimal-based applications, although I suppose one could use just the 5-beads and do binary math if one wanted to (setting the 5-bead would represent a 1, and each rod would represent a power of two). Note that more than 6 or so items in a group can be hard to accurately recognize and distinguish at a glance, so more beads might actually make calculations slower, because they would force the operator to laboriously count bead by bead.

One exceedingly simple drill for beginners, one that I haven't seen mentioned anywhere, is just to add a single-digit number to itself a set number of times. It's easy to tell when you've made a mistake: merely check whether the result is a multiple of the number. You can also start with a high multiple and perform a series of subtractions. The point of this drill, which clearly would never be performed with an abacus in a real situation, is to increase your speed. In my opinion, it makes sense to get really good at this drill before adding and subtracting two-digit numbers, which seem to be the usual starting exercises for the abacus.

As the little book explained, the abacus procedures simplify a calculation by breaking it up into lots of rapid, little single-digit calculations. The usual pencil-and-paper method is almost exactly like the abacus method (in spirit anyway), but with some significant differences. The first, of course, is that the paper method involves writing a problem out, while an abacus operator just flicks his fingers to shift some beads--a considerably simpler if not faster motion. The second difference is that numbers in the paper method are distinguished by having differently shaped symbols, while numbers on an abacus are actual quantities of beads--numbers you can feel. Hence the usefulness of the "toy" abacus in teaching children.

The third and most confusing difference is that the paper method is free-form or malleable, while an abacus cannot be "rewritten" to have more or fewer beads in any given case! On the abacus, "carrying" and "borrowing", meaning the overflow or underflow of one power of ten to another, are achieved by making a one-bead adjustment in the higher power and then offsetting that adjustment by moving beads in an inverse operation in the lower power. You undo the excess. For instance, to add 4 to 8, you:
  1. Check to see if you have enough beads on the rod under consideration. Since there is only one 1-bead left (8 is one 5-bead and three 1-beads), proceed to step 2.
  2. In step 3, you will add a 1-bead on the tens rod to the left, which will be too much by 10 - 4 = 6. So subtract that 6 from the unit rod, that is, clear the 5-bead and one 1-bead, leaving two 1-beads.
  3. Add the 1-bead in the tens rod. By not doing this until last, you can keep your attention on the unit rod for both steps 1 and 2. Otherwise, you would focus on the unit rod, switch to the tens rod, then switch back to the unit rod to undo the excess.
This could also be thought of as adding 10 and a -6 (or subtracting 6). According to the little book, the key to doing this quickly is to think of numbers in terms of "complementary" pairs: 9 and 1, 8 and 2, 7 and 3, 4 and 6, 5 and 5. If there is power-of-ten overflow or underflow involving one of the numbers in a pair, just do the opposite operation with the other number in the pair. It gets easier with practice, believe me. The same strategy applies to adding or subtracting a 5-bead when you run out of 1-beads (e.g., 3+3), except there are only two pairs: 4 and 1, 3 and 2. The real kicker is when you have a problem like 13 - 6, in which you need to add the "tens complement" of 6 (i.e., 4) to the unit rod, but you can't do that unless you move a 5-bead and subtract the "fives complement" (i.e., 1) from the unit rod. You convert a subtraction to an addition to a subtraction as far as the 1-beads on the unit rod are concerned. Have I mentioned that effective use of the abacus takes some concentration at first until it becomes "automatic"? Try not to overthink it. There are also techniques for multiplication, division, and roots, but I've only skimmed those so far. It appears that the comparison to the paper method is again apt, as the simplifying principle is the distributive property: reducing a complex multiplication to a sum of one-digit multiplications.
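Out of curiosity, here's the complement trick as a few lines of Python. This is my own toy model, not anything from Kojima's book: each rod is collapsed to a plain digit, so the 5-bead/1-bead split disappears, but the "add a bead on the next rod up, then undo the excess" motion is the same as in the example above.

    # Toy model of soroban carrying: rods[i] is the digit on rod i (rod 0 = units).
    # Assumes the rods list is long enough that the carry never runs off the end.
    def add_digit(rods, rod, n):
        """Add the single digit n on the given rod, complement-style."""
        if rods[rod] + n <= 9:
            rods[rod] += n               # enough beads: just add them
        else:
            add_digit(rods, rod + 1, 1)  # carry: one bead on the next rod up...
            rods[rod] -= 10 - n          # ...then subtract the tens complement of n

    rods = [8, 0, 0]                     # the number 8
    add_digit(rods, 0, 4)                # add 4, as in the example above
    print(rods)                          # [2, 1, 0], i.e. 12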

The freaky conclusion of abacus training is the operator becoming able to do abacus manipulations on an imaginary abacus, enabling savant-like mental calculation. I don't plan to reach that point for a looooong time, but here is an incredible account that I'm not sure I believe. If the story's completely true, I might like to hire him as my mentat (I'll get that Leto!). Here are some of the more useful links I've found.

Friday, December 01, 2006

If House visited Scrubs

I regularly watch House and Scrubs. Both shows take place mostly in hospitals, but Scrubs is more likely to have scenes elsewhere. Scrubs seems to have more minor and recurring characters than House, which mostly revolves around a few people (The Simpsons has far more than both put together, but since tremendously talented voice actors can do multiple cartoon characters, this is no surprise). House is about complicated diagnoses; it's the medical equivalent of a mystery or "whodunit", while Scrubs is more about the daily and routine trials of hospital work. House is a drama that often has (acerbic) funny moments. Scrubs is more of a comedy with token dramatic moments. Scrubs has its share (or more) of schmaltz, but House is stingier in that regard. Most prominent of all, in my opinion, is that House takes place in a realistic setting, and Scrubs takes place in a surreal setting (including J.D.'s imaginative mind).

What I find fascinating is the huge contrast in tone. Consider this thought experiment or scenario: J.D. can't figure out what's wrong with a patient. He calls Dr. Cox. After calling J.D. Rhonda and launching into a drawn-out tirade about how he isn't a genie to be summoned at will by the incompetent, Dr. Cox decides to bring in the renowned Dr. House. Dr. Kelso meets Dr. House at the door, smiling widely and generally trying to flex his political know-how. House nods politely until Kelso is done talking, then nails him with a surgically-precise insult. Dr. Kelso raises an eyebrow in surprise that someone stood up to him so well, then walks away in a huff. If J.D. watches this exchange, there may be a daydream sequence in which House and Kelso fight a Western quick-draw gun duel. When House arrives at the patient's bed to deliver his first cynical comment about the root cause, the patient immediately starts convulsing and coughing up blood, because that's what House's patients do. Carla stabilizes the patient with her usual quick action. When House orders her to perform a painful test, she flatly refuses before giving House unsolicited advice on his lack of caring. House gives the standard retort that "all he's trying to do is save the patient's life", to which Carla responds by telling House to do it himself. If the test is highly expensive, House may also need to cajole Dr. Kelso into allowing it at all. Later, House concludes that some surgery is needed, so Turk comes by to receive instructions. In the ensuing conversation, House makes a racial joke, but it flies completely over Turk's head. At meal time, House sits with Dr. Cox to fill him in on his diagnosis so far. He starts by making a poor analogy, then waits for Cox to ponder what he's saying. Cox makes his own poor analogy between doctors who don't say what they mean and women with deceptive breast implants. House mocks Cox for being so slow, and Cox mocks House for treating his work like a game. Eventually they manage to communicate. Towards the end, Jordan joins Cox at the table. House includes her by indulging in some thinly-veiled innuendo. Jordan not only fails to be embarrassed by this, but follows it up by insulting House's masculinity in some way. As all this is going on, there is a subplot involving J.D. and Elliot, but nobody cares because the guest star is more fun. In the end, the patient may or may not die because this is Scrubs, but in any case it won't be because House was wrong. With well-hidden enthusiasm, House will leave, overjoyed to return to a hospital where people take things, such as Dr. House, more seriously.

Sunday, November 26, 2006

peeve no. 244 is breaking upgrades

Preface: I'm a little hesitant to embark on this rant, because I'm fully aware that I may come out looking like an imbecile, n00b, or [insert preferred term of mockery here]. But then I remember that a written rant is the thinking man's version of a gorilla tantrum, so any ranting doesn't flatter me anyway.

Until a few days ago, I was running Debian testing ("etch"--kudos on trying to get the next release out sooner). Well...more or less. My installation started out quite a while ago as Mepis, but over time I replaced almost everything with plain Debian. I used some packages from Debian unstable (Sid) here and there. Then I read that many users run the unstable distribution, it doesn't break as much as you might think, it always has the latest and greatest, it's not quite as frozen as testing close to a release...so I pointed my apt to pure unstable (can you guess where this is going?).

I upgraded a few of the packages I care about most and enjoyed the new features. Then I saw an upgrade that required udev...one of those "program A depends on framework B depends on infrastructure C" cases that make me glad for automatic dependency handling. I was running hotplug, because that was part of the initial install (the kernel was a distro-patched version 2.6.12, I believe). I waited until I had a lengthy, contiguous block of time, and let the upgrade happen. Oooops. The udev package made it very clear that it wanted to run on a later kernel. This might faze some folks, but kernel schlepping is not new to me considering I ran Gentoo back before it offered any precompiled kernel at all, so I installed the Debian meta-package that depends on the latest kernel, made sure the necessary bits were in my grub config, and rebooted.

This wouldn't be a rant if it had worked. The kernel ran through the boot process, appeared to load the modules it should, and started executing items in the initrd. End result: something similar to "can't mount /dev/hdb3 on /". The filesystem on that partition is ext3! I started to feel the primal rage that comes on whenever, from my perspective, software refuses to do as it's told. I rebooted back into the other kernel. I searched the Web for similar troubles. I'm inclined to blame udev for not doing its job, or maybe the initrd creation tool. I fell into a cycle of fiddling with the packager or rerunning a tool, rebooting, calling out dirty names, and repeating the process. After I had enough, I decided to admit I had no clue what the cause was. Some unknown magic smoke on my system wasn't magically smoking like it should. I was desperate. My new kernel couldn't even get me to a real shell. I humbled myself before apt, and upgraded the world.

Afterwards, the new kernel still wouldn't boot, the old kernel booted and initialized fine, and pretty much everything in userland was down for the count. No X, no network access...now a broken guy with a broken system, I copied the important stuff on my / partition to my home partition, tossed in one of my LiveCDs long enough to download and burn an iso, and then installed Ubuntu. I have vowed to confine my repository list to the basics, because breaking updates suck huge donkey balls. Apologies for any unpleasant mental images.

Bonus rants: It's been an interesting change going from Konqueror to Nautilus (is it still called Nautilus?), amaroK to rhythmbox, kwm to Metacity. Here are my complaints:
  • The volume/mixer applet for the KDE panel showed levels for several outputs, and I could adjust one of the levels just by clicking or dragging the level bar in the panel. The Gnome applet appears to require clicking on the icon, waiting until a slider pops up, then moving it. And if you want to change something other than Master, double-click it to get a complete set of sliders.
  • amaroK's panel icon dynamically changed as the current track progressed. So a quick glance at the icon was sufficient to tell how far in the track was. You don't miss this feature until you've had it and then been without it. It would also be nice if rhythmbox had the "stop after playing the current track" command that amaroK has.
  • In KDE, I could assign a permanent keyboard shortcut to a window. For instance, I could assign a shortcut to my browser window on desktop three, a different shortcut to amaroK on desktop one, and switch from one to the other with a single multi-key combination. (I like to use the "Win" keys for this purpose). Gnome has a window-selector panel applet that pops up a list of all open windows (Alt-Tab merely cycles between windows on the current desktop) on click, but that requires mousing over and clicking on the applet, scanning the window list, and clicking on the desired window. When I was going through my minimalist phase (on Gentoo), I could use and assign keyboard shortcuts for what seemed like everything in IceWM (don't mention Fluxbox, I'll slug you). Surely Metacity can do the same?

Tuesday, November 21, 2006

subjective impressions of Objective-C

After I read a comment on some blog expressing high praise for the features of Objective-C, I've been going through some introductory materials found by Google. I must say that the experience has been eerie. It's almost as if someone were storing C and Smalltalk on the same volume, there was some corruption, and the recovered file(s) ended up mashed together. I confess to knowing about as much about Smalltalk as Objective-C (that is, close to nil - HA, I made a funny!), but the resemblance is obvious. I think I have a clearer understanding of how people can assert() that Java is C++ trying to get back to its Smalltalk roots, but not quite getting there.

First, the parts I like. The Categories capability is intriguing. It reminds me of the roles that are part of Perl 6. I also like the choices the language offers: either use objects with a type of "id" and send messages to whatever object you like, or use static object types and Protocols when that level of dynamic behavior is undesirable or unnecessary. Even better, Objective-C inherits the high-performing "close-to-the-metal" compilation of C, with an extra library that handles the fancy object tricks at runtime. I kept wondering why I hadn't heard more about this language, and why it didn't seem to be in more widespread use (by the way, I own nothing made by Apple).
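For readers who, like me, know Objective-C mostly by reputation: the closest everyday analogy to Categories I can think of is adding a method to an existing class after the fact. Here's that analogy sketched in Python rather than Objective-C; the class and method names are invented, and Python-style monkey-patching is only a rough stand-in for what Categories actually do.

    # Rough Python analogy for an Objective-C Category: bolt a new method onto a
    # class you didn't write, without subclassing it. Names here are made up.
    class Widget:                       # pretend this class comes from a library
        def __init__(self, name):
            self.name = name

    def shout(self):                    # the "category" method
        return self.name.upper() + "!"

    Widget.shout = shout                # attach it to the existing class

    print(Widget("gadget").shout())     # GADGET!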

Then my reading uncovered several justifiable reasons why Objective-C didn't hit the big time, at least on the scale of Java or C++. It doesn't have a true standard, which means that the chance of multiple entities implementing it is correspondingly lower. On the other hand, one open implementation can act as a de facto standard, so this criticism may not apply to GNUstep. Another problem for me is the syntax. Punctuation ([ : - +) is used in ways that I've never seen before. Perl is worse in this way, and I suppose that programmers who seriously use Objective-C become accustomed to it. Something else that bugs me is Objective-C's strong association with specific framework(s). A standard library is fine, of course, in order for a language to be useful, but I expect there to be competing libraries or toolkits for anything more complicated. I also wish that Objective-C were implemented on a common platform (Parrot, CLR, JVM), which would get rid of this issue. But then Objective-C would be competing with languages that have dynamically-typed OO as well as convenient syntax niceties that elevate the programmer above C's level. Frankly, although Objective-C is fascinating to study, I don't think it fits any important niches anymore, except possibly the niche currently occupied by C++. If you need Smalltalk-like abilities, use Smalltalk. If you just want to crunch numbers or strings at a high level of abstraction, use OCaml or Haskell. If you deeply need code that performs well, use C with a good compiler or even assembly. If you just need to solve a common problem, use a scripting language.

Here are some of the links I found:

Saturday, November 18, 2006

the existence of mathematical entities

I've started reading The Road to Reality by Roger Penrose. I'm not far into the book at all, which stands to reason since I'm reading it slowly and in small doses. I was surprised to read his view that mathematical entities exist as Platonic ideals, which in effect seems to mean that their existence is neither physical nor subjective. (It reminded me of the three Worlds described by Karl Popper.) In contrast, I believe that mathematics is a human creation having no independent being. That is, mathematics exists in the minds of people. It has the same claim to reality as Arda.

But you may argue, "Numbers are self-evidently real, at least because we use them to count objects and then accurately operate on those quantities". Say that I accept your argument for the time being and grant you all the whole numbers. I can do this because the whole numbers and simple 'rithmetic make up only a small sliver of modern math. Consider complex or "imaginary" numbers, which involve the square root of -1 (i), or even just negative numbers or 0. Historically speaking, these concepts did not come easily to humanity. Are these more abstract math concepts still applicable and useful, in spite of perhaps not having direct analogues to normal experience? Sure, but that's not the point. The point is to realize that from the logical standpoint of the entire math system, all numbers, whether whole or zero or negative or complex, as well as all equations and functions, have the same degree of "existence"; all belong to a system of axioms, definitions, postulates, theorems, etc. And this system was not discovered by people. It was invented, bit by bit, by starting with elementary ideas and then extending those ideas. 1 did not exist until a person wanted to represent how many noses he had.

One of the important ways in which the human creation of mathematics differs from the human creation of Rumpelstiltskin is that math proceeds logically (whereas Rumpelstiltskin isn't that logical at all). This means that mathematicians assume the truth of as few statements as they can get away with, and then show that putting those statements together leads to everything else they need. Hence the obsession of mathematicians with proofs--proofs mean that someone can rely on the truthfulness of conclusion Z13(w) provided that same someone relies on the truthfulness of some definitions and axioms. Proofs make the implicit truth explicit. The discoveries of mathematics, because the discoveries come about logically, consist of finding out what was implicitly accepted as true all along. So pi, or e, or i, are not ideal entities that exist eternally, waiting for someone to find them. That is, not like the speed of light, c. Mathematical entities, even constants, are just consequences of certain definitions. In effect, no humans created or fully comprehended the infinitely deep Mandelbrot set; they simply defined a set by defining a function that operates on complex numbers as previously defined. No human has counted every number even in the set of integers, either, but the set is infinitely large because it is defined that way.
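To make the Mandelbrot example concrete, the whole infinitely detailed set falls out of one short definition. This is just the usual textbook formulation, nothing particular to Penrose:

    % The Mandelbrot set: all complex parameters c for which iterating
    % z -> z^2 + c from z = 0 never escapes to infinity.
    M = \left\{\, c \in \mathbb{C} \;:\; z_0 = 0,\; z_{n+1} = z_n^2 + c,\;
        \text{and } |z_n| \text{ stays bounded as } n \to \infty \,\right\}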

Math does not originate or exist in the (completely objective) physical world, although the physical world displays enough order that mathematical models can correspond to it with varying degrees of accuracy. Math also does not originate or exist in a (completely objective) hypothetical world of Platonic ideals. Math is all the logical consequences of specific definitions, and the definitions in turn are/were developed by humans who used their powers of abstraction to create logical ideas out of physical reality. The same thought processes that enable generalized language (the word and representational concept of "chair" as opposed to the specific instance I'm sitting on at the moment) also enable the creation of mathematical entities. And logic, the process of combining true (abstract) statements into new true (abstract) statements, enables anyone to discover what they believed all along.

blog updates

I switched this blog over to the new "beta", and then I applied some labels (or tags?) to the posts and tried out the blog customization. Not too shabby. Now the WordPress blogs don't make me feel quite as inferior.

Here's some further explanation about the subject each label represents, with the caveat that this list will soon be out of date:
  • .Net/F#. (ASP).Net or F#
  • Blog Responses. A response to a post on another blog
  • Metaposts. The blog
  • Philosophical Observations. Sincere but amateurish remarks about (what passes here as) deep philosophical topics
  • Rants. Rants that serve no constructive purpose
  • Reviews. Reviews of movies, books, TV
  • Software Development. Techniques, debates, and general observations of software development and programming languages
  • Software Explorations. My investigations into existing software, or any original (to me) ideas for/about software
  • Trivial Observations. Miscellaneous topics and comments

Thursday, November 16, 2006

the power of the sun in the palm of my hand

I use my mobile phone pretty much just for phone calls. Color me odd. But when I saw that a mobile SimCity was available, I couldn't resist giving it a whirl, if nothing else for the sake of nostalgia.

I bought SimCity Classic off the store shelf a while ago, copyright 1989, 1991. The box I still have is the same one pictured in the wikipedia article. In the lower left corner it lists the system requirements: "IBM PC/XT/AT/PS2, Compatibles", "Supports EGA, CGA, Hercules mono and Tandy Graphics", "Requires 512K (EGA 640K)", "Mouse and Printer Optional". I ran it on an XT-compatible with CGA graphics and 512K, and no mouse. It was DOS, of course, so the game had its own windowing system, along with a menu option to only animate the top window to allow better game performance. Here's a line from the manual: "Simulator reaction time is also greatly affected by your computer's clock speed and type of microprocessor. If you have an XT-compatible running at 4.77 MHz, life in SimCity will be much slower than on a 386 running at 33 MHz." Its copy protection was a red card with cities and population data. On startup, the game would ask for a statistic off the card. Any product I purchased from Maxis seemed to be reasonable on the issue of backup copies and copy protection. Considering the number of times I experienced a floppy disk failure, it was only right. Later versions of SimCity lost my interest, since the family's computer never kept current enough to play them.

Enough reminiscing. The mobile version I played had similarities to what I remembered, but actually was more complex in a few ways (to start with the obvious, a much wider range of colors). And the graphics weren't top-down, but perspective-drawn, as if looking down at the city from a high oblique angle - the kind of view from a helicopter. The game was just as responsive running on my cell phone as it had been on my XT-compatible, maybe more so. My point is that technological advances happen so gradually that it can sometimes take a common point of reference--in this case, the game SimCity--to illustrate the great difference. My puny phone can run a game that is very similar to a game that ran on a desktop, and the phone version is even enhanced. It feels like I have the power of the sun in the palm of my hand! Almost makes you wonder if all that speculation about the Singularity isn't a huge load.

Along the same line, I ran Fractint when my family finally had a 386 (no coprocessor, which sometimes was a pain when running this program). It was an outstanding program created through the collaboration of many coders working over the Internet, or maybe it started on CompuServe for all I know; either way, Fractint must have been the first time I'd heard of people doing that. By the time I came around to it, Fractint's feature list was incredible. In addition to many fractals and even fractal subtypes, the display parameters could be tweaked to do a slew of wild effects. It also offered a long list of video modes, any combination of resolution and colors one might want. Rather than sharing the generated images with others, the user could simply use a menu option to save the Fractint parameters that resulted in a given image and send the parameter file to another Fractint user, who could then use the corresponding menu option to load the parameters into Fractint and regenerate the image. Many of the fractals have informative documentation screens that explain the origin of the fractal and its calculation method. I could just go on and on.

As you may guess, I've kept my DOS Fractint files around just like the way I kept my SimCity materials. Any sentimental coot knows that Linux has a project, DOSEMU, that can run (Free)DOS excellently. DOSBox may be better for games, but I have had no trouble running Fractint with DOSEMU. After adding the line "$_hogthreshold = (0)" to my /etc/dosemu/dosemu.conf, which seems to be the equivalent of saying "sure, you can have my CPU, I don't need it for anything!", DOSEMU generates even the worst, most-complicated fractals of Fractint so quickly it's not even funny, in spite of the layer of indirection. Having to wait to see your fractal was such a large part of the experience...if all programs were written to perform as well as Fractint, latency in software would be a laughable concern. Here in my room is a computer that has so much more processing power than that 386, and it mostly sits idle, or whips through a web page render, or decodes a media file. It's a criminal case of waste.

Tuesday, November 14, 2006

First rule of proselytization is to respect the infidel

The idea behind the clash of civilizations post was that the Java (and C#, etc.) folks have different values than the dynamic languages camp, so arguments that are completely convincing to one side are considered irrelevant by the other side. They talk over each other's heads.

However, if someone wants to try to talk across that gap, and possibly help someone cross, I think that insults are not a good start. In defense of J2EE's complexity addresses the layered, many-pieced architecture of J2EE from a historical point of view. As a general rule, technology (especially software) is invented, popularized, and evolved to serve a specific need, even if that need may be quite broad. J2EE wasn't an accident! The problem with this blog entry is the argument at the end: that software engineering should be complex enough to keep out BOZOs. chromatic called him on it over on his O'Reilly blog, because, obviously, there are many reasons for a non- or anti-BOZO to use a simpler technology.

An example of a good way to communicate with your intellectual opponents is this post by Sam Griffith. He's the same writer who previously expounded on a Java macro language, which I linked to from my MOP post a while ago. He gives Java its credit, but at the same time explains how Java could learn some things from other languages/platforms that had similar goals. Another good example is the Crossing Borders series over at IBM developerWorks (even if it exhibits the annoying trait of presenting non-Java content in a Java-centered context). Each page in the series demonstrates an alternative approach to the common Java way of accomplishing a task, and then it compares and contrasts. Some of the articles, like the one on Haskell, honestly don't seem to offer much for the Java developer, and the one on web development seems to basically suggest that we should go back to embedding straight Java code in our page templates. But the one on the "secret" sauce of RoR is enlightening.

Personally, I often read these articles with a half-smirk, because the pendulum swing here is clear: we started out writing web stuff in C, then switched to Perl, then Java, and now back to Python/Ruby/PHP or what have you. The other reason for my smirk is that I'm now forced to work in ASP.Net anyway. But if Microsoft can do anything, it's copy and clone, so there's work afoot to use IronPython for more rapid web work.

Saturday, November 11, 2006

the SEP field of social interaction

Disclaimer: I am not a serious student of sociology, although I have had a spiffy liberal arts education. Moving along...

A post about mockery and now this. When I started this blog, I assumed it would mostly revolve around my chosen specialty/vocation. What can I say? I haven't had much to write about lately, tech-wise. I do have the following tidbit of commentary. The agreement between Microsoft and Novell about SUSE has inspired me to rewrite the well-known Gandhi quote "First they ignore you, then they laugh at you, then they fight you, then you win" as "First Microsoft ignores you, then Microsoft spreads FUD about you, then Microsoft pretends to peacefully coexist with you, then Microsoft or you crushes the other". Not as poetic, but accurate.

Enough preamble ramble. My experiences in society have reminded me of the Hitchhiker's series once more. This time, it's the Somebody Else's Problem field. The SEP field accomplishes the same effect as a typical invisibility field, but by masking everything in the field as "somebody else's problem". It exploits the natural human (and alien?) impulse to not take responsibility for something unless necessary.

An SEP field-like effect can occur in social interactions, too. An SEP field social interaction consists of someone not acknowledging his counterpart's very existence. As I once heard someone joke, "His existence is not pertinent to my reality". The SEP field effect is not the same as ignoring someone else, because that would imply that the other person was noticed and then disregarded. It is also not the same as indifference, because that also implies an uncaring awareness of the other person. Within the SEP field, people may as well not exist at all, because in either case the impact is precisely null. In fact, this could be a useful detection method for social SEP fields. If someone's absence is not noted in any way, then clearly an SEP field may be masking that individual when he/she is present.

I should further note that the SEP field of social interaction is not a reliable indicator of cruelty or inhumanity. Simply put, someone would have to acknowledge someone else exists in order for an act of cruelty or inhumanity to occur. Those within the SEP field are not necessarily hated, ignored, low-class, etc. They're just nonexistent to others. I don't know if the SEP field has been explored in sociology, but I'm reminded of the categories of Gemeinschaft and Gesellschaft, where I think of Gemeinschaft as "society in which people interact based on relationships" and Gesellschaft as "society in which people interact based on purpose". I'm more inclined to put the SEP field of social interaction in Gesellschaft, though I suppose it really fits in neither, given that SEP fields negate normal society.

How can one defeat an SEP field? Hell if I know. Oh, wait: Hell is other people. Aw, now I'm just confused.

Monday, November 06, 2006

how I met your Whedon

So, I've been hooked on "How I Met Your Mother" since the first episode. It's just a good show. And it happens to star Alyson Hannigan, who played Willow on Buffy. Then Alexis Denisof, who played Wesley on Buffy and then on Angel, guest-starred in a few episodes. Fun times, but relatively unsurprising considering Alyson and Alexis are married.

The trend continued when Amy Acker, who played "Fred" on Angel, managed to slip into the very last episode of season one. That makes three alumni from Whedon shows. Until tonight, when Morena Baccarin, who played Inara on Firefly, was on as, go figure, a gorgeous rival to Alyson's character. Even the most skeptical viewers would have to admit that these casting choices have gone beyond coincidental. The ultimate topper is that a minuscule role in tonight's episode, an unnamed "Guy", was played by none other than Tom Lenk, who played Andrew on Buffy! It's a conspiracy, I tell you, a conspiracy! Is anyone else seeing this? I feel like I'm taking crazy pills!

This Wednesday, Nathan Fillion, who played some sort of superdemon thing on Buffy and then Capt. Mal in Firefly, will be on Lost. It will also be the last episode before Lost breaks for a while so its production schedule can get caught up again. I'm thinking it will be a good one. I resisted the urge to comment on the last episode. I read online that what transpired was planned from the start, I've learned to trust the writers, and I appreciated the grand significance of it all.

For those who are keeping track, Neil Patrick Harris is gay. Not that there's--oh, I can't finish the line, it's so old and moldy.

Sunday, November 05, 2006

you are who you mock

Disclaimer: I am not a serious student of pop culture studies, media or film studies, sociology, anthropology, etc., etc. Moving along...

Recently I've been paying special attention to mockery in media. Anyone with a passing acquaintance with human nature would guess correctly that the trigger was when I experienced one of the groups or subcultures I identify with being mocked. (The applicable truism is that people find it hard to even care about something unless it affects them personally).

It was on a TV show some weeks ago. I won't delve into that in any greater detail, since this is one of those cases in which specific examples might distract from considering mockery as a general concept. I will add that the remark was well-aimed and well-executed, and clearly hilarious to anyone not taking it seriously. I'm not considering here whether mockery is ethical, justifiable, effective, enjoyable, or necessary (IMNSHO: the context and degree and intention are all pivotal factors). I'm also not considering self-mockery or lighthearted mockery between friends.

For me, the interesting insight I've reached after observing the mockery of one group by another is how it defines the group doing the mocking. Even casual or off-the-cuff mockery broadcasts a group's values, mores, and norms through a huge megaphone, and in more than one sense. First, a sincere mocking reinforces the identity of the group by indicating who the group is not. We are not Them. There is no intersection between our groups, because if that were true then we would be talking trash about ourselves! Second, mockery can indicate what the mocking group fears, hates, or doesn't understand. One of the primary uses for humor is coping. Before mocking another group, I may know full well that they are not all idiots or degenerates; I may possibly be on good terms with someone in that group. But if something about that group disturbs me, I can defang it by laughing at it as if it were a characteristic of an idiot or degenerate. I simply declare it unworthy of uncomfortable consideration. Third, mockery may illustrate how a group measures the status of its members, because mockery consists of bestowing an uncomplimentary status on another group. If my group equates status with intelligence, then you could reasonably expect me to mock the intelligence of other groups. A group that values fashion may mock the clothing of others. And so on and so forth. You don't mock what you don't measure, because the funny part is how low he/she/they measure up. Fourth, the method and manner of mockery can betray a group's concepts of humor or disgust. If a group mocks another group by calling them a bunch of grues, then clearly the group believes that grues are either funny or distasteful. If I mock you by calling you a jackass or an even more colorful metaphor, then I must not think much of jackasses.

You are who you mock. More practically, even mockery can be a possible vector of understanding between groups.

Sunday, October 29, 2006

the Prestige

I saw The Prestige a few days ago, which itself is a somewhat momentous event because I seldom see movies in the theater unless: 1) I honestly can't stand to wait to see it, and/or 2) the movie obviously benefits from huge screen and sound. The Prestige falls in category 1, although it has a handful of impressive scenes that fit 2. The three Lord of the Rings movies, in contrast, were off the scale in both categories--just had to see those in theaters. In the case of The Prestige, the combination of a reunion between some of the people behind Batman Begins, a string of intriguing previews/ads explaining the movie concept, and a set of other people who are fun to watch on screen, Hugh Jackman and Scarlett Johansson and David Bowie and Andy Serkis, proved too irresistible.

All things considered, The Prestige is the most subdued movie I've seen in a while, if you don't count Proof, which was a DVD rental for me. This movie has deaths and injuries, but they're staggered throughout the movie, so the usual desensitization doesn't occur. These dueling magicians don't have it out in some dark alley, at least not for a long time; they're after public embarrassment, complete career ruination, and eye-for-eye retribution. Although the two share a passion for glory and fame, they have different ways of pursuing them. Minor characters get dragged into the story as pawns, including an astoundingly eccentric Tesla, but thankfully the minors also have their own independent agendas. Michael Caine's character tries to lend some levelheadedness to those around him, but his effort is squandered. Everyone schemes and deceives so much that, fittingly enough for a movie about magicians, it's hard to tell when anyone is telling the truth.

They do tell skilled half-truths. The movie's surprises, praised and denounced by critics, don't happen like deus ex machina, but come after oft-repeated clues. The clues are important, because otherwise the solutions at the end would seem like cheats. I mostly figured out what was happening before the big reveals, but not everything. And the parts I didn't think of were related to clues that had made me think "Something must be significant about X, but what?" The end satisfied me.

Apart from the interpersonal struggles and plot machinations, the magic tricks in the movie are fun to watch, either from the perspective of the unwitting audience or the (dis)ingenious magician. Of course, at a time when movie technology has advanced to the point that animals routinely morph into people on screen, audiences are more jaded about what they see. In any recent all-talents-accepted competition, the magicians are usually the first to go. We're accustomed to seeing fake realities on our screens, and aren't impressed. It's nice to have a movie to celebrate a simpler time when magic wasn't taken for granted.

Saturday, October 28, 2006

timelessness of humor in "Great Pumpkin"

I was flipping through the channels, saw "It's the Great Pumpkin, Charlie Brown", and to my surprise, watched almost the entire thing. Apparently, the annual showing of this special is not a fluke. It's good.

You may be thinking, "No, ArtVandalay, you're just easily amused by animation". And, well...I guess my three volumes of Looney Tunes Golden Collection DVDs would agree. Still, I don't watch cheap, crappy animation, unless it has excellent writing. C'mon, you can't sit there and say that having a cartoon character remark "Pronoun trouble" and having it make sense is only intended for kids.

Back to my main point. The only way a special that originally aired in 1966 could keep my attention would be if the humor is timeless. I'll expound on this if I can just switch into Hyper-Analytic mode...(grinding gears) (clunk). There. Here is how the humor is timeless, aside from the mere fact that it's Peanuts:
  • Humor of repetition. One of the rules of comedy is that if you can find something that makes people laugh, you may as well try repeating it. Don't repeat jokes that primarily rely on surprise; it won't work, and you'll appear to be trying too hard. Most people have a finite tolerance for nonlinear humor. For instance, Aqua Teen Hunger Force cartoons start to annoy me after a couple of minutes. Charlie Brown getting a rock at each house, on the other hand, works so well that the four words "I got a rock" are now funny even out of context. Breaking up the repetitions with other kids exclaiming what they got (not rocks) effectively makes Charlie Brown's repetition stand out more.
  • Visual humor. It pretty much goes without saying that references to current events can be mined for funny lines or even skits. Just as obviously, such references aren't funny out of context. Jokes about current events become jokes about historical events in an alarmingly short time. Actually, any jokes that rely on a shared cultural context don't even make sense in other cultures in the same time period. However, humor based on what the audience is seeing seems to be more universal. Dancing Snoopy, other children using Charlie Brown's head as a pumpkin model, or Charlie Brown having trouble with the scissors doesn't require anything from the audience except the capability to react to sounds and sights. This also means that visual humor can amuse people of any intelligence level.
  • Tragic humor. Anyone with a shred of empathy probably feels some guilt about enjoying someone else's misfortune. If you want a fun word for this, you can refer to it as "schadenfreude", although "ferklempt" remains my personal favorite word borrowing from another language. The pettiness inside every person snickers whenever someone else endures tragedy. I think there's also some of this humor in "I got a rock". Charlie Brown is overjoyed after receiving a party invitation. Lucy explains that his name must have been on the wrong list. Linus persists in his rather self-destructive belief in the Great Pumpkin, so everyone else can make jokes at his expense. On a lighter note, Charlie Brown's recurring attempt to kick the football always seems to result in him falling on his back. Seeing people punished for false hope somehow just never gets old--ha ha, what a fool! Peanuts has to be the most depressing comic strip to ever hit the big time. Those who say that real kids aren't that cruel haven't seen enough groups of kids.
  • Humor about childhood. Kids don't think things through logically, and therefore stumble naturally into comical situations. Even better, they think they know more than they do. Linus's cry of "I didn't know you were going to kill it!" when Lucy slices into their pumpkin is a childish thing to say (given how deeply Linus thinks on other occasions, I think there's some inconsistency here). Not necessarily because the pumpkin wasn't a living thing (it was), but because once the pumpkin is out of the patch, it's already dead, and in any case it doesn't experience pain. Jumping into a pile of leaves while eating sticky candy falls in the same category. Lucy's overreaction to accidentally kissing Snoopy also is a case of unwarranted childhood zeal. Snoopy's vivid journey into his own imagination is another childhood trait. This humor doesn't appeal to me as much as it might to others, seeing as how I didn't relate to kids even when I technically was one, but it's certainly timeless, at least in parts of the world where a peaceful childhood is still possible and until the damn dirty apes take over.
  • Reapplying old sayings. This kind of humor seems to be one of the identifying characteristics of Peanuts. For me at least, an overused saying is an overused saying, and none of the instances of this humor provoke more than a momentary smirk from me. It's even somewhat eerie to hear a kid make statements like "clearly, we are separated by denominational differences" or "the fury of a woman scorned is nothing compared to the fury of a woman who has been cheated out of tricks or treats". Hearing kids complain about Christmas being too commercial, in another famous holiday special, also smacks of turning little kids into mouthpieces for adults. Creeeepy.
Maybe it's the nostalgia talking, but I have deep respect for any humor that manages to be timeless, in any medium. We can do far worse than introducing such works to future generations. Should probably wait until they reach a certain age before showing them Some Like It Hot, though.

Thursday, October 19, 2006

impressions of the Lost episode Further Instructions

First of all: whiskey-tango-foxtrot. Double whiskey-tango-foxtrot.

Locke is back. To be specific, the taking-out-local-wildlife, take-charge, island-mystic Locke, as opposed to the button-pressing, film-watching, hatch-dwelling Locke of last season. It seemed like the "faith" torch had passed from Locke to Eko (note that both of these guys have stared into the hyperactive chimney smoke). Er...not quite. At this point, I'm not seeing how the writers are going to explain, in plain natural terms, how Locke et al keep receiving visions that tell them what to do. As I mentioned in a previous Lost-related post, there must be some mind control mechanism at work on this island. Don't forget, Charlie had vivid hallucinations in "Fire and Water" ordering him around, Eko got some guidance from his brother from beyond the grave, Hurley had a convincing series of interactions with someone who never existed, and a previous vision caused Locke to find the airplane that killed Boone. If you want to trace this pattern back even further, recall that Jack's dead father led him to the caves in an early episode in season one.

On a smaller level, I eagerly await an explanation of how people inside an imploding hatch end up outdoors. Why is only Desmond nekkid? Why is he naked at all? Why is Locke mute? Keep asking those questions for us, Hurley and Charlie.

You know, I'm not sure the show needs more characters. Prove me wrong. I wonder how Rose has been lately?

My prediction was that the next time Locke appeared, he would have lost control of his legs because of the hatch implosion. And the corresponding backstory would have shown how he lost control of his legs in the first place. I'm not a writer, but I thought it would have worked fine that way. This show is so kooky. People who get frustrated with the lack of conclusive answers are missing the point. The fascination is in watching the show toss in dumbfounding mysteries all the time, and wondering when the entire tangle will either be straightened out or finally snap into a ruined heap of dangling threads. First season, Locke pounded on the unopened hatch until a bright beam shone out to answer him; second season, we eventually discovered that someone living in the hatch had heard him and merely switched on a light. Here's hoping that the underlying Answers all make as much sense.

Monday, October 16, 2006

update on F#

I haven't used F# much recently, but I've tried to keep myself apprised of it. Don Syme's blog seems like the de facto news site; many of the following links go there. Let the bullet shooting commence.
  • F# has a new "light" syntax option (put #light at the top of the file). With #light, various end-of-line constructs like semicolons or the "in" at the end of a "let" are no longer necessary. Also, groups of statements are delimited by indentation (Python-style). I like the convenience of this option, of course, but my snobbish side feels offended by how it makes ML-style functional code feel even more like the plain imperative kind. So, points for style, maintainability, conciseness, shallower learning curve, etc., but negative points for reducing the tangy esoteric/baroque flavor.
  • Syme has made a draft chapter of his upcoming F# book available for anyone to read. I recommend it as a supplement or even a replacement for the tutorial information floating around the Web. It's a kinder, gentler approach to learning F#. After reading it, I realized that I could simplify parts of my code by using object methods where possible rather than always calling functions from modules (functions in which the first parameter was the object!).
  • There are a couple of F#-related video clips on Channel 9. One is an interview, and the other is a flashy demo. If you already know all about F#, neither of these clips will provide new information. I repeatedly paused the demo clip so I could examine the code on screen. In the interview, Syme is quite diplomatic about the possible real-world uses of F#. While F# may shine at data crunching, it also is a general-purpose language that shares many features with C#.
  • I went to TIOBE after reading that Python had overtaken C#, and I found that, although F# isn't in the top 20 or even 50, it has a specific mention in the "October Newsflash - brought to you by Paul Jansen" section. To quote:
    Another remarkable language is F#. The first official beta of this Microsoft programming language has been released 3 months ago. This variant of C# with many functional aspects is already at position 56 of the TIOBE index. James Huddlestone (Apress) and Robert Pickering draw my attention to F# via e-mail. Later Ralf Herbrich from the X-box team of Microsoft wrote to me "After working with F# for the last 9 months, I firmly believe that F# has the potential to be the scientific computing language of the future." [...] In summary, I think both Lua and F# have a great future.
Regardless of a programming language's merits, qualitative factors like its buzz level, ease of learning, and PR savvy affect its popularity. As others have noted, Python and Ruby had a long history before achieving prominence. In spite of still being just a continually-evolving "research project", F# is well on its way.

Monday, October 09, 2006

again too good not to share

Apparently the Drunken Blog Rants guy has been complaining about overzealous Agile folks. And he continued the discussion with another post, Egomania Itself. (The title is an anagram of Agile Manifesto.) I'm not commenting about the "debate" because I frankly don't care. As long as management doesn't force a rigid methodology on me, and I can get my work done well and on time, and the users are overjoyed, I think that everything is fine. Oh, and pigs fly.

No, the real reason I bring this up is because of the Far Side cartoon reference. Behold! The ultimate example of, as Yegge says, "non-inferring static type systems"!

My stance, if I consistently have one, on the merits of static vs. dynamic typing is as follows:
  • To the computer, all data is just a sequence of bits. The necessity of performing number operations on one sequence and date operations on another pretty much implies that the first sequence is a number and the second sequence is a more abstract Date. Data has a (strong) type so it can make sense and we can meaningfully manipulate it. Don't let that bother you.
  • An interface with data types is more descriptive than it otherwise would be. Especially if your code intends for the third argument to be an object with the method scratchensniff. I shouldn't have to read your code in order to use it. Admittedly, data types still aren't good enough for telling you that, for instance, an integer argument must be less than 108. Where the language fails, convention and documentation and, most importantly, discipline can certainly suffice...like an identifier in ALLCAPS or beginning with an _underscore.
  • Speaking optimistically, we programmers can see the big picture of what code is doing. We can tell exactly what will happen and to what, and for what reason. Compilers and computers don't, because compilers and computers can't read. However, when data is typed, the compiler can use that additional information to optimize.
  • Leaving type unspecified automatically forces the code to work more generically, because the type is unknown. The code can be set loose on anything. It may throw a runtime error because of unsuitable data, but on the other hand it can handle cases the original writer never even thought of. This can also be done with generics or an explicitly simple interface, but not as conveniently.
  • Not having the compiler be so picky about types and interfaces means that mock objects for unit testing are trivial (a quick sketch follows this list).
  • There is a significant overhead to keeping track of often-complex object hierarchies and data types. Knowing that function frizzerdrap is a part of object class plunky is hardly intuitive, to say nothing of static methods that may have been shoehorned who-knows-where. Thanks be to Bob, IDEs have gotten good at assisting programmers with this, but it's also nice to not have to search as many namespaces.
  • Dynamic typing goes well with full-featured collections. Processing data collections is a huge part of what programming is. The more your collection enables you to focus on the processing, and not the mechanics of using the collection, the better. With dynamic typing, a collection can be designed for usability and then reused to hold anything. Some of the most confusing parts of a statically-typed language are its collections, whether the collection uses generics or the collection uses ML-style type constructors.
Is static typing or dynamic typing better? Silly rabbit, simplistic answers are for kids!
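To make the duck-typing and mock-object points above a little more concrete, here's a minimal Python sketch; the names send_reminders, FakeInvoice, and FakeMailer are my own inventions for illustration, not from any real library:

    # A function with no declared types: it works on anything that provides
    # the attributes and methods it actually uses.
    def send_reminders(invoices, mailer):
        overdue = [inv for inv in invoices if inv.is_overdue()]
        for inv in overdue:
            mailer.send(inv.customer_email, "Please pay invoice %s" % inv.number)
        return len(overdue)

    # For a unit test, a throwaway stand-in with the same methods is all the
    # "mock framework" needed; no interface declaration required.
    class FakeInvoice(object):
        def __init__(self, number, overdue):
            self.number = number
            self.customer_email = "test@example.com"
            self._overdue = overdue
        def is_overdue(self):
            return self._overdue

    class FakeMailer(object):
        def __init__(self):
            self.sent = []
        def send(self, to, body):
            self.sent.append((to, body))

    mailer = FakeMailer()
    count = send_reminders([FakeInvoice("A-1", True), FakeInvoice("A-2", False)], mailer)
    assert count == 1 and len(mailer.sent) == 1

The flip side, of course, is that nothing stops me from passing in an object that lacks is_overdue until the code blows up at runtime, which is exactly the trade-off the list above is circling around.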

Wednesday, October 04, 2006

too good not to share

A funny quote from this page which quoted it from this page.
When we have clients who are thinking about Flash splash pages, we tell them to go to their local supermarket and bring a mime with them. Have the mime stand in front of the supermarket, and, as each customer tries to enter, do a little show that lasts two minutes, welcoming them to the supermarket and trying to explain the bread is on aisle six and milk is on sale today.
Hilarious, but also a great lesson.

Sunday, October 01, 2006

prisoner's dilemma for P2P

Surely I can't be the first person to notice this, but the Prisoner's Dilemma has some applications to P2P. If you want a more thorough treatment of the Dilemma, look elsewhere. Here's the shorthand. There are two independent entities called A and B (or Alice and Bob, if you like). Each has the same choice between two possibilities that are generally called cooperate and defect. A and B don't know what the other will choose. The interesting part comes in the payoffs or profit margin of the four possible outcomes:
  1. If A cooperates and B cooperates, A and B each receive medium payoffs.
  2. If A cooperates and B defects, A receives a horribly low payoff and B receives a huge payoff.
  3. Similarly, if B cooperates and A defects, B receives a horribly low payoff and A receives a huge payoff.
  4. If A and B defect, A and B each receive low (but not horribly low) payoffs.
To summarize, A and B as a whole would do better if they both cooperated, but no matter what the other entity chooses, each one individually gains more by defecting, so the "rational" choice for A as well as B is to defect. The Prisoner's Dilemma simply illustrates any "game" in which cooperation would be good for both players but a cooperation-defection situation would be disastrous for the cooperating player. If it helps, think of two hostile nations who must choose whether to disarm their nuclear weapons. Analytically, the more interesting case is when there are an unknown number of successive turns and payoffs, because then a cooperation-defection outcome can be recovered from in later rounds.
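To see why defection wins in a single round, here's a quick Python sketch with made-up payoff numbers that fit the four rules above (3 each for mutual cooperation, 5 and 0 for the defect/cooperate split, 1 each for mutual defection):

    # Payoffs to (A, B) for each combination of choices; the numbers are
    # illustrative, only their ordering matters.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    # Whatever B does, A earns more by defecting, which is what makes
    # defection the "rational" single-round choice.
    for b_choice in ("cooperate", "defect"):
        a_if_cooperate = PAYOFFS[("cooperate", b_choice)][0]
        a_if_defect = PAYOFFS[("defect", b_choice)][0]
        print("B %s: A gets %d by cooperating, %d by defecting"
              % (b_choice + "s", a_if_cooperate, a_if_defect))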

The Prisoner's Dilemma applies to P2P transactions in a limited way, if you consider the payoff to be the difference between downloads (positive) and uploads (negative). Assume A and B are two peers who wish to download something of equal value, normalized to be 1, that the other has (and everything's legal, yahda yahda yahda).
  1. If A uploads what B wants and B uploads what A wants, A and B both upload (-1) and download (1), so the payoff for each is 0.
  2. If A uploads what B wants and B does not upload, then A ends up with a payoff of -1, and B ends up with a payoff of 1.
  3. If A does not upload and B uploads what A wants, then A ends up with a payoff of 1, and B ends up with a payoff of -1.
  4. If A does not upload and B does not upload, then no transaction occurs at all, so the payoff is 0.
Now it's clear how the Prisoner's Dilemma begins to fall apart as an analogy. For instance, the payoff for each entity in outcome 1 should be greater than the payoff in outcome 4. More importantly, we've only considered the "net transaction of value", and not the end result of the transaction, which is that either you have what you wanted or you don't, so the "result payoff" for each peer is as follows (assuming you didn't upload one half of an unobserved, entangled quantum pair, in which case your copy collapsed into an undesirable state when your downloader observed his copy):
  1. A and B both have what they wanted, in addition to what they uploaded, so each has a payoff of 1.
  2. A did not get what it wanted, but B did, so A's payoff is 0 and B's payoff is 1.
  3. A got what it wanted, but B did not, so A's payoff is 1 and B's payoff is 0.
  4. A and B both got nothing, a payoff of 0.
This scenario doesn't cover the situation of a peer wanting something but not having something that the other peers want. Also, unless any analysis can take into account an arbitrary number of peers and files, it won't directly map onto a real P2P network.

Nevertheless, the Prisoner's Dilemma shows that, assuming uploading or sharing has a cost, "leeches" are merely acting in a rational manner to maximize their individual payoff. So P2P systems that make it more rational, or even mandatory, to share stand a better chance of thriving. Specifically, if a peer downloads more than one item, the iterated or repeating Prisoner's Dilemma can come into play. A peer that acted as a leech for the last item could have some sort of penalty applied for the next item the peer downloads. Through this simple mechanism, known as Tit-for-Tat, cooperation becomes more attractive. Call it filesharing karma, only with a succession of files instead of lives. Any peer downloading its very last item would rationally defect on that item, because there would be no chance to receive a future penalty.
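Here's a minimal sketch of that penalty mechanism in Python, assuming a tracker that only remembers whether each peer shared on its previous transaction (the class and method names are hypothetical, just for illustration):

    # Tit-for-Tat for a P2P swarm: treat a peer the way it behaved on its
    # previous transaction, so leeching once costs you the next download.
    class TitForTatTracker(object):
        def __init__(self):
            self.shared_last_time = {}   # peer id -> True/False

        def allow_download(self, peer):
            # First-timers get the benefit of the doubt (cooperate first).
            return self.shared_last_time.get(peer, True)

        def record(self, peer, shared):
            self.shared_last_time[peer] = shared

    tracker = TitForTatTracker()
    tracker.record("alice", shared=True)
    tracker.record("bob", shared=False)        # bob leeched last time
    assert tracker.allow_download("alice")
    assert not tracker.allow_download("bob")   # penalty on the next item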

Friday, September 29, 2006

peeve no. 243 is use of the acronym AJAX

Yeah, I know acronyms, being symbols, can't hurt me. (Just to be clear up front, I'm talking about the acronym standing for Asynchronous Javascript And XML, not the Ajax cleanser I recall seeing in my elementary school). When I hear or read AJAX, I sometimes get the impulse to whip out my Dilbert-inspired Fist O' Death or maybe my Wolverine adamantium claws. I find this impulse has been increasing in strength and frequency.

As with most violent rages in my experience, the reason behind it is simple: "AJAX" means Javascript just like DHTML meant Javascript. Why use a new term for the same thing? Why obscure what AJAX really is? Why force my brain to mentally translate AJAX to Javascript each time, and also force me to explain the same to managers that read rather sensationalist tech mags?

My inner angry midget whispers that the reason everybody says "AJAX" and not "Javascript" is deceptive marketing, a well-known root of myriad evils. Remember when the prevailing opinion among serious enterprise-y types was that Javascript should be condemned for enabling annoying browser tricks? Popups, window resizes, and obnoxious uses of the status bar? Eh? Not to mention rampant cross-browser incompatibilities.

The situation is better now. Even if browsers still have important implementation differences, at least each one supports a reasonable, standards-based core of impressive features. For years Javascript has gotten more respect as a "real" language. It has even crashed through the browser walls and onto the Java platform. Javascript was a naughty, naughty brat once, but not any more. Call it by its real name, for Bob's sake! I utterly despise it when language is perverted for the sake of rank marketing. Words should communicate, not mislead. Doubleplusgood political correctness can go take a flying leap as well.

Postscript: OK, I admit AJAX is not synonymous with Javascript, so the term does communicate additional meaning. After all, if it only stood for Javascript, it would be "J", right? I guess I wish that AJAX had gone by a different name that emphasized the central tech involved, Javascript. Perhaps one could go X-treme and call it JavascriptX? Or JavascriptRPC? I confess that AJAX is an efficient, memorable way to get across the idea of "browser Javascript getting data without a page refresh by retrieving XML". As I explained, "AJAX" doesn't bug me nearly as much as the fact that the term is deployed daily, with no sense of contradiction, as the next Answer to a Rich Web by the same folks who maligned Javascript.

Monday, September 25, 2006

you are in a twisty maze of little bookmark folders

My problem is well-known. Like anyone else who spends a generous chunk of time following hyperlinks across the Web, I have a long yet ever-growing list of bookmarks in Firefox. To make sense out of the list, I keep the bookmarks in a menagerie of folders and folders-within-folders. It's easier to find a bookmark this way, but now I must potentially go through multiple folder levels before arriving at the actual bookmark. Moreover, I regularly visit only a handful or two of the bookmarks. I want to keep my organization scheme but also keep bookmark access convenient, especially for the most-visited bookmarks.

There are some great options available. The bookmark sidebar, opened and closed with Ctrl-B, has a search input that searches as you type. Any bookmark has an assignable Keyword property that will take you to that bookmark when you enter it into the address field. Either technique works well if you know exactly what bookmark you want to access. What I wanted was a way to group all of my most-visited sites in one folder, preferably automatically, then just use that folder for my common browsing. Desktops already provide something like this in startup menus for frequently-used programs. Interestingly, the general idea of "virtual bookmark folders" was in the planning for Firefox 2, but postponed. A way to accomplish this in the meantime, as well as affording ultimate (manual) control, would be to open up all of your commonly visited sites in a set of tabs, then Bookmark All Tabs.

OK, next I turned to the legions of Firefox extensions. I tried Sort Bookmarks first. It lived up to its name. I'll keep this extension around. I liked that the bookmarks I cared for most were now at the top of each folder's list, but the problem of too many folders was still present. I started to go through the code for this extension to gauge how I could create my own extension for an automatically-updating "Frequently Visited Bookmarks" folder. Then I found the Autocomplete Manager extension. With this, bookmarks can appear in the address field's autocomplete list, matched against the bookmark URL or title, and the bookmarks can be sorted most often visited first. Sure, it's not the most efficient search in the world, but it beats having to pop open the bookmarks sidebar or enter and memorize bookmark keywords. My remaining concern is that Firefox sometimes seems to have trouble keeping track of when I visit a particular address, resulting in low visit counts.

EDIT: It seems that restarting Firefox somehow updates the visit counts. I'd say to go figure, but I'm not going to; why should you? I have kept Firefox open for more than a week at a time, but starting it anew each day isn't that burdensome, I suppose.

Saturday, September 23, 2006

hackers and...musicians?

I recently found the Choon programming language page. To quote:

Its special features are:

  • Output is in the form of music - a wav file in the reference interpreter
  • There are no variables or alterable storage as such
  • It is Turing complete

Choon's output is music - you can listen to it. And Choon gets away without having any conventional variable storage by being able to access any note that has been played on its output. One feature of musical performance is that once you have played a note then that's it, it's gone, you can't change it. And it's the same in Choon. Every value is a musical note, and every time a value is encountered in a Choon program it is played immediately on the output.


I haven't used Choon to do anything useful, of course (even if you want to write down music in text form in Linux, there are much better options available). But I have used Choon to think of an absurd mental picture: an office full of programmers humming in harmony over the cubicle walls, perhaps to create the next version of TurboTax. I'm reminded of a scene in the educational film from the Pinky and the Brain episode "Your Friend Global Domination". Brain proposes a new language for the UN, Brainish, in which each speaker says either "pondering" or "yes" each time. By varying the tone and rhythm of their one-word statements, they can have a conversation about several topics simultaneously. I say Kupo! to that.

Unrelated observations from killing time by watching old Smallville shows in syndication: 1) crazy Joe Davola is one of the producers, 2) Evangeline Lilly has one of those blink-and-you-might-miss-it moments guest-starring in the episode "Kinetic" as a ladyfriend of one of the episode meanies.

Tuesday, September 19, 2006

impressions after reading the Betrayal Star Wars novel

I say "impressions" because I don't want to do a full-blown review. I'm satisfied with the book. Some of the scenes dragged a little for me, but I admit that I have a weakness for winding, ever-developing plots made up of a lot of little chunks...in short, I wish it was more Zahn-esque. I did like that each character showed off his or her own unique voice, behavior, and mentality. The humorous dialogue sprinkled throughout the book was great, too, and it didn't feel forced, because the characters are known for the ability to make wisecracks in any situation. I appreciate the references to past events in the sprawling Star Wars timeline, although I get annoyed by how much I need to be caught up. Skipping all of the "New Jedi Order" books except two will do that.

I kept being distracted by the inconsistent application of technology in the Star Wars universe. On the one hand, there are starships, laser guns, "transparisteel", and droids, but on the other hand, the people seem to live a lot like us. They drink "caf" to stay alert. They wash dishes. They use switches to turn lights on and off. They wear physical armor, which appears to be mostly useless against any firearms. Any piloting controls appear to operate about the same as an airplane's. Well, maybe you press more buttons if you're about to make a faster-than-light jump. There are vehicles that float above the ground, sure, but where's the hoverboard that Michael J. Fox taught us to ride? Hmm? Food seems to be prepared and consumed just like it is now. Health care seems pretty primitive, apart from the cool bionics. In general, there's a surprising lack of automation present, considering how advanced the computers are. At least in Dune there was an explanation for the same occurrence: the legendary rage-against-the-machine jihad!

I won't discuss any real spoilers here, but speaking of parallels to Dune, the ability to see the future raises some intriguing ethical, end-justifies-the-means dilemmas at the end of the book. If I somehow know that your life will cause something bad to happen, is it right for me to kill you? How sure would I have to be? How could I be sure that I had considered all the alternatives? Does "destiny" become a consideration? Since the future isn't set in stone, would any vision be a reliable basis for making important decisions? Could one all-seeing, all-knowing person take it upon himself to redirect the course of history? Considered over a long enough time range, is any action only good or only bad? And the HUGE kicker: is the Sith way of embracing attachment and passion necessarily evil, or merely different? RotJ would suggest that it is precisely Luke's stubborn attachment to his father that leads to his redemption. Don't forget, Yoda and Obi both told Luke to cut that attachment loose and make with the patricide.

Friday, September 15, 2006

webapp state continuations

I went through some of the Rife web site pages recently. Rife is one of those projects whose name keeps popping up, especially in places that talk about applying the Rails viewpoint to Java. This interview with the founder was enlightening. I like being reminded that no technology is created in a vacuum. The motivations behind the technology influence its shape.

The Rife framework has several interesting ideas. The feature that stood out most to me was integrated web continuations. I had always assumed that continuations have one practical use that we developers who don't write compilers (or new program control structures) care about: backtracking. But after going over documentation for continuations in Cocoon, I understand why people have gotten excited. HTTP is a stateless series of independent requests and responses. The concept of a session is something our apps use in spite of HTTP to present a coherent, step-by-step experience to each user.

Sessions must be maintained manually without continuations. That is, the framework probably manages the session object for you, but it's up to you to figure out what to stuff inside the session object as a side effect of generating a specific HTTP response. When handling another request, the code activated by the URL must make sense out of the data in the session object at that point, and likely use it to branch into one of many different code paths. If the user tries to hit Back, something unintended may occur because he or she is accessing a previous page but going against the current session data. Business as usual, right?

The way I see it, a continuation is just a "frozen" copy of a running program that can be unfrozen at any time. For a webapp, if I'm interpreting this right, continuations can replace sessions. That is, each time the program flow returns a response (and control) back to the user, it suspends into continuation form. When the user sends the next request, the web server gets the continuation and resumes the program right where it left off, all parts of its execution intact except for the request object, which of course now represents a different request. The big payoff with continuations is that the Back button can actually reverse state, presumably by taking an earlier continuation instead. Naturally, this feature doesn't come without paying a price of greater sophistication in the code.

My work doesn't involve Rife, so integrated web continuations is not a possibility for me. Or is it? The generators (you know, the "yield" keyword) in .Net 2.0 got a lot of people thinking about continuations and coroutines. No surprise there; when Python got generators, articles like this made the same mental connection. I'm not holding my breath. My limited experience learning ASP.Net has shown that its event-driven and therefore essentially asynchronous nature is what defines it. Once you understand that friggin' page lifecycle, it's not as bad. Would it be better if there was one big page function that took a bunch of continuations for each event, and persisted between requests, rather than a bunch of small event handling methods that may or may not run on every request? Eh, maybe. If that was the case, it might make it easier to refactor code into whatever units you like rather than enforced event-handling units. On the other hand, a continuation model for GUI programming could be much trickier to learn than the event-driven callback model.
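Since I can't try Rife at work, here's a rough Python-generator sketch of the continuation idea, under the assumption that the framework keeps the suspended generator around between requests (the checkout_flow steps and the request dictionaries are invented for illustration):

    # Each "yield" hands a response back to the user and freezes the flow;
    # the next request resumes it exactly where it left off.
    def checkout_flow(request):
        cart = request["cart"]
        request = yield "show shipping form"      # suspend until the next request
        address = request["address"]
        request = yield "show payment form"       # suspend again
        card = request["card"]
        yield "confirm: %d items to %s, charged to %s" % (len(cart), address, card)

    # A toy "framework": keep the generator (a poor man's continuation) in a
    # per-user slot and feed each new request into it.
    flows = {}

    def handle(user, request):
        if user not in flows:
            flows[user] = checkout_flow(request)
            return next(flows[user])       # run up to the first yield
        return flows[user].send(request)   # resume the frozen flow

    print(handle("u1", {"cart": ["book", "lamp"]}))
    print(handle("u1", {"address": "123 Main St"}))
    print(handle("u1", {"card": "test-card"}))

The Back button trick would mean keeping earlier generator states around instead of only the latest one, which plain generators can't do (they aren't copyable), so treat this as a sketch of the control flow rather than of real reversible state.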

Wednesday, September 13, 2006

commentary on The Problem Is Choice

The Problem Is Choice considers how choice, or change, makes a DSL inappropriate for many situations. Each section starts with an apt quote from the Architect's scene in Matrix Reloaded (underrated movie, in my opinion, but then I'm probably the kind of pseudo-intellectual weenie the movie was meant for).

It's always seemed to me that, most of the time, modeling a specific problem with a DSL, then using the DSL to solve it, was looking at the whole thing backwards. The problem may be easy inside the DSL, but if you had to implement the DSL first, have you actually saved any labor? This is much less of an issue if you implement the DSL in a programming language that makes it easy (Lisp being the usual example). I think DSLs are more for reusing code to solve a group of very similar problems. For instance, a DSL seems to work well as part of a "framework" that regular, general programs can call into for particular purposes. Consider HTML templating languages, or Ant, which is the exact example that defmacro uses to illustrate to Java programmers the utility of a DSL.
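As a concrete (if toy) example of the "DSL inside a framework" idea, here's a tiny Ant-flavored build mini-language sketched in Python; the task/run vocabulary is invented for illustration, not any real tool's API:

    # A tiny internal DSL: ordinary code registers named tasks and their
    # dependencies, and the "framework" figures out the execution order.
    TASKS = {}

    def task(name, depends=(), action=None):
        TASKS[name] = (list(depends), action or (lambda: None))

    def run(name, done=None):
        done = set() if done is None else done
        if name in done:
            return
        deps, action = TASKS[name]
        for dep in deps:
            run(dep, done)
        print("running %s" % name)
        action()
        done.add(name)

    # The "program" written in the mini-language:
    task("clean")
    task("compile", depends=["clean"])
    task("test", depends=["compile"])
    task("package", depends=["compile", "test"])

    run("package")   # clean, compile, test, package, in that order

The general-purpose language does all the heavy lifting here; the "DSL" is just a thin, reusable vocabulary layered on top, which is about as far as I'd want to push one.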

Any program has to model its problem domain in order to solve it, because the problem domain has information which must map onto computer data. This is the ultimate "impedance mismatch", you might say--the necessity of translating an algorithm which is both theoretically and practically computable into an equivalent algorithm for a Turing Machine, expressed in your language of choice. In some problem domains a DSL may work well as a model, but according to this blog entry, apparently many problem domains evolve in too messy a fashion to fit into a formal grammar without repeatedly breaking it, requiring new constructs, etc. As the AI guys have discovered, not even human languages are a good fit for DSL modeling.

All this talk about modeling is not the real point, because no matter how a program models the problem domain, it should not mimic the domain too much. Remember the Flyweight pattern. Code entities should match the problem domain just enough to enable effective, realistic processing with true separation of concerns. As The Problem Is Choice says, the effectiveness of a programming language is not how well it can model a problem. I would say that the important criterion is how well it can flexibly but simply reuse/modularize code when achieving that model, whether with a DSL, a set of objects, or the higher-order manipulation of functions. Other languages, or even mini-languages like SQL, may not have as much of this Power, but that may be on purpose. (And no business logic in your stored procedures, buster!)