Sunday, October 29, 2006

the Prestige

I saw The Prestige a few days ago, which itself is a somewhat momentous event because I seldom see movies in the theater unless: 1) I honestly can't stand to wait to see it, and/or 2) the movie obviously benefits from huge screen and sound. The Prestige falls in category 1, although it has a handful of impressive scenes that fit 2. The three Lord of the Rings movies, in contrast, were off the scale in both categories--just had to see those in theaters. In the case of the Prestige, the combination of a reunion between some of the people behind Batman Begins, a string of intriguing previews/ads explaining the movie concept, and a set of other people who are fun to watch on screen, Hugh Jackman and Scarlett Johansson and David Bowie and Andy Serkis, proved too irresistible.

All things considered, the Prestige is the most subdued movie I've seen in a while, if you don't count Proof, which was a DVD rental for me. This movie has deaths and injuries, but they're staggered throughout the movie, so the usual desensitization doesn't occur. These dueling magicians don't have it out in some dark alley, at least not for a long time; they're after public embarrassment, complete career ruination, and eye-for-eye retribution. Although the two share a passion for glory and fame, they have different ways of pursuing them. Minor characters get dragged into the story as pawns, including an astoundingly eccentric Tesla, but thankfully the minors also have their own independent agendas. Michael Caine's character tries to lend some levelheadedness to those around him, but his effort is squandered. Everyone schemes and deceives so much that, fittingly enough for a movie about magicians, it's hard to tell when anyone is telling the truth.

They do tell skilled half-truths. The movie's surprises, praised and denounced by critics, don't happen like deus ex machina, but come after oft-repeated clues. The clues are important, because otherwise the solutions at the end would seem like cheats. I mostly figured out what was happening before the big reveals, but not everything. And the parts I didn't think of were related to clues that had made me think "Something must be significant about X, but what?" The end satisfied me.

Apart from the interpersonal struggles and plot machinations, the magic tricks in the movie are fun to watch, either from the perspective of the unwitting audience or the (dis)ingenious magician. Of course, at a time when movie technology has advanced to the point that animals routinely morph into people on screen, audiences are more jaded about what they see. In any recent all-talents-accepted competition, the magicians are usually the first to go. We're accustomed to seeing fake realities on our screens, and aren't impressed. It's nice to have a movie to celebrate a simpler time when magic wasn't taken for granted.

Saturday, October 28, 2006

timelessness of humor in "Great Pumpkin"

I was flipping through the channels, saw "It's the Great Pumpkin, Charlie Brown", and to my surprise, watched almost the entire thing. Apparently, the annual showing of this special is not a fluke. It's good.

You may be thinking, "No, ArtVandalay, you're just easily amused by animation". And, well...I guess my three volumes of Looney Tunes Golden Collection DVDs would agree. Still, I don't watch cheap, crappy animation, unless it has excellent writing. C'mon, you can't sit there and say that having a cartoon character remark "Pronoun trouble" and having it make sense is only intended for kids.

Back to my main point. The only way a special that originally aired in 1966 could keep my attention would be if the humor is timeless. I'll expound on this if I can just switch into Hyper-Analytic mode...(grinding gears) (clunk). There. Here is how the humor is timeless, aside from the mere fact that it's Peanuts:
  • Humor of repetition. One of the rules of comedy is that if you can find something that makes people laugh, you may as well try repeating it. Don't repeat jokes that primarily rely on surprise; it won't work, and you'll appear to be trying too hard. Most people have a finite tolerance for nonlinear humor. For instance, Aqua Teen Hunger Force cartoons start to annoy me after a couple of minutes. Charlie Brown getting a rock at each house, on the other hand, works so well that the four words "I got a rock" are now funny even out of context. Breaking up the repetitions with other kids exclaiming what they got (not rocks) effectively makes Charlie Brown's repetition stand out more.
  • Visual humor. It pretty much goes without saying that references to current events can be mined for funny lines or even skits. Just as obviously, such references aren't funny out of context. Jokes about current events become jokes about historical events in an alarmingly short time. Actually, any jokes that rely on a shared cultural context don't even make sense in other cultures in the same time period. However, humor based on what the audience is seeing seems to be more universal. Dancing Snoopy, other children using Charlie Brown's head as a pumpkin model, or Charlie Brown having trouble with the scissors doesn't require anything from the audience except the capability to react to sounds and sights. This also means that visual humor can amuse people of any intelligence level.
  • Tragic humor. Anyone with a shred of empathy probably feels some guilt about enjoying someone else's misfortune. If you want a fun word for this, you can refer to it as "schadenfreude", although "ferklempt" remains my personal favorite word borrowing from another language. The pettiness inside every person snickers whenever someone else endures tragedy. I think there's also some of this humor in "I got a rock". Charlie Brown is overjoyed after receiving a party invitation. Lucy explains that his name must have been on the wrong list. Linus persists in his rather self-destructive belief in the Great Pumpkin, so everyone else can make jokes at his expense. On a lighter note, Charlie Brown's recurring attempt to kick the football always seems to result in him falling on his back. Seeing people punished for false hope somehow just never gets old--ha ha, what a fool! Peanuts has to be the most depressing comic strip to ever hit the big time. Those who say that real kids aren't that cruel haven't seen enough groups of kids.
  • Humor about childhood. Kids don't think things through logically, and therefore stumble naturally into comical situations. Even better, they think they know more than they do. Having Linus say "I didn't know you were going to kill it!" when Lucy slices into their pumpkin is a childish thing to say (given how deeply Linus thinks on other occasions, I think there's some inconsistency here). Not necessarily because the pumpkin wasn't a living thing (it was), but because once the pumpkin is out of the patch, it's already dead, and in any case it doesn't experience pain. Jumping into a pile of leaves while eating sticky candy falls in the same category. Lucy's overreaction to accidentally kissing Snoopy also is a case of unwarranted childhood zeal. Snoopy's vivid journey into his own imagination is another childhood trait. This humor doesn't appeal to me as much as it might to others, seeing as how I didn't relate to kids even when I technically was one, but it's certainly timeless, at least in parts of the world where a peaceful childhood is still possible and until the damn dirty apes take over.
  • Reapplying old sayings. This kind of humor seems to be one of the identifying characteristics of Peanuts. For me at least, an overused saying is an overused saying, and none of the instances of this humor provoke more than a momentary smirk from me. It's even somewhat eerie to hear a kid say statements similar to "clearly, we are separated by denominational differences" or "the fury of a woman scorned is nothing compared to the fury of a woman who has been cheated out of tricks-or-treats". Hearing kids complain about Christmas being too commercial, in another famous holiday special, also smacks of turning little kids into mouthpieces for adults. Creeeepy.
Maybe it's the nostalgia talking, but I have deep respect for any humor that manages to be timeless, in any medium. We can do far worse than introducing such works to future generations. Should probably wait until they reach a certain age before showing them Some Like It Hot, though.

Thursday, October 19, 2006

impressions of the Lost episode Further Instructions

First of all: whiskey-tango-foxtrot. Double whiskey-tango-foxtrot.

Locke is back. To be specific, the taking-out-local-wildlife, take-charge, island-mystic Locke, as opposed to the button-pressing, film-watching, hatch-dwelling Locke of last season. It seemed like the "faith" torch had passed from Locke to Eko (note that both of these guys have stared into the hyperactive chimney smoke). Er...not quite. At this point, I'm not seeing how the writers are going to explain, in plain natural terms, how Locke et al keep receiving visions that tell them what to do. As I mentioned in a previous Lost-related post, there must be some mind control mechanism at work on this island. Don't forget, Charlie had vivid hallucinations in "Fire and Water" ordering him around, Eko got some guidance from his brother from beyond the grave, Hurley had a convincing series of interactions with someone who never existed, and a previous vision caused Locke to find the airplane that killed Boone. If you want to trace this pattern back even further, recall that Jack's dead father led him to the caves in an early episode in season one.

On a smaller level, I eagerly await an explanation of how people inside an imploding hatch end up outdoors. Why is only Desmond nekkid? Why is he naked at all? Why is Locke mute? Keep asking those questions for us, Hurley and Charlie.

You know, I'm not sure the show needs more characters. Prove me wrong. I wonder how Rose has been lately?

My prediction was that the next time Locke appeared, he would have lost control of his legs because of the hatch implosion. And the corresponding backstory would have shown how he lost control of his legs in the first place. I'm not a writer, but I thought it would have worked fine that way. This show is so kooky. People who get frustrated with the lack of conclusive answers are missing the point. The fascination is in watching the show toss in dumbfounding mysteries all the time, and wondering when the entire tangle will either be straightened out or finally snap into a ruined heap of dangling threads. First season, Locke pounded on the unopened hatch until a bright beam shone out to answer him; second season, we eventually discovered that someone living in the hatch had heard him and merely switched on a light. Here's hoping that the underlying Answers all make as much sense.

Monday, October 16, 2006

update on F#

I haven't used F# much recently, but I've tried to keep myself apprised of it. Don Syme's blog seems like the de facto news site; many of the following links go there. Let the bullet shooting commence.
  • F# has a new "light" syntax option (put #light at the top of the file). With #light, various end-of-line constructs like semicolons or the "in" at the end of a "let" are no longer necessary. Also, groups of statements are delimited by indentation (Python-style). I like the convenience of this option, of course, but my snobbish side feels offended by how it makes ML-style functional code feel even more like the plain imperative kind. So, points for style, maintainability, conciseness, shallower learning curve, etc., but negative points for reducing the tangy esoteric/baroque flavor.
  • Syme has made a draft chapter of his upcoming F# book available for anyone to read. I recommend it as a supplement to, or even a replacement for, the tutorial information floating around the Web. It's a kinder, gentler approach to learning F#. After reading it, I realized that I could simplify parts of my code by using object methods where possible rather than always calling functions from modules (functions in which the first parameter was the object!).
  • There are a couple of F#-related video clips on Channel 9. One is an interview, and the other is a flashy demo. If you already know all about F#, neither of these clips will provide new information. I repeatedly paused the demo clip so I could examine the code on screen. In the interview, Syme is quite diplomatic about the possible real-world uses of F#. While F# may shine at data crunching, it also is a general-purpose language that shares many features with C#.
  • I went to TIOBE after reading that Python had overtaken C#, and I found that, although F# isn't in the top 20 or even 50, it has a specific mention in the "October Newsflash - brought to you by Paul Jansen" section. To quote:
    Another remarkable language is F#. The first official beta of this Microsoft programming language has been released 3 months ago. This variant of C# with many functional aspects is already at position 56 of the TIOBE index. James Huddlestone (Apress) and Robert Pickering draw my attention to F# via e-mail. Later Ralf Herbrich from the X-box team of Microsoft wrote to me "After working with F# for the last 9 months, I firmly believe that F# has the potential to be the scientific computing language of the future." [...] In summary, I think both Lua and F# have a great future.
Regardless of a programming language's merits, qualitative factors like its buzz level, ease of learning, and PR savvy affect its popularity. As others have noted, Python and Ruby had a long history before achieving wide popularity. In spite of still being just a continually-evolving "research project", F# is well on its way.

Monday, October 09, 2006

again too good not to share

Apparently the Drunken Blog Rants guy has been complaining about overzealous Agile folks. And he continued the discussion with another post, Egomania Itself. (The title is an anagram of Agile Manifesto.) I'm not commenting about the "debate" because I frankly don't care. As long as management doesn't force a rigid methodology on me, and I can get my work done well and on time, and the users are overjoyed, I think that everything is fine. Oh, and pigs fly.

No, the real reason I bring this up is because of the Far Side cartoon reference. Behold! The ultimate example of, as Yegge says, "non-inferring static type systems"!

My stance, if I consistently have one, on the merits of static vs. dynamic typing is as follows:
  • To the computer, all data is just a sequence of bits. The necessity of performing number operations on one sequence and date operations on another pretty much implies that the first sequence is a number and the second sequence is a more abstract Date. Data has a (strong) type so it can make sense and we can meaningfully manipulate it. Don't let that bother you.
  • An interface with data types is more descriptive than it otherwise would be. Especially if your code intends for the third argument to be an object with the method scratchensniff. I shouldn't have to read your code in order to use it. Admittedly, data types still aren't good enough for telling you that, for instance, an integer argument must be less than 108. Where the language fails, convention and documentation and, most importantly, discipline can certainly suffice...like an identifier in ALLCAPS or beginning with an _underscore.
  • Speaking optimistically, we programmers can see the big picture of what code is doing. We can tell exactly what will happen and to what, and for what reason. Compilers and computers don't, because compilers and computers can't read. However, when data is typed, the compiler can use that additional information to optimize.
  • Leaving type unspecified automatically forces the code to work more generically, because the type is unknown. The code can be set loose on anything. It may throw a runtime error because of unsuitable data, but on the other hand it can handle cases the original writer never even thought of. This can also be done with generics or an explicitly simple interface, but not as conveniently.
  • Not having the compiler be so picky about types and interfaces means that mock objects for unit testing are trivial.
  • There is a significant overhead to keeping track of often-complex object hierarchies and data types. Knowing that function frizzerdrap is a part of object class plunky is hardly intuitive, to say nothing of static methods that may have been shoehorned who-knows-where. Thanks be to Bob, IDEs have gotten good at assisting programmers with this, but it's also nice to not have to search as many namespaces.
  • Dynamic typing goes well with full-featured collections. Processing data collections is a huge part of what programming is. The more your collection enables you to focus on the processing, and not the mechanics of using the collection, the better. With dynamic typing, a collection can be designed for usability and then reused to hold anything. Some of the most confusing parts of a statically-typed language are its collections, whether the collection uses generics or the collection uses ML-style type constructors.
Is static typing or dynamic typing better? Silly rabbit, simplistic answers are for kids!
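To make the duck-typing and mock-object points above concrete, here's a minimal Python sketch (Python only because it's the kind of dynamically-typed language under discussion; the class and method names are all invented for illustration):

```python
class FileLogger:
    """Production object: appends messages to a real file."""
    def __init__(self, path):
        self.path = path

    def write(self, message):
        with open(self.path, "a") as f:
            f.write(message + "\n")

class MockLogger:
    """Trivial mock for unit tests: just records what it was asked to write."""
    def __init__(self):
        self.messages = []

    def write(self, message):
        self.messages.append(message)

def audit(logger, events):
    # No declared interface anywhere: anything with a write() method will do,
    # including objects written long after this function was.
    for event in events:
        logger.write("AUDIT: " + event)

mock = MockLogger()
audit(mock, ["login", "logout"])
print(mock.messages)  # ['AUDIT: login', 'AUDIT: logout']
```

The same flexibility cuts both ways, of course: pass `audit` an object without a `write` method and the failure shows up at runtime instead of compile time.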

Wednesday, October 04, 2006

too good not to share

A funny quote from this page which quoted it from this page.
When we have clients who are thinking about Flash splash pages, we tell them to go to their local supermarket and bring a mime with them. Have the mime stand in front of the supermarket, and, as each customer tries to enter, do a little show that lasts two minutes, welcoming them to the supermarket and trying to explain the bread is on aisle six and milk is on sale today.
Hilarious, but also a great lesson.

Sunday, October 01, 2006

prisoner's dilemma for P2P

Surely I can't be the first person to notice this, but the Prisoner's Dilemma has some applications to P2P. If you want a more thorough treatment of the Dilemma, look elsewhere. Here's the shorthand. There are two independent entities called A and B (or Alice and Bob, if you like). Each has the same choice between two possibilities that are generally called cooperate and defect. A and B don't know what the other will choose. The interesting part comes in the payoffs or profit margin of the four possible outcomes:
  1. If A cooperates and B cooperates, A and B each receive medium payoffs.
  2. If A cooperates and B defects, A receives a horribly low payoff and B receives a huge payoff.
  3. Similarly, if B cooperates and A defects, B receives a horribly low payoff and A receives a huge payoff.
  4. If A and B defect, A and B each receive low (but not horribly low) payoffs.
To summarize, A and B as a whole would do better if A and B cooperated, but the great risk of the other entity defecting means that the better choice (the choice with the better average payoff) for A as well as B is defecting. The Prisoner's Dilemma simply illustrates any "game" in which cooperation would be good for both players but a cooperation-defection situation would be disastrous for the cooperating player. If it helps, think of two hostile nations who must choose whether to disarm their nuclear weapons. Analytically, the more interesting case is when there is an unknown number of successive turns and payoffs, because then a player can recover from a cooperation-defection outcome.
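The four outcomes are easy to write down in code. Here's a minimal Python sketch (the payoff numbers are invented; only their ordering matters) showing why defecting is the dominant choice:

```python
# Classic Prisoner's Dilemma payoffs. Each entry maps
# (A's choice, B's choice) -> (A's payoff, B's payoff).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),   # medium payoffs for both
    ("cooperate", "defect"):    (0, 5),   # A horribly low, B huge
    ("defect",    "cooperate"): (5, 0),   # A huge, B horribly low
    ("defect",    "defect"):    (1, 1),   # low, but not horribly low
}

def best_reply(b_choice):
    """A's payoff-maximizing choice, given what B does."""
    return max(("cooperate", "defect"),
               key=lambda a: PAYOFF[(a, b_choice)][0])

# Whichever choice B makes, A does better by defecting.
print(best_reply("cooperate"))  # defect
print(best_reply("defect"))     # defect
```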

The Prisoner's Dilemma applies to P2P transactions in a limited way, if you consider the payoff to be the difference between downloads (positive) and uploads (negative). Assume A and B are two peers who wish to download something of equal value, normalized to be 1, that the other has (and everything's legal, yahda yahda yahda).
  1. If A uploads what B wants and B uploads what A wants, A and B both upload (-1) and download (1), so the payoff for each is 0.
  2. If A uploads what B wants and B does not upload, then A ends up with a payoff of -1, and B ends up with a payoff of 1.
  3. If A does not upload and B uploads what A wants, then A ends up with a payoff of 1, and B ends up with a payoff of -1.
  4. If A does not upload and B does not upload, then no transaction occurs at all, so the payoff is 0.
Now it's clear how the Prisoner's Dilemma begins to fall apart as an analogy. For instance, the payoff for each entity in outcome 1 should be greater than the payoff in outcome 4. More importantly, we've only considered the "net transaction of value", and not the end result of the transaction, which is that either you have what you wanted or you don't, so the "result payoff" for each peer is as follows (assuming you didn't upload one half of an unobserved, entangled quantum pair, in which case your copy collapsed into an undesirable state when your downloader observed his copy):
  1. A and B both have what they wanted, in addition to what they uploaded, so each has a payoff of 1.
  2. A did not get what it wanted, but B did, so A's payoff is 0 and B's payoff is 1.
  3. A got what it wanted, but B did not, so A's payoff is 1 and B's payoff is 0.
  4. A and B both got nothing, a payoff of 0.
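The breakdown of the analogy can be checked mechanically. Here's a minimal Python sketch of the "net transaction" table (the choice names are invented; "hoard" means not uploading) showing that it fails a defining condition of a true Prisoner's Dilemma, namely that mutual cooperation must strictly beat mutual defection:

```python
# "Net transaction of value" payoffs for two peers: uploading costs 1,
# downloading gains 1. Entries map (A's choice, B's choice) -> payoffs.
NET = {
    ("upload", "upload"): (0, 0),    # both pay 1 and gain 1
    ("upload", "hoard"):  (-1, 1),   # A uploads for nothing
    ("hoard",  "upload"): (1, -1),   # B uploads for nothing
    ("hoard",  "hoard"):  (0, 0),    # no transaction at all
}

# In a genuine Prisoner's Dilemma, mutual cooperation strictly beats
# mutual defection. Here both outcomes pay 0, so the condition fails.
print(NET[("upload", "upload")][0] > NET[("hoard", "hoard")][0])  # False
```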
This scenario doesn't cover the situation of a peer wanting something but not having something that the other peers want. Also, unless any analysis can take into account an arbitrary number of peers and files, it won't directly map onto a real P2P network.

Nevertheless, the Prisoner's Dilemma shows that, assuming uploading or sharing has a cost, "leeches" are merely acting in a rational manner to maximize their individual payoff. So P2P systems that make it more rational, or even mandatory, to share stand a better chance of thriving. Specifically, if a peer downloads more than one item, the iterated or repeating Prisoner's Dilemma can come into play. A peer that acted as a leech for the last item could have some sort of penalty applied for the next item the peer downloads. Through this simple mechanism, known as Tit-for-Tat, cooperation becomes more attractive. Call it filesharing karma, only with a succession of files instead of lives. Any peer downloading its very last item would rationally defect on that item, because there would be no chance to receive a future penalty.
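Here's a toy Python sketch of that Tit-for-Tat mechanism (the strategy names and payoff numbers are invented for illustration: sharing costs 1 and receiving a share gains 2, so mutual sharing nets each peer 1 per round):

```python
def tit_for_tat(history):
    """Share on the first round, then copy the other peer's previous move."""
    return history[-1] if history else "share"

def always_leech(history):
    """A pure leech: never uploads anything."""
    return "leech"

def play(strategy_a, strategy_b, rounds):
    """Iterated sharing game: sharing costs 1, receiving a share gains 2."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_b)  # each strategy only sees the *other* history
        b = strategy_b(hist_a)
        if a == "share": score_a -= 1; score_b += 2
        if b == "share": score_b -= 1; score_a += 2
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

# Against a pure leech, Tit-for-Tat is only burned on the first round...
print(play(tit_for_tat, always_leech, 10))  # (-1, 2)
# ...while two Tit-for-Tat peers settle into profitable mutual sharing.
print(play(tit_for_tat, tit_for_tat, 10))   # (10, 10)
```

The leech caps its take at the one free share, while cooperating peers keep accumulating payoff every round, which is exactly why sharing becomes the rational choice once the game repeats.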