Friday, September 29, 2006

peeve no. 243 is use of the acronym AJAX

Yeah, I know acronyms, being symbols, can't hurt me. (Just to be clear up front, I'm talking about the acronym standing for Asynchronous Javascript And XML, not the Ajax cleanser I recall seeing in my elementary school). When I hear or read AJAX, I sometimes get the impulse to whip out my Dilbert-inspired Fist O' Death or maybe my Wolverine adamantium claws. I find this impulse has been increasing in strength and frequency.

As with most violent rages in my experience, the reason behind this one is simple: "AJAX" means Javascript just like DHTML meant Javascript. Why use a new term for the same thing? Why obscure what AJAX really is? Why force my brain to translate AJAX to Javascript each time, and also force me to explain the same to managers who read rather sensationalist tech mags?

My inner angry midget whispers that the reason everybody says "AJAX" and not "Javascript" is deceptive marketing, a well-known root of myriad evils. Remember when the prevailing opinion among serious enterprise-y types was that Javascript should be condemned for enabling annoying browser tricks? Popups, window resizes, and obnoxious uses of the status bar? Eh? Not to mention rampant cross-browser incompatibilities.

The situation is better now. Even if browsers still have important implementation differences, at least each one supports a reasonable, standards-based core of impressive features. For years Javascript has been getting more respect as a "real" language. It has even crashed through the browser walls and onto the Java platform. Javascript was a naughty, naughty brat once, but not any more. Call it by its real name, for Bob's sake! I utterly despise when language is perverted for the sake of rank marketing. Words should communicate, not mislead. Doubleplusgood political correctness can go take a flying leap as well.

Postscript: OK, I admit AJAX is not synonymous with Javascript, so the term does communicate additional meaning. After all, if it only stood for Javascript, it would be "J", right? I guess I wish that AJAX had gone by a different name that emphasized the central tech involved, Javascript. Perhaps one could go X-treme and call it JavascriptX? Or JavascriptRPC? I confess that AJAX is an efficient, memorable way to get across the idea of "browser Javascript getting data without a page refresh by retrieving XML". As I explained, "AJAX" doesn't bug me nearly as much as the fact that the term is deployed daily, with no sense of contradiction, as the next Answer to a Rich Web by the same folks who maligned Javascript.

Monday, September 25, 2006

you are in a twisty maze of little bookmark folders

My problem is well-known. Like anyone else who spends a generous chunk of time following hyperlinks across the Web, I have a long yet ever-growing list of bookmarks in Firefox. To make sense out of the list, I keep the bookmarks in a menagerie of folders and folders-within-folders. It's easier to find a bookmark this way, but now I must potentially go through multiple folder levels before arriving at the actual bookmark. Moreover, I regularly visit only a handful or two of the bookmarks. I want to keep my organization scheme but also keep bookmark access convenient, especially for the most-visited bookmarks.

There are some great options available. The bookmark sidebar, opened and closed with Ctrl-B, has a search input that searches as you type. Any bookmark has an assignable Keyword property that will take you to that bookmark when you enter it into the address field. Either technique works well if you know exactly which bookmark you want. What I wanted was a way to group all of my most-visited sites in one folder, preferably automatically, then just use that folder for my common browsing. Desktops already provide something like this in start menus for frequently-used programs. Interestingly, the general idea of "virtual bookmark folders" was in the planning for Firefox 2, but postponed. A way to accomplish this in the meantime, as well as affording ultimate (manual) control, would be to open all of your commonly visited sites in a set of tabs, then use Bookmark All Tabs.

OK, next I turned to the legions of Firefox extensions. I tried Sort Bookmarks first. It lived up to its name. I'll keep this extension around. I liked that the bookmarks I cared about most were now at the top of each folder's list, but the problem of too many folders remained. I started to go through the code for this extension to gauge how I could create my own extension for an automatically updating "Frequently Visited Bookmarks" folder. Then I found the Autocomplete Manager extension. With it, bookmarks can appear in the address field's autocomplete list, matched against URL or title, and they can be sorted most-often-visited first. Sure, it's not the most efficient search in the world, but it beats having to pop open the bookmarks sidebar or memorize and enter bookmark keywords. My remaining concern is that Firefox sometimes seems to have trouble keeping track of when I visit a particular address, resulting in low visit counts.

EDIT: It seems that restarting Firefox somehow updates the visit counts. I'd say to go figure, but I'm not going to; why should you? I have kept Firefox open for more than a week at a time, but starting it anew each day isn't that burdensome, I suppose.

Saturday, September 23, 2006

hackers and...musicians?

I recently found the Choon programming language page. To quote:

Its special features are:

  • Output is in the form of music - a wav file in the reference interpreter
  • There are no variables or alterable storage as such
  • It is Turing complete

Choon's output is music - you can listen to it. And Choon gets away without having any conventional variable storage by being able to access any note that has been played on its output. One feature of musical performance is that once you have played a note then that's it, it's gone, you can't change it. And it's the same in Choon. Every value is a musical note, and every time a value is encountered in a Choon program it is played immediately on the output.


I haven't used Choon to do anything useful, of course (even if you want to write down music in text form in Linux, there are much better options available). But I have used Choon to conjure an absurd mental picture: an office full of programmers humming in harmony over the cubicle walls, perhaps to create the next version of TurboTax. I'm reminded of a scene in the educational film from the Pinky and the Brain episode "Your Friend Global Domination". Brain proposes a new language for the UN, Brainish, in which each speaker only ever says "pondering" or "yes". By varying the tone and rhythm of their one-word statements, they can have a conversation about several topics simultaneously. I say Kupo! to that.

Unrelated observations from killing time by watching old Smallville episodes in syndication: 1) crazy Joe Davola is one of the producers, 2) Evangeline Lilly has one of those blink-and-you-might-miss-it moments guest-starring in the episode "Kinetic" as a ladyfriend of one of the episode's meanies.

Tuesday, September 19, 2006

impressions after reading the Betrayal Star Wars novel

I say "impressions" because I don't want to do a full-blown review. I'm satisfied with the book. Some of the scenes dragged a little for me, but I admit that I have a weakness for winding, ever-developing plots made up of a lot of little chunks...in short, I wish it was more Zahn-esque. I did like that each character showed off his or her own unique voice, behavior, and mentality. The humorous dialogue sprinkled throughout the book was great, too, and it didn't feel forced, because the characters are known for the ability to make wisecracks in any situation. I appreciate the references to past events in the sprawling Star Wars timeline, although I get annoyed by how much I need to be caught up. Skipping all of the "New Jedi Order" books except two will do that.

I kept being distracted by the inconsistent application of technology in the Star Wars universe. On the one hand, there are starships, laser guns, "transparisteel", and droids, but on the other hand, the people seem to live a lot like us. They drink "caf" to stay alert. They wash dishes. They use switches to turn lights on and off. They wear physical armor, which appears to be mostly useless against any firearms. The piloting controls appear to operate about the same as an airplane's. Well, maybe you press more buttons if you're about to make a faster-than-light jump. There are vehicles that float above the ground, sure, but where's the hoverboard that Michael J. Fox taught us to ride? Hmm? Food seems to be prepared and consumed just like it is now. Health care seems pretty primitive, apart from the cool bionics. In general, there's a surprising lack of automation present, considering how advanced the computers are. At least in Dune there was an explanation for the same occurrence: the legendary rage-against-the-machine jihad!

I won't discuss any real spoilers here, but speaking of parallels to Dune, the ability to see the future raises some intriguing ethical, end-justifies-the-means dilemmas at the end of the book. If I somehow know that your life will cause something bad to happen, is it right for me to kill you? How sure would I have to be? How could I be sure that I had considered all the alternatives? Does "destiny" become a consideration? Since the future isn't set in stone, would any vision be a reliable basis for making important decisions? Could one all-seeing, all-knowing person take it upon himself to redirect the course of history? Considered over a long enough time range, is any action only good or only bad? And the HUGE kicker: is the Sith way of embracing attachment and passion necessarily evil, or merely different? RotJ would suggest that it is precisely Luke's stubborn attachment to his father that leads to his redemption. Don't forget, Yoda and Obi both told Luke to cut that attachment loose and make with the patricide.

Friday, September 15, 2006

webapp state continuations

I went through some of the Rife web site pages recently. Rife is one of those projects whose name keeps popping up, especially in places that talk about applying the Rails viewpoint to Java. This interview with the founder was enlightening. I like being reminded that no technology is created in a vacuum. The motivations behind the technology influence its shape.

The Rife framework has several interesting ideas. The feature that stood out most to me was integrated web continuations. I had always assumed that continuations have one practical use that we developers who don't write compilers (or new program control structures) care about: backtracking. But after going over documentation for continuations in Cocoon, I understand why people have gotten excited. HTTP is a stateless series of independent requests and responses. The concept of a session is something our apps use in spite of HTTP to present a coherent, step-by-step experience to each user.

Without continuations, sessions must be maintained manually. That is, the framework probably manages the session object for you, but it's up to you to figure out what to stuff inside the session object as a side effect of generating a specific HTTP response. When handling another request, the code activated by the URL must make sense out of the data in the session object at that point, and likely use it to branch into one of many different code paths. If the user tries to hit Back, something unintended may occur, because he or she is accessing a previous page but going against the current session data. Business as usual, right?
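Here's the shape of it, sketched in Python (hypothetical request/session objects and render/place_order helpers; no particular framework implied): a single handler that has to reconstruct where the user is in the flow from whatever earlier responses stuffed into the session.

    # Manual session management: each request re-enters at the top and
    # branches on breadcrumbs left in the session by earlier responses.
    def render(template):                  # stand-in for a real template engine
        return "<html>" + template + "</html>"

    def place_order(items, address):       # stand-in for real business logic
        print("ordering", items, "for", address)

    def handle_checkout(request, session):
        step = session.get("step", "cart")
        if step == "cart":
            session["items"] = request["items"]
            session["step"] = "shipping"
            return render("shipping form")
        elif step == "shipping":
            session["address"] = request["address"]
            session["step"] = "confirm"
            return render("confirm page")
        else:  # "confirm"
            place_order(session["items"], session["address"])
            session.clear()
            return render("receipt")

Hit Back after the shipping step and resubmit, and the code happily runs the "confirm" branch against stale session data.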

The way I see it, a continuation is just a "frozen" copy of a running program that can be unfrozen at any time. For a webapp, if I'm interpreting this right, continuations can replace sessions. That is, each time the program flow returns a response (and control) back to the user, it suspends into continuation form. When the user sends the next request, the web server gets the continuation and resumes the program right where it left off, all parts of its execution intact except for the request object, which of course now represents a different request. The big payoff with continuations is that the Back button can actually reverse state, presumably by taking an earlier continuation instead. Naturally, this feature doesn't come free; the price is greater sophistication in the code.
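Python's generators (which come up again below) can fake a little of this. A sketch, assuming some driver that persists the suspended generator between requests: the same flow as one straight-line function that freezes at each response.

    # The flow as one function; each yield suspends it with all local
    # state intact, and send() resumes it with the next request.
    def checkout():
        request = yield "show cart"
        items = request["items"]
        request = yield "shipping form"
        address = request["address"]
        yield "receipt for %s shipped to %s" % (items, address)

    flow = checkout()
    print(next(flow))                          # response 1: show the cart
    print(flow.send({"items": ["book"]}))      # resume with request 2
    print(flow.send({"address": "123 Elm"}))   # resume with request 3

The catch, and where the analogy breaks down: a generator only moves forward, so the Back-button payoff needs real continuations that can be resumed more than once from an earlier point.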

My work doesn't involve Rife, so integrated web continuations are not a possibility for me. Or are they? The generators (you know, the "yield" keyword) in .Net 2.0 got a lot of people thinking about continuations and coroutines. No surprise there; when Python got generators, articles like this made the same mental connection. I'm not holding my breath. My limited experience learning ASP.Net has shown that its event-driven, and therefore essentially asynchronous, nature is what defines it. Once you understand that friggin' page lifecycle, it's not as bad. Would it be better if there were one big page function that took a bunch of continuations for each event and persisted between requests, rather than a bunch of small event-handling methods that may or may not run on every request? Eh, maybe. If that were the case, it might be easier to refactor code into whatever units you like rather than enforced event-handling units. On the other hand, a continuation model for GUI programming could be much trickier to learn than the event-driven callback model.

Wednesday, September 13, 2006

commentary on The Problem Is Choice

The Problem Is Choice considers how choice, or change, makes a DSL inappropriate for many situations. Each section starts with an apt quote from the Architect's scene in Matrix Reloaded (underrated movie, in my opinion, but then I'm probably the kind of pseudo-intellectual weenie the movie was meant for).

It's always seemed to me that, most of the time, modeling a specific problem with a DSL, then using the DSL to solve it, is looking at the whole thing backwards. The problem may be easy inside the DSL, but if you had to implement the DSL first, have you actually saved any labor? This is much less of an issue if you implement the DSL in a programming language that makes it easy (Lisp being the usual example). I think DSLs are more for reusing code to solve a group of very similar problems. For instance, a DSL seems to work well as part of a "framework" that regular, general programs can call into for particular purposes. Consider HTML templating languages, or Ant, which is the exact example that defmacro uses to illustrate to Java programmers the utility of a DSL.
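To show how cheap an internal DSL can be when the host language cooperates, here's a toy Ant-flavored one in Python (all names hypothetical; the point is that the host language supplies the parser for free):

    # A tiny internal DSL for declaring build targets, Ant-style.
    targets = {}

    def target(name, deps=()):
        def register(fn):
            targets[name] = (deps, fn)
            return fn
        return register

    def build(name, done=None):
        done = set() if done is None else done
        if name in done:
            return
        deps, fn = targets[name]
        for dep in deps:
            build(dep, done)
        fn()
        done.add(name)

    @target("compile")
    def compile_sources():
        print("compiling sources")

    @target("jar", deps=("compile",))
    def package_jar():
        print("packaging jar")

    build("jar")   # compiling sources, then packaging jar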

Any program has to model its problem domain in order to solve it, because the problem domain has information which must map onto computer data. This is the ultimate "impedance mismatch", you might say: the necessity of translating an algorithm which is both theoretically and practically computable into an equivalent algorithm for a Turing machine, expressed in your language of choice. In some problem domains a DSL may work well as a model, but according to this blog entry, many problem domains apparently evolve in too messy a fashion to fit into a formal grammar without repeatedly breaking it, requiring new constructs, etc. As the AI guys have discovered, not even human languages are a good fit for DSL modeling.

All this talk about modeling is not the real point, because no matter how a program models the problem domain, it should not mimic the domain too much. Remember the Flyweight pattern. Code entities should match the problem domain just enough to enable effective, realistic processing with true separation of concerns. As The Problem Is Choice says, the effectiveness of a programming language is not how well it can model a problem. I would say that the important criterion is how well it can flexibly but simply reuse/modularize code when achieving that model, whether with a DSL, a set of objects, or the higher-order manipulation of functions. Other languages, or even mini-languages like SQL, may not have as much of this Power, but that may be on purpose. (And no business logic in your stored procedures, buster!)

Tuesday, September 12, 2006

revenge of multiple inheritance

I count myself fortunate to not know much about C++, in spite of that being the language I learned in college (4-yr degree). One of the many details I was blissfully unaware of was multiple inheritance. In fact, I didn't know that C++ supported multiple inheritance until I read about Java interfaces. A statement similar to "interfaces are Java's response to multiple inheritance in C++" usually appeared in the introduction. That inspired me to read about multiple inheritance in C++, but when I encountered the term "virtual base class", I decided to give up on that. The moral I learned was that, as with biology, tangled-up inheritance trees don't work well in practice.

When I started to pick up Python, I was startled to find multiple inheritance sitting right there in plain view, and no one was complaining. Actually, I could have found multiple inheritance in Perl, but that would have never happened because I only used objects in Perl; I never implemented them (is that like saying, "I never inhaled"?). This is a clear instance of the oft-observed difference in values between these languages and Java. In Java, you make the language simple enough that programmers, especially those on teams other than yours, won't mess the code up by abusing complicated features. In these other languages born from scripting, you make the language capable of anything, or at least extensible, so the programmer can do whatever he wants to get his job done. The relevant quote is "make easy things easy and hard things possible".
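For reference, here's the classic diamond in plain-view Python (toy classes, obviously); the method resolution order decides who wins:

    class Storage(object):
        def describe(self):
            return "generic storage"

    class Database(Storage):
        def describe(self):
            return "database"

    class Cache(Storage):
        def describe(self):
            return "cache"

    class CachedDatabase(Database, Cache):   # the diamond
        pass

    print(CachedDatabase().describe())   # "database": leftmost base wins
    print([c.__name__ for c in CachedDatabase.__mro__])
    # ['CachedDatabase', 'Database', 'Cache', 'Storage', 'object']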

The reason I bring up multiple inheritance now is because of a great blog post by chromatic about Roles: Composable Units of Object Behavior. The concept of object roles is slated for Perl 6. chromatic goes into great detail about how roles basically work as a middle ground between Java interfaces and C++ multiple inheritance. Squint enough and one might even think of it as AOP for classes. There's no reason someone couldn't accomplish a subset of the same effects in existing OO languages using a design pattern or two, but as chromatic explains, there are benefits to having roles built into the type system. Having roles built-in means that an interface and a default implementation don't need to be a matched pair, as in Java, but a single unit that can be applied to a class and checked at compile time or at run time. I still think I would rather have one class, parameterized by constructors into specific objects, and perhaps combined with another class using the decorator pattern, rather than one class parameterized by many roles into many classes. But maybe I'm stuck in the wrong paradigm. Roles seem like a great answer to the quandary of multiple inheritance. Leveraging the compiler is good. Code reuse is good.
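You can approximate the flavor of a role today with a mixin plus a check at class-creation time. A Python sketch (my own names, and nowhere near Perl 6's real feature, which gets type-system support): a default implementation bundled with a requirement on the consuming class.

    # A role-ish mixin: ships a default greet() but *requires* the
    # consuming class to define name(). The decorator enforces the
    # requirement when the class is defined, not when greet() is called.
    class Greetable(object):
        REQUIRED = ("name",)

        def greet(self):
            return "Hello, %s!" % self.name()

    def compose(cls):
        for base in cls.__bases__:
            for method in getattr(base, "REQUIRED", ()):
                if not callable(getattr(cls, method, None)):
                    raise TypeError("%s must define %s()"
                                    % (cls.__name__, method))
        return cls

    @compose
    class Person(Greetable):
        def name(self):
            return "Ada"

    print(Person().greet())   # Hello, Ada!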

Sunday, September 10, 2006

spoilers ahoy over at the Lost Blog

Not much to add here, just wanted to point to the latest entry over at the Lost Blog. If any page ever deserved the disgustingly cute adjective "spoilerific", it would be this one. Not only does it have a list of informative tidbits from all over, but it either has or links to the information from the "Lost experience" game.

On a more general note, I've been stuck on Lost since seeing the "Walkabout" episode. I purposefully avoided the first few episodes because of my violent aversion to the herd mentality. (One of the surest ways to keep me from doing something is to tell me that "everybody's doing it"). The great work they do on the show, from the camera work to the acting to the writing, mixed with fascinating backstories and a huge list of unexplained phenomena, has kept me riveted. Not every episode is good, but for me, at least, there's something inside each one that makes it redeemable. I only care to see an episode of 24 once, but episodes of Lost hold up incredibly well to repeated viewing.

Oh, and my personal take on the island is that there's some serious mind-control experiments going on. Too many of the characters have had eerie visions, dreams, or hallucinations for there to be any other explanation. The show does have its trippy parts...

Friday, September 08, 2006

peeve no. 242 is free pdf printing in hp-ux

I had a lot of masochistic fun over the past few days trying to find the right combination of program and arguments to perform free pdf printing on the HP-UX system at work. I tried acroread first, obviously, but any attempts to run it resulted in complaints about the missing libgdk_pixbuf shared library.

Believe it or not, the HP-UX machine is not used for graphical tasks. At all. Any GUI programs are immediately out, not merely because I, the grunt programmer, have no direct access to the servers, but especially because I was looking for how to print a batch of pdfs from a script. Perhaps we could use one of the remote X session utilities, but it's definitely not a priority; it's a server, for Bob's sake.

I remembered ghostscript at this point, and happily enough the HP people have a binary package for it. I installed it, looked over the man page for a little bit, and started with the device/output file combination that seemed the most applicable. No good. So I attempted some other combinations, pulled up the ghostscript online documentation for my version, and began a ritual of 1. entering a command, 2. going to the printer, 3. viciously crumpling the useless printer output, 4. going back to the cube.

After too much of this, I admitted to myself that I knew nothing. I looked up the spec for the printer. PCL is good, but PostScript is not. That explained some of my trouble...I went back to the ghostscript online docs, but this time I searched for PCL. Oh joy. For this version of ghostscript, I read this: "Ghostscript as distributed includes the PostScript interpreter; there are also interpreters for PCL 5e, PCL 5c, and PCL XL, which are not currently freely redistributable and are not included in the standard Ghostscript package." But I chose to ignore this, so I started a new general web search. Finally, I came upon a SUSE manual that gave the magic combination: use device "ljet4" and output to a temp file, then print the temp file. What a productive way to spend my time...
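For the record, the winning combination boils down to something like this, sketched in Python for scripting the batch (the gs flags are standard; the printer queue name is made up, so substitute your own lp destination):

    import os
    import subprocess
    import tempfile

    def print_pdf(pdf_path, printer="lab_pcl_printer"):
        # Render the PDF to PCL with ghostscript's ljet4 (LaserJet 4,
        # PCL 5e) driver, then send the temp file to the printer.
        fd, pcl_path = tempfile.mkstemp(suffix=".pcl")
        os.close(fd)
        try:
            subprocess.check_call([
                "gs", "-dBATCH", "-dNOPAUSE", "-dQUIET",
                "-sDEVICE=ljet4", "-sOutputFile=" + pcl_path,
                pdf_path,
            ])
            subprocess.check_call(["lp", "-d", printer, pcl_path])
        finally:
            os.remove(pcl_path)

    print_pdf("report.pdf")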

Friday, September 01, 2006

.Net: He will join us or die

Have I become a mere link blog? One of those newsy blogs that's constantly linking elsewhere and making snide comments, with no original content? How can I justify my existence if I'm just one more "talking head" poring over RSS feeds? What--ah, screw it.

Somehow I ended up at Lisp is sin while I was following a thread along the warp and woof of the Web. I'm glad I did. Sriram's main point is the same one that many others make: over time, everyone is rediscovering the features of Lisp. Sriram's unique spin is that the .Net side of the house may get there sooner. Some evidence: Linq, lsharp, fsharp (which I've reported on in other posts), anonymous delegates, iterators (what the rest of the world calls "generators"). In the example from the blog, the C# 2.0 delegate keyword appears to be the stand-in for lambda. Does a C# 2.0 anonymous delegate form a true closure, with its own lexically-scoped copy of the variables in effect at the time of definition? I don't know. At work we're not yet using 2.0, because our vendor isn't.

Once you finish digesting that link, which is long and informative, read this pdf document by Erik Meijer about how functional programming has reached (will reach) the masses in the form of Visual Basic. I don't know what to believe in anymore...

For Sriram, Lisp is sin because it tempts him to flirt with it over and over again, but without leading to deep, lasting satisfaction. According to him, because of obstacles like a steep learning curve and a lack of libraries, Lisp in its current state cannot thrive in the mainstream. In fact, people have foretold its death for years. A .Net advocate far more militant than Sriram might say, "Lisp and its brethren, or at least the features therein, will join .Net or die." (By the way, I personally believe that popular programming languages never die; they just get reincarnated).

Bonus link: It's not as if Joel on Software needs me linking to it, but in Language Wars he takes a pretty definite stance: for serious, scalable, maintainable, productive Web development, you have merely a handful of choices, but as long as you choose from that group, you can just use whatever you're most comfortable with since the alternatives are about equally capable. Python is "half" on his list, but Ruby and Lisp and Perl are not. Admittedly, I'm not in a cutting-edge startup (far from it), but I don't think he's giving the up-and-comers enough credit. This conservative, enterprise, entrenched approach to Web development has a great deal of inertia, so we shouldn't be surprised if something else shiny and new sneaks up from behind and takes over, like Java did to C++.