Friday, December 28, 2007
resolving the question of new year resolutions
- The point of completion of another solar revolution could be set anywhere in a cyclical calendar. A year is an additional numerical designation applied to the calendar to preserve a linear (or sinusoidal, because each year is a cycle?) reckoning of time. In other words, the switching of the year is like going from 19 to 20. Oooo, the "1" changed to a "2". How fascinating. Who are the people obsessed with "boring numbers" now? (Of course, time isn't perfectly linear, but under the circumstances most of us operate in, this assumption is accurate enough.) Just as a new year could be started at any time in the calendar, new resolutions could be started at any time in the calendar.
- Similarly, the passage of time which occurs as one year ends and another begins isn't at all unique. Someone looks about the same a day before and a day after his or her age changes, tattoos or hangovers notwithstanding. When the "calendar's birthday" happens, the interval of time that has passed isn't a year; it's a day. Days change...daily. Celebrating the passage of mere time is an odd exercise when time is always passing. We're always waving farewell to the past and planning for the future. If a person thinks of a new resolution on Dec. 22 and waits until Jan. 1 to do something, which is more significant and concrete: the action the person took on the "magical" first day of the new year, or more than one week's accumulation of days of inaction?
- From the standpoint of results ("metrics", if you want to annoy people), resolutions are not enough. This isn't news, but that won't stop the news from producing several fluff segments about it. Traditionally, people make resolutions, attempt to follow the resolutions, fail, and perhaps try again the next time the digits of the year increment. For a self-proclaimed civilized society of knowledge workers in the information age, this approach is remarkably careless. Although an inspiring goal is vital for the sake of motivation, resolutions should include other items: sub-goals/milestones, plans, contingency anticipation (you don't expect the universe to cheer you on, do you?), rewards, punishments, support, recognition of difficulty. Moreover, if someone isn't willing to "bother" with such details, that may be a sign he or she would truly prefer for the resolution to remain in daydream-land, where valuables have no cost.
- Some excellent character-based humor has come out of the tendency for people to overlook their greatest flaws as they form resolutions about comparatively trivial minutiae. We shouldn't be surprised. Personal qualities often serve as both strengths and weaknesses, leading people to think the quality is no problem, or to think the people around them are the ones with problems (isn't marriage challenging?). And as for the flaws people are oblivious to...would they still have them if they were aware of them? A pivotal change to one's path requires a clear view of where one currently is. Before drawing up new year resolutions, perhaps one should spend time considering: Who is responsible for past failures? Who is standing in the way? Who is the saboteur? One might find the unpleasant answer by looking himself or herself in the I.
- Furthermore, after deciding on a resolution, the plausibility of the resolution bears evaluation. Without extreme tactics (think of those who talk about "breaking down newbies to rebuild them"), sustainable behavior modification happens slowly and gradually. In my opinion, not everyone is capable of every behavior, every action, every habit. Bodies differ. Genes exert limits and tendencies. Within those parameters, dedication can accomplish a lot, yet some resolutions are, realistically speaking, improbable to impossible for some people. Don't let failure to reach an implausible or impossible goal steal the satisfaction of doing something worthwhile. Readjusting the resolution is still preferable to trashing it.
Thursday, December 20, 2007
peeve no. 254 is "argument from common sense"
In the midst of an argument, the most satisfying retort is exposing the opponent's reasoning as a fallacy, such as one of these. But the most frustrating reasoning to deflate is reasoning that's not even wrong: its truth value is meaningless, mu, zero (and not the way C conditionals judge zero). The "argument from common sense" is in this category of fallacy. When someone advances an idea on the basis of, or defended by, what he/she calls common sense, no recourse is available. Possibly one could gain some satisfaction by responding with "I find your abundance of faith in unexamined assumptions disturbing", but that still wouldn't be a real counterargument.
Real counterarguments to the argument from common sense would be meaningless, because the argument from common sense is meaningless, because common sense is meaningless in an objective debate. I don't mean that the concept of "common sense" is meaningless, but that its very usefulness is its function as a catchall term for implicit knowledge. It doesn't have a fixed definition. Without sharp definitions, debates can endlessly spin in confusion purely from misunderstanding of what each participant is communicating. (Not incidentally, this is why the legal documents of a society under the rule of law are careful to define all important nouns. If you use the GPL, you really should know exactly what a "derivative work" is.)
Moreover, a bare argument from common sense exudes sheer ignorance of observer bias and disregard for rigorously evaluating truth via proof--how much of common sense is the result of investigation, as opposed to culture and mindless conformity? As I've heard some people say, with tongue-in-cheek, "common sense isn't common". A subset of common sense knowledge for one person, or even an entire group, might not be a subset of common sense knowledge for another person, or even another entire group. Is it common sense to use global variables sparingly, if at all? Is it common sense to indent code in a consistent manner? Is it common sense to change the default/root password? Is it common sense to finally cease using <font> tags, for Bob's sake? Maybe for some people, those procedures are so blindingly obvious that using the description "common sense" is pretty much an insult to the words. For other people, perhaps not...
The argument from common sense is not an argument. If trying to justify a statement by referring to common sense, find some other way. Look closer at what "common sense" really means in that specific case. At least make your assumptions explicit, so everyone knows you aren't using common sense as a logical hand-wave (no need to look here, it's common sense!). Common sense has done a lot for you. Don't push it into being something it isn't.
Tuesday, December 18, 2007
seriously, two movies based on the Hobbit?
Overall, this seems like good news to me. A good adaptation of The Hobbit has been a long time coming. But I never thought two movies would be necessary. LOTR is usually sold as three books, and each book is dense. I've always considered The Hobbit to be the "gateway drug" for LOTR: less intimidating, generally not as potent/complicated (look, fewer subplots!), and full of hints of what the heavier stuff is like. Looking back at how the adaptation worked for LOTR, massively cutting here and adding there until we had three long movies (longer if you opt for the extended DVDs), I'm worried about what will happen this time around, when the problem is reversed: not enough story for two movies. Will it turn out like King Kong?
Yet another problem will repeat itself for the screenwriters: the choice of which scenes belong in which movie. In retrospect, Two Towers may have been diminished because of the parts that shifted to Return of the King. I'm curious where the dividing line will fall for two Hobbit movies. The book's sense of scale shifts toward the end; from the point Smaug appears onward, the book is suddenly no longer about just Bilbo and the dwarves, but about disagreements between peoples culminating in the Battle of Five Armies. Cut the story off too soon, and the first movie won't have much of a climax to speak of. Cut the story off too late, and the second movie won't have much plot left to cover at all. This should be interesting to observe.
Monday, December 17, 2007
why "Art Vandalay" writes Rippling Brainwaves
However, the third item in the list, "No Information on the Author", deserves further mention. The item's word choices are emphatic, so I'll quote it in its entirety.
When I find well-written articles on blogs that I want to cite, I take great pains to get the author's name right in my citation. If you've written something worth reading on the internet, you've joined a rare club indeed, and you deserve proper attribution. It's the least I can do.
That's assuming I can find your name.
To be fair, this doesn't happen often. But it shouldn't ever happen. The lack of an "About Me" page -- or a simple name to attach to the author's writing -- is unforgivable. But it's still a problem today. Every time a reader encounters a blog with no name in the byline, no background on the author, and no simple way to click through to find out anything about the author, it strains credulity to the breaking point. It devalues not only the author's writing, but the credibility of blogging in general.
Maintaining a blog of any kind takes quite a bit of effort. It's irrational to expend that kind of effort without putting your name on it so you can benefit from it. And so we can too. It's a win-win scenario for you, Mr. Anonymous.
These are excellent points excellently expressed. I hope the rebuttals are worthy:
- Like any blogger, I'm always glad to discover that my precious words are connecting to someone rather than going softly into the Void. However, I think it's up to me to decide what I "deserve". Attribution to my proper name isn't one of my goals, because becoming a celebrity is not one of my goals. Crediting the blog and linking back is thanks enough, because that action has the potential to prevent future precious words from going softly into the Void. And by all means, be ethical enough to not plagiarize.
- This blog consists of my ideas, the titular Rippling Brainwaves. It isn't about (original) news. It isn't about (original) research or data, unless you count the anecdotal set of personal experiences I use for reference. The information in this blog, which is mostly opinions anyway, can and should be evaluated on its own merits. In this context, lack of a byline should have no bearing on the credibility of the author, the writing itself, or "blogging in general". Here, the ideas stand or fall apart from the author. Think of it like a Turing Test: if a computer were writing this blog, or a roomful of monkeys at typewriters, etc., would that make any difference, in this context? Part of what makes the Web special, in my opinion, is the possibility that here, as in the World of Warcraft, everyone can interact apart from the prejudices and spacetime limitations implicit in face-to-face/"first life". Objectivity++.
- Given that credibility is not in question, and I function as the writer of this blog, not the topic (item 8, "This Ain't Your Diary"), I'm not convinced that my identity is of interest. I don't mind stating that I am not a famous individual, I do not work at an innovative software start-up or even at one of the huge software corporations, I am not a consistent contributor to any FLOSS projects, and in almost all respects my often-uneventful but quietly-satisfying life is not intriguing. More to the point, I don't want anyone who reads this blog to "color" my words with knowledge of the writer: don't you dare belittle my ideas because of who I am.
- Probably the most childish or indefensible reason I have for being a "Mr. Anonymous" is anxiety about real-life repercussions from what I write, to both career and personal relationships. I also appreciate the freedom of expression which comes from disregarding such repercussions, and I believe uninhibitedness benefits the readers, too. Who wouldn't like the chance to say whatever one wants? I'm nicer in person, yet I'm guessing that's the typical case. It's rare to hear someone comment, "I don't have road rage. I only have hallway rage."
- Lastly, I won't dispute the assertion that "Maintaining a blog of any kind takes quite a bit of effort". Perhaps I should be putting in more, at least in order to achieve more frequent updates. Currently, and for the foreseeable future, I will post as I wish, and not until I judge the post is worthy and ready. I started blogging for the same reasons as many others: 1) I enjoy doing it, 2) I feel that I (at times) have valuable thoughts to share, 3) I hunger for greater significance, 4) I wish to give back to the Web, 5) I like communicating with people who have common interests, 6) I relish correcting and counterbalancing what others write. Those reasons are how I "benefit" from the blog. Are those reasons "irrational"? Eh, if you say so. Either way, I can't stop/help myself. As I keep repeating, this blog isn't about me, my career, or my finances. Ad income wouldn't earn much anyhow, and even if it did, I wouldn't want to look like someone who's trying to drive Web traffic for profit. The blog is about what I'm compelled to publish. (And inserting ads would morally obligate me to use my real name. Hey, if you're planning to potentially profit off people, you should at least have the decency to tell them where the money's going.)
Friday, December 14, 2007
peeve no. 253 is overuse of the word 'satire'
Regardless of the above, satire does have an objective definition, and that definition doesn't match up with some of the more common usages of "satire" I've been reading. I bet some of you same offenders have been misusing "irony" and "decimated". As I understand it (he yelled from his padded cell), satire is the timeless technique of applying the humor of exaggeration in public discourse to starkly emphasize the ridiculousness of an opposing viewpoint. Satire is a part of language; it predates telecommunication, mass communication, photography, and practical electricity. Effective satire, especially satire which stands a minute chance of influencing others, requires a corresponding degree of intelligence and wit. A Modest Proposal is an excellent example. In politics, which is a subject I attempt to studiously avoid in this blog even in the face of an imminent presidential election year, satire is a welcome diversion from the typical "shout louder and more often" tactic. I respect satire.
Having described satire, I can be clearer about what satire is not. Admittedly, my idealization of satire may not perfectly fit the official/academic category of satire.
- Satire is not synonymous with political humor, because not all political humor is satire. A joke which has a political subject might not be a satirical joke.
- Satire is not synonymous with sarcasm, although both include the idea of statements whose intended purpose is to express the opposite of their literal meaning. Both include the idea of statements that attack something. (Side comment: some comments I've heard called "sarcastic" should be called "facetious".) However, sarcasm is the more general of the two. Satire may be an extended application of sarcasm to a particular viewpoint.
- Satire is not an attack on a politician's personal appearance, personality, mannerisms, etc. It's an attack on his or her beliefs or actions.
- Satire is more than a hodgepodge of one or two line quips on political issues. Satire employs greater depth in narrower focus, as it attempts to illustrate that a viewpoint is not merely funny, but ludicrous.
- Satire is more than creating unflattering fictional caricatures of an opponent or viewpoint--perhaps that technique is satire, but no more than the lowest form of it. Satire's goal is to make a point. Having the fictional caricature say something like "rustle me up some ding-dongs" may be worth a chuckle (but hardly more), yet it neither discredits follies nor imparts wisdom.
Tuesday, December 04, 2007
peeve no. 252 is artless programming
However, I can't deny the existence of programming style, nor can I ignore its manifestation or absence in any particular piece of code. The level of style exhibited by code has been analyzed and quantified in a few ways: lists of code "smells", program-structure calculations such as cyclomatic complexity, lint-like tools for flagging potential problems (tried JSLint, by the way?), and, of course, automated style-enforcement programs. The sense of style I am describing includes and goes beyond all those.
It also goes beyond bug reduction, the province of peeve no. 251. Code listings A and B can be equally bug-free, but the level of style in A can still be unmistakably superior to that in B. Style is part of what Code Complete tries to communicate. Duplicated code has less style than modular code. Cryptic variable names have less style than informative variable names; so do verbose variable names. Objects and functions that overlap and do too much have less style than objects and functions with clear-cut responsibilities. On the other hand, style indicators are more what you'd call guidelines than actual rules, because style is an elusive concept which escapes hard definition.
One example of the difference between code with style and code without style has been burned into my memory. In one of my first few undergraduate classes, the professor divided the students into three groups to write a pseudo-code algorithm for translating a day of the month into a day of the week, but only for one specified month. Thanks to my influence (I was the first one to suggest it, anyway), the group I was in used the modulus operator (%) as part of the algorithm. One or both of the other groups made an algorithm without the modulus operator, a patchwork algorithm built from several simple "if" statements my group's algorithm did not need. All the algorithms solved the (trivial) problem of producing the correct output for each input; the benefit of having several people mentally trace/check the code was probably why the students broke into groups in the first place. Need I state which of the algorithms had the most style, regardless of my all-too-obvious bias?
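A minimal Java sketch of that contrast, reconstructed from memory rather than from the actual classroom pseudo-code (the month's starting weekday and the day numbering are invented assumptions for illustration):

```java
// Contrast sketch: day-of-month -> day-of-week for one fixed month.
// Assumption for illustration: the 1st of the month falls on a Sunday,
// and days are numbered 0=Sunday .. 6=Saturday.
public class DayOfWeekDemo {

    // The modulus version: one line of actual logic.
    static int dayOfWeekWithModulus(int dayOfMonth) {
        return (dayOfMonth - 1) % 7;
    }

    // The patchwork version: produces the same answers, but the
    // repetition hides the underlying pattern.
    static int dayOfWeekWithIfs(int dayOfMonth) {
        int d = dayOfMonth - 1;
        if (d >= 7) d = d - 7;
        if (d >= 7) d = d - 7;
        if (d >= 7) d = d - 7;
        if (d >= 7) d = d - 7;   // enough repetitions for a 31-day month
        return d;
    }

    public static void main(String[] args) {
        for (int day = 1; day <= 31; day++) {
            assert dayOfWeekWithModulus(day) == dayOfWeekWithIfs(day);
        }
        System.out.println("Both algorithms agree; only one has style.");
    }
}
```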
Here's another quick example, from a few days ago. Out of curiosity, I was looking over some checked-in code. A utility class contained a handful of methods. One of those methods had many complicated steps. In defense of the code's style level, the method's steps were sensibly grouped into a series of "helper" methods, called in succession. The gaffe was the visibility of those helper methods: public. Is it likely that other code would try to call any of those helper methods individually? Is it even likely that other code would use that narrowly-applicable utility class? No on both counts. The public visibility of the helper methods causes no bugs, and realistically speaking will never result in bugs. Nevertheless, objects that "overshare" their contract/interface with other code represent poor style.
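A hypothetical Java reconstruction of the pattern (the class and method names are invented, not the actual checked-in code):

```java
// Only process() is part of this class's real contract; the helper
// steps are implementation detail and should stay non-public.
public final class ReportExporter {

    // The one method other code actually needs.
    public String process(String rawData) {
        String cleaned = stripWhitespace(rawData);
        String normalized = normalizeLineEndings(cleaned);
        return appendChecksum(normalized);
    }

    // Declaring these public would cause no bugs today, but it widens the
    // published contract for no benefit -- poor style, not poor function.
    private String stripWhitespace(String s)      { return s.trim(); }
    private String normalizeLineEndings(String s) { return s.replace("\r\n", "\n"); }
    private String appendChecksum(String s)       { return s + "#" + s.hashCode(); }

    public static void main(String[] args) {
        System.out.println(new ReportExporter().process("  line one\r\nline two  "));
    }
}
```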
The programmer who checked in this code is not dumb, yet he had a style "blind spot". Style is subjective, but it can be learned. Constructively-minded code reviews should put an emphasis on improving the style of everyone involved.
If you've read Zen and the Art of Motorcycle Maintenance, you may also have noticed that its conclusions about language rhetoric resemble the above conclusions about programming style. Quality craftsmanship can arise from the mental melding of craftsman and material. It's neither objective nor subjective, neither art nor science. A common expression among programmers is that a language or framework "fits my brain". Artless programming proceeds without any attempt to achieve this deep awareness.
Wednesday, November 28, 2007
drop the superiority complex
Neither career nor talents grant anyone the authority to degrade everyone else. On a website aimed at computer technology professionals, in a discussion on a non-tech topic, I just read an ugly comment stating (in different words) that a previous commenter was unlikely to have anything valuable to say to "people who think for a living". I'm not surprised or bothered by attacks on basic credibility; disagreement without acrimony would be the real anomaly. What makes me cringe are the haughty assumptions behind the statement:
- People who work on computer technology think for a living.
- Others, like the statement's victim, don't think for a living.
- "Thinking for a living" is a qualification for having worthy opinions to express.
However, people who don't fall into those categories can still make coherent, valid points. They may be smart, clever, and accomplished. Their problem-solving skills and ingenuity may be impressive. They may have a wealth of skills for human interaction (which it would be foolish not to learn from). The striking creativity they may exhibit is partly why they want PCs at all. "Thinking for a living", as opposed to dumb repetitive labor like ditch-digging, encompasses many options besides working with computer technology.
I know people who either can't do my chosen career, or don't want to. That doesn't make them inferior. After all, the fact that I may not be able to succeed in their careers, or want to, doesn't make me inferior to them...
Friday, November 23, 2007
remembering Beast Wars
- Memorable characters. I'm inclined to think this is common to all Transformers series. Admittedly, those looking for subtlety and balanced characterization should perhaps keep looking. Those who want to be entertained by a rich variety of strongly-defined personalities (so strongly-defined that it borders on overexaggeration) should be satisfied. Of course, defining characters, especially the minor ones, in broad, easily-grasped strokes is the norm for TV shows. Beast Wars had some great ones, and it seems unfair to only list seven: Optimus Primal the just leader, Megatron (different robot, same name) the megalomaniacal manipulator, Tarantalus the insane genius, Black Arachnia the slippery schemer, Rattrap the wisecracking infiltrator, Waspinator the bullied underling, Dinobot the conscientious warrior. The cast of the show changed incredibly often, yet each addition and subtraction made an impact, particularly in the second season.
- Malevolent aliens. This element of the show is not as corny as it sounds; it acts more as a bubbling undercurrent of anxiety or a convenient plot driver/game-changer than a central focus. An advanced race of alien beings has an intense preexisting interest in the planet of the series' setting, and to them the crash-landed, embattled transformers are an intolerable disruption. Yet the sophisticated alien race always acts indirectly through the use of proxy devices of incredible power (only at the very end do they send a single representative). The episodes that actually do revolve around the aliens' actions all have two-word titles starting with the letters "O" and "V". These episodes number no more than a handful, but they include the two episodes which form the exhilarating cliffhanger finale of the first season.
- Advanced but limited robots. Some fictional stories strive to be plausible by trying to match, or at least not massacre, the unwritten rules of reality: excessive coincidences can't happen, people can't act without motivation, human heroes and villains can't be entirely good or entirely bad. Other fictional stories are set in a fictional reality with a fictional set of rules, but the story must still be "plausible" to those rules. The Beast Wars "reality" has its own set. Although a transformer can be endlessly repaired from devastating injuries, its true mystical core, known as a spark, must not be lost, else it ceases to be the same animate object. Too much exposure to the "energon" in its surroundings will cause it to overload and malfunction, but transforming to a beast form nullifies the effect. A transformer in suspended animation as a "protoform" is vulnerable to reprogramming, even to the extent of changing allegiances. Transwarp technology operates through space and time. However, the attention to internal consistency is tempered by...
- A sense of fun. Beast Wars was filled with humor. Robots that can take a beating are ideal for slapstick gags (arguably, the show takes this freedom too far on a few occasions). A few of the characters have bizarre and unique eccentricities, such as speech patterns or personality quirks, which are mined for amusement. For instance, one transformer has been damaged into confusing his consciousness with his ant form, so one of his titles for Megatron is "queen". The transformers' dialog is peppered with high- and lowbrow jokes, mostly at each others' expense. Between Rattrap, Dinobot, Megatron, and Black Arachnia, sarcasm isn't in short supply. When one transformer claims time spent in a secretive relationship with an enemy to be "scout patrol", Rattrap comments, "Find any new positions?"
- Battles galore. As befits a show whose title contains "wars", almost every episode contains one or more conflicts. The most common type is shootouts, naturally, but the wily transformers on either side employ diverse strategies and tactics to try to gain a decisive advantage: hand-to-hand combat, communication jamming, viruses/"venom", ambushes, stalking, feints, double-crosses, gadgets made by Tarantalus, etc. Surprisingly, Megatron stages a fake "defeat" of his forces in one episode (so a spaceworthy craft will be forged by combining everyone's equipment), and calls a truce that lasts for a few episodes (so he can refocus his attention on the imminent threat from the aliens). It's probably unnecessary to note that these battles are bloodless, even in the minority of battles that happen while in beast form; however, beheading and dismembering (and denting) are common results of warfare. In fact, if Waspinator weren't so often assigned to collect up the pieces for later repair, he would have very few functioning comrades.
- Connections to the original series. Foremost is the fact that the transformers of this series are descendants of the original transformers, whose battles in the original series are collectively called the Great War. Autobots preceded the basically-decent maximals. Decepticons preceded the basically-aggressive predacons. Intriguingly, during the Beast Wars the maximals and predacons are officially at peace, although the maximals apparently exert greater control than the predacons. Megatron is a rogue who was openly pursuing resources for war before the maximal research vessel was diverted from its mission to chase Megatron to the planet, ultimately causing each spacecraft to crash. To state more of the connections between Beast Wars and the original series would spoil too much, because over time the writers increasingly intertwined the two series.
- Vintage CGI. You may think that a CGI TV series that started in 1996 wouldn't be as visually impressive now. You're right. Fortunately, the people who worked on the show were all too familiar with the limitations, which means they generally did what they could do well. They avoided (or perhaps cut in editing?) what they couldn't. Apart from a few often-used locations, the "sets" are minimal. The planet has life-forms that aren't usually seen, sometimes giving the impression that the transformers are the only objects in the world that move around. On the other hand, the "shots" are creative and effective in composition, angle, zoom, etc., conveying mood and expressing information such that viewers are neither confused nor bored. The transformers, thanks partly to astute body movements and voice acting, are more emotive than one might guess (really, their faces shouldn't be able to move as much as they do--this is an easy little detail to forget about). Each season's CGI noticeably improved on the previous season's. Just as Toy Story (1995) benefited by rendering toys instead of people, Beast Wars rendered robots. Nevertheless, the unconvincing fur and hair textures on the beast forms are best ignored.
- Not Beast Machines. This is an excellent reason to cover after CGI quality. Beast Machines was the series that followed Beast Wars. It was pretty, angst-y, stark, and altogether dazzling in design. It had a built-in audience of Beast Wars fans...who stopped watching. Beast Machines was just a drag. The comparison to Beast Wars was night and day, even in the literal sense of being much blacker. Beast Machines had the unintentional side effect of illustrating how special Beast Wars was.
Friday, November 16, 2007
has MVC in webapps been over-emphasized?
- The Model comprises the data structures and data processing. Ultimately, the Model consists of the information manipulation which is the prime purpose of the webapp, even if that amounts to mere database lookups.
- The View comprises the components, templates, and formatting code that somehow transform the Model into Web-standard output to be sent to the client.
- The Controller processes HTTP requests by running the relevant 'action' code, which interacts with the Model and the View. Part of the Controller is typically a pre-built framework, since Controller tasks, like parsing the URL, are so common from webapp to webapp. (A toy sketch of the three roles follows this list.)
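As a minimal, framework-free illustration of those three roles (all class names are invented; a real webapp would put the Controller behind a servlet or framework dispatcher rather than a plain method):

```java
import java.util.List;
import java.util.Map;

// Toy sketch of the three MVC roles, with no real HTTP involved.
public class MvcSketch {

    // Model: data lookup/manipulation (here, a canned "database").
    static class ProductModel {
        List<Map.Entry<String, Double>> findAll() {
            return List.of(Map.entry("widget", 9.99), Map.entry("gadget", 24.50));
        }
    }

    // View: turns Model output into Web-ready markup.
    static class ProductView {
        String render(List<Map.Entry<String, Double>> products) {
            StringBuilder html = new StringBuilder("<ul>");
            for (Map.Entry<String, Double> p : products) {
                html.append("<li>").append(p.getKey())
                    .append(": $").append(p.getValue()).append("</li>");
            }
            return html.append("</ul>").toString();
        }
    }

    // Controller: maps a "request" to action code tying Model and View together.
    static String handleRequest(String path) {
        if ("/products".equals(path)) {
            return new ProductView().render(new ProductModel().findAll());
        }
        return "<p>404</p>";
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("/products"));
    }
}
```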
Nevertheless, I wonder if the zeal for MVC has misled some people (me included) into the fool's bargain of trading away a substantial chunk of productivity for a minor uptick in readability/propriety. For instance, consider the many template solutions for implementing the View in JEE. Crossing borders: Web development strategies in dynamically typed languages serves as a minimal overview of some alternative ways. I firmly believe that embedding display-oriented, output-generation code into a template is better than trying to avoid it at all costs through "code-like" tags; markup is a dreadful programming language. Any CFML backers in the audience? I don't mind excellent component tag libraries that save time, but using a "foreach" tag in preference to a foreach statement just seems wonky.
Indeed, it is true that including code inside a template may make it harder to distinguish static from dynamic content, and the burden of switching between mental parsing of HTML and programming language X falls upon the reader. Developers won't mind much, but what about the guy with the Queer Eye for Visual Design, whose job is to make the pages look phat? In my experience, the workflow between graphic artist and web developer is never as easy as everyone wishes. Page templates don't round-trip well, no matter how the template was originally composed in the chosen View technology. A similar phenomenon arises for new output formats for the same data. When one wishes to produce a syndication feed in addition to a display page, the amount of intermingled code will seem like a minor concern relative to the scale of the entire template conversion.
Consider another example of possibly-overdone MVC practices, this time for the Model rather than the View. One Model approach that has grown in popularity is ActiveRecord. Recently my dzone.com feed showed a kerfuffle over the question of whether or not ActiveRecord smells like OO spirit. The resulting rumble of the echo chamber stemmed more from confusion over definitions than irreconcilable differences, I think; in any case, I hope I grasp the main thrust of the original blog entry, despite the unfortunate choice of using data structures as a metaphor for ActiveRecord objects. Quick summary, with a preemptive apology if I mess this up:
- In imperative-style programming, a program has a set of data structures of arbitrary complexity and a set of subroutines. The subroutines act on the data structures to get the work done.
- In object-oriented-style programming, a program has a set of (reusable) objects. Objects are units having data structures and subroutines. As the program runs, objects send messages to get the work done. Objects don't interact or share data except by the messages. The advantage is that now a particular object's implementation can be anything, such as an instance of a descendant class, as long as the messages are still handled.
- Since ActiveRecord objects represent a database record by freely exposing data, other code in the program can treat the ActiveRecord like an imperative data structure. To quote the blog: "Almost all ActiveRecord derivatives export the database columns through accessors and mutators. Indeed, the Active Record is meant to be used like a data structure." You know how people sometimes complain about objects having public "getters" and "setters" even when the getters/setters aren't strictly necessary for the object to fulfill its responsibility? Roughly the same complaint here.
- The blog entry ends by reiterating that ActiveRecord is still a handy technique, as long as its messy lack of information hiding (leaky encapsulation) is isolated from other code to 1) avoid the creation of brittle dependencies on the associated table's schema, and 2) avoid the database design dictating the application design.
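To make the quoted point concrete, here is a hypothetical ActiveRecord-style object in Java (invented names, persistence stubbed out), showing how every column leaks out through accessors and mutators:

```java
// Hypothetical ActiveRecord-ish object: every column is exposed through
// accessors/mutators, so callers can treat it like a bare data structure.
public class EmployeeRecord {
    private long id;
    private String name;
    private double salary;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public double getSalary() { return salary; }
    public void setSalary(double salary) { this.salary = salary; }

    // A real ActiveRecord implementation would issue SQL against the
    // employees table here; stubbed because persistence isn't the point.
    public void save() { /* INSERT or UPDATE ... */ }

    public static void main(String[] args) {
        // Any caller can reach straight through to the column values,
        // which quietly couples that caller to the table's schema.
        EmployeeRecord e = new EmployeeRecord();
        e.setName("Pat");
        e.setSalary(50000.00);
        e.save();
    }
}
```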
In contrast, I'm steadily less convinced of the need for doing that. The number of times I've 1) completely swapped out the Model data source, or 2) significantly tweaked the Model without also somehow exposing that tweak in the View, is minuscule. Similarly, the majority of webapps I work on are almost all View, anyway. Therefore, as long as the View is separate from the Model, meaning no program entity executing both Model and View responsibilities, inserting an abstraction or indirection layer between the two probably is unnecessary extra work and complexity. When the coupling chiefly consists of feeding a dynamic data structure, like a list of hashes of Objects, into the View, the weakness of the contract implies that the Model could change in numerous ways without breaking the call--the parameter is its own abstraction layer. Of course, that flexibility comes at the cost of the View needing to "know" more about its argument to successfully use it. That information has in effect shifted from the call to the body of the call.
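A minimal sketch of that "list of hashes" hand-off, with invented names: the method call itself barely constrains the Model, while the View's body carries the knowledge of which keys exist.

```java
import java.util.List;
import java.util.Map;

// The Model hands the View a generic data structure; the "contract" is
// just the key names the View happens to read, so the Model can change
// shape in many ways without breaking the method call itself.
public class LooseCouplingSketch {

    static List<Map<String, Object>> loadOrders() {                // Model side
        return List.of(
            Map.of("id", 101, "total", 42.50),
            Map.of("id", 102, "total", 7.25));
    }

    static String renderOrders(List<Map<String, Object>> rows) {   // View side
        StringBuilder html = new StringBuilder("<table>");
        for (Map<String, Object> row : rows) {
            // The View must "know" these keys -- the knowledge has moved
            // from the call signature into the body of the called code.
            html.append("<tr><td>").append(row.get("id"))
                .append("</td><td>").append(row.get("total")).append("</td></tr>");
        }
        return html.append("</table>").toString();
    }

    public static void main(String[] args) {
        System.out.println(renderOrders(loadOrders()));
    }
}
```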
Lastly, after retooling in ASP.Net, the importance of a separate Controller has also decreased for me. I can appreciate the necessity of subtler URL-to-code mapping, and the capability to configure it on the application level in addition to the web server level. Yet the concept of each dynamic server page consisting of a page template and an event-driven code-behind class is breathtakingly simple and potent as a unifying principle. Moreover, the very loose MVC enforcement in ASP.Net (although explicit MVC is coming) is counterbalanced by the strong focus on components. To be reusable across Views, components must not be tied too closely to a specific Model or Controller. The Controller code for handling a post-back or other events is assigned to the relevant individual components as delegates. The Model...er, the Model may be the weak point, until the .Net Entity Framework comes around. As with any other webapp framework, developers should have the discipline to separate out the bulk of the nitty-gritty information processing from the page when it makes sense--fortunately, if that code is to be reused in multiple pages, the awkwardness acts as a nudge.
The practical upshot is to use MVC to separate the concerns of the webapp, while ensuring that separation is proportionate to the gain.
Wednesday, November 14, 2007
another case of unintentionally funny
Comparing those two the first one increases the stack to a point until it starts to pop the stack. The tail recursive version does not use the stack to keep a “memory” what it has still to do, but calculates the result imediatly. Comparing the source though, in my view the tail recursive version takes all beaty out of the recursive implementation. It nearly looks iterative.
Hmm...tail recursion making a recursive function "nearly look" iterative. Go figure.
(What makes this funny, of course, is the fact that in situations without mutable variables, tail-recursive functions are the way to achieve iteration. And converting recursion into iteration through an explicit stack is another well-known technique...)
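For anyone who wants the contrast spelled out, here is a small Java sketch (not the code from the quoted post) of the three shapes: plain recursion, the tail-recursive/accumulator form that "nearly looks iterative", and the explicit-stack conversion mentioned above. The JVM doesn't eliminate tail calls, so the middle version still grows the stack; the point is the shape of the code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FactorialShapes {

    // Plain recursion: the pending multiplication is the "memory"
    // kept on the call stack.
    static long plain(long n) {
        return (n <= 1) ? 1 : n * plain(n - 1);
    }

    // Tail recursion: the accumulator carries the result forward, so the
    // recursive call is the last thing done -- it nearly looks iterative.
    static long tail(long n, long acc) {
        return (n <= 1) ? acc : tail(n - 1, acc * n);
    }

    // Recursion converted to iteration via an explicit stack,
    // the other well-known technique.
    static long explicitStack(long n) {
        Deque<Long> stack = new ArrayDeque<>();
        for (long i = n; i > 1; i--) stack.push(i);
        long result = 1;
        while (!stack.isEmpty()) result *= stack.pop();
        return result;
    }

    public static void main(String[] args) {
        System.out.println(plain(10) + " " + tail(10, 1) + " " + explicitStack(10));
    }
}
```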
Tuesday, November 06, 2007
DIY expletives
I've fallen into the habit of saying DIY (do-it-yourself) expletives. Making up expletives isn't difficult. With practice, driven by the frustrations of everyday life, the original sound combinations just flow off the lips like, uh, spittle. Observe these helpful tips, which are more guidelines than rules:
- Keep each individual "word" short. Expletives beyond two syllables can be unwieldy and unsatisfying. The goal is acting as an embodiment of anger and rage--emotions of violent aggression.
- The limitation on the length and complexity of each word doesn't apply to the "sentence" as a whole. Speaking multiple words in rapid succession is great for overall effect, even more so in situations when loudness is prohibited.
- To start a "word", I like to use the consonants k, t, d, m, s. To a lesser degree I also sprinkle in usage of p, h, b, n, g (hard or soft pronunciation, but more often hard), and f. More complicated consonantal sounds can be handy for adding flavor, such as "st","ch", "sp". By all means include gutturals, if you can without hurting yourself or working too hard.
- I like to use the vowel sounds "ah", "oo", "oh". Other vowel sounds can help to break up monotony, perhaps in the middle or end of a statement, but in any case shouldn't draw attention too far away from the consonants. Following up a vowel sound with another vowel sound ("oo-ee" for example) is probably not conducive to maintaining peppy rhythm and tempo.
- "Words" with multiple syllables (not more than two or three per word, recall!) are simply constructed by combining monosyllabic words.
- Emphasize whichever syllable you like. Variety is what I prefer.
- The last tip is counterintuitive: don't let the preceding tips slow you down or stop you from creating a word that feels right. Practice and developing the habit are more important.
- "Mah-TOO-choh!"
- "Fah soo poh NOOST!"
- "KOH-dahb hoom BAYSH-tah!"
Tuesday, October 30, 2007
stop comparing TV shows to Buffy
- As near as I can tell, it doesn't do fledgling shows any favors.
- Buffy ended years ago. Years. At what point will this obsession with crowning a successor end?
- If you want to watch a series like Buffy, why not, er, watch Buffy? It's available for home viewing, don't-cha-know.
- The people who ramble on and on about postmodernism (excuse me, pomo) will remind you that it's flourishing these days, Buffy or not. Mixing genres is common, and so is flipping audience expectations upside-down/inside-out.
- Continuing in the same vein, Buffy's uniqueness was not in its elements but in the combination of those elements. A show with hellish stuff in it is not enough to be the "next Buffy". Neither is a show with people who name-drop a wide range of artistic references and generally say comebacks that sound pre-scripted. Neither is a show that compares everyday life to extraordinary premises. Even if a show appears to have parts of the Buffy Secret Sauce, it isn't therefore just like Buffy.
- For TV show comparisons in general, remember that Buffy was (and is) probably compared to shows that aired before it. Comparisons don't communicate what a show is, only its similarities. The point is that, stating the obvious, new shows don't need to be Buffy to be good. Evaluate a show on its own qualities.
Monday, October 22, 2007
peeve no. 251 is equating programming and art
Oh, right. The visual distribution of whitespace is entirely beside the point in code appreciation. Know why?
Code has a specific reason to exist. It has a purpose. (When the sentient programs in Matrix Reloaded, whether agents or exiles or Smith, talk about purpose, they're showing a laudable degree of self-awareness.) It does something. It operates. It executes. Data isn't merely one of the concerns addressed by a particular piece of code; data is in a fundamental sense the only real concern. Every cruddy program in existence performs one or more of the universal tasks of creating, reading, updating, and deleting data. True, some programs may do no more than yank pieces of data from here and there, transform or summarize the pieces, and dump out a transitory result. However, what came out (a display screen, a printout) still differs significantly from the data that came in, so data creation has happened. Even games process data: the data is a singularly trivial but highly-lucrative stream resulting from the iteration of a loop of the abstract form "Game State += UI input".
The definite purpose of every snippet of programming code, data processing, is what separates it from art. There is no inherently common purpose of art. Well, besides being the expression of the artist's vision, but a work's success in fulfilling this purpose is necessarily a subjective judgment by the artist, and once the work is finished that purpose is as completed as it will ever be. (Remaking and revising a work to better express the artist's original vision is the creation of a new work. Hi, George Lucas!) This difference in essence between code and art underpins several important distinctions. Although comparing programming and art may be instructive, these are reasons why equating programming and art is not quite right.
- Primarily, programming is objective in a way that art isn't. Code has a specific purpose, which is processing data in a specific way. If a machine executes the code, but the data processing has not occurred correctly, the cause may be complicated and subtle (or so simple as to be overlooked), but something has undeniably failed. The purpose(s) of art are not so explicit and obvious, which means the failure(s) of an artwork are not so explicit and obvious. Horrible art may be "ugly", but horrible programming is borked.
- For computers and code to feasibly/acceptably automate data processing, the objectively-evaluated accuracy just described must be consistently reliable. Therefore, computers and code must be deterministic. Parts of the data processing may be "indeterminate", like a string of pseudo-random numbers, or parts of the execution order itself may be indeterminate, like a set of parallel threads, but taken as a whole, the guaranteeing of accurate results implies determinism. Determinism, in turn, implies that inaccuracies do not "happen spontaneously", but have a root cause. The root cause in any particular situation may in fact be an erratic one, due to flaky memory for instance, but nevertheless the inaccuracy has a determinate cause. More to the point, if the programming is at fault, then this cause is known as a bug. Since all parts of the code are intertwined, and code, while it can be fault-tolerant, certainly isn't self-conscious and self-correcting (that's what programmers are for!), it follows that, speaking in general of code from the metaphor of a "black box", an inaccuracy could potentially be due to any part of the code. When the code as a whole can objectively fail, then every part of the code can objectively fail, too. An artwork is much more forgiving of its parts. Critics of any art form will comment that a work had some "regrettable qualities", but "by and large, succeeded". Programming has a much more menacing antagonist, the bug, and no matter how artistically or aesthetically correct the code is, one lil' bug means the code, at least under some conditions, is fatally flawed.
- Continuing the bug discussion, the dirty little non-secret of programming is that, with few exceptions, no sufficiently complex code is entered without mistakes the first time, or the second time, or the third time. The wizards and geniuses of programming confess to regularly forgetting about some minor detail (is it array.length or array.length()?), and mistyping identifiers (what was the name of the variable for the number of combinations?). Speaking out of my ignorance and lack of artistic talent, the mental exertion/concentration required to perfectly perform the concrete actions of programming--obeying syntax, tracking semantics, remembering the general flow of the design/algorithm, error-free typing--outweighs the similar demand on the artist. In response, helpful programming-oriented editors arose, built for relieving some of the strain with auto-completion and other code assists.
- Moreover, bugs in the manual transcription of a design/algorithm are some of the easier ones to catch. Programming is an excellent illustration of how easy it is to gingerly proceed through a canned series of steps, appearing to conscientiously complete each one, and end up creating complete horsepucky (or, if you prefer, bulldooky). As people swiftly discover after exiting school, unwritten assumptions and errors in reasoning about the data domain can yield code which processes the wrong data, processes data irrelevant to the user's needs, or processes the data insufficiently for easy use. Consequently, transactions with programming clients differ from transactions with art patrons. The art patron (editor, supervisor, commissioning official) has a set of loosely-defined outlines/parameters for the artist to follow, and the art should be acceptable if it is within those boundaries. The programming client quite likely also has a set of loosely-defined requirements, but, because code is not art, code is strictly defined: this input transformed to that output. Loose definitions of the purpose are insufficient for code because loose results from code are insufficient. Take a deliberately simple, crayon-level example (sketched in code after this list): is the purpose of "calculating sales tax" met by code which assumes a tax rate for California? Yes--but only if the input and output are Californian! A programming client who loosely states he or she wants code that "calculates sales tax", receives that code, and runs it in Wyoming, is not happy.
- A greater amount of precision distinguishes programming requirements from artistic ones. Time does, as well. Artworks can be unchanging, and remain sublime expressions of the human spirit--this "requirement" of the artwork doesn't shift. In contrast, as the turn of the century reminded everyone, a shift in requirements around pristine code can make it useless. "Functioning as designed" turns to "bug". It's a cascading effect: the world changes, the data that represents the world changes, the processing of the data changes, the code that implements the data processing changes. No doubt, certain codebases are long-lived, but only by embodying the most general of concepts and nurturing areas of change. emacs continues both because it meets the continuing need to edit text and because it supports arbitrary extensions.
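Here is the crayon-level sales tax example from the list above, as a Java sketch (the tax rates are invented placeholders, not real figures):

```java
// The "calculates sales tax" sketch from the list above.
public class SalesTax {

    // Meets a loose requirement -- but only for Californian input/output.
    static double calculateSalesTaxCaliforniaOnly(double amount) {
        return amount * 0.0725;   // hidden assumption baked into the code
    }

    // The strict definition: this input (amount, rate) to that output.
    static double calculateSalesTax(double amount, double rate) {
        return amount * rate;
    }

    public static void main(String[] args) {
        double purchase = 100.00;
        System.out.println(calculateSalesTaxCaliforniaOnly(purchase)); // wrong in Wyoming
        System.out.println(calculateSalesTax(purchase, 0.04));         // caller supplies the rate
    }
}
```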
The practical conclusion is that the degree of beauty exhibited by code, as appreciated by discriminating programmers, is beside the point if it doesn't imply fewer short-term and long-term bugs. Evaluating and debating code prettiness is not something programmers need to do in order to have substantive discussions about software development. Why is using a popular, well-supported framework likely better than making your own? Yours probably has bugs you don't know about yet. Why is code modularization good? It cuts down duplication, enabling easier and more effective bug fixes. Why is uncluttered syntax good? It affords the programmer more time and attention for the subtler bugs of analysis and design. Why is increasing development agility (whether or not ye gods have deigned you Agile-Approved) good? It minimizes the possibilities for bugs to creep in. Why is understandable code good? It makes the tasks of finding and fixing bugs more productive later.
Don't show and tell me how shiny my code will be, thanks to your silver bullet. Show and tell me how bug-free, how effortlessly correct my code will be, thanks to your silver bullet. I am not an artist.
Thursday, October 18, 2007
looking forward to completing your training
Monday, October 15, 2007
advanced playlists in amarok
A dynamic playlist is really a configurable playback mode that automatically enqueues random tracks for playing and removes played tracks from view as each track ends. Each dynamic playlist uses one or more playlists, including smart playlists, as the source of tracks. (The built-in dynamic playlist "Random Mix" just uses the entire collection.) The user can add or remove or reorder tracks while the dynamic playlist is running, but the dynamic playlist will only add or remove tracks as it needs to, according to its settings. The aim of this feature is to keep the resource use of the player down, of course, and the visible list of tracks to a manageable level. It reminds me of the "stream" or "lazy sequence" concept--if one is only processing one or a small subset of members of the input at once, it's more efficient to sequentially retrieve or produce no more than that input portion for the algorithm, as it proceeds. In actuality, the input stream may be internally buffered for efficiency, though this doesn't matter to the consuming algorithm.
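The same lazy-sequence idea, sketched with Java streams purely as an illustration of the concept (this is not amarok's implementation):

```java
import java.util.stream.Stream;

// The "lazy sequence" idea: produce only as much of the input as the
// consumer actually asks for, as it asks for it.
public class LazyQueueSketch {

    public static void main(String[] args) {
        Stream<String> allTracks = Stream.iterate(1, n -> n + 1)
                                         .map(n -> "Track " + n);   // conceptually endless

        // Like a dynamic playlist, only a handful of upcoming tracks are
        // ever materialized; nothing beyond the limit is generated.
        allTracks.limit(5).forEach(System.out::println);
    }
}
```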
I'm more impressed by smart playlists, which aptly provide another demonstration of the value in storing audio metadata in a database. Smart playlists are defined by setting desired track characteristics. The smart playlist then automatically includes tracks with those characteristics. This is a straightforward idea for relieving users of the daunting alternative task: managing a custom playlist's contents through repetitive interface manipulations (for small playlists and collections this is a chore; for larger playlists and collections it is nigh-unworkable). What turns smart playlists from a helpful aid to a swoon-inducing beloved companion is the comprehensiveness of the create/edit window. The query-building capability is dreamy. Its power is intoxicating. A competing Ajax application (is there one?) would need to work hard to convince a user to stray.
The set of characteristics to query on includes the usual standbys, Title and Artist and Album and Genre. The additional options, which make one go weak in the knees, are Length, Play Count, Year, Comment, First Play and Last Play date, Score, Rating, Mount Point, BPM, Bitrate, and Label. Labels, also known as tags in other contexts, are words associated with tracks in a many-to-many relationship. As usual, the value lies in the ability to create ad-hoc groups using the labels/tags. One smart playlist could contain all tracks with one label, another smart playlist could contain all tracks with another label, etc. This way of creating playlists by label may not seem superior to the technique of creating custom playlists. However, when a new track should be in five different custom playlists, it's easier to drag the five corresponding previously-used labels onto the new track and let the playlists do the grouping work.
To make the process of entering a filtered characteristic still more alluring, after choosing a particular characteristic from the drop-down list, a set of controls to the right of the list changes to only offer the choices that make sense for that characteristic. For instance, a numerical characteristic leads to a drop-down list of connectives such as "is", "is not", "is greater than", "is less than", followed by an input box for numbers (and little increment/decrement arrows to change the value by clicking, if you swing that way). A textual characteristic switches the list of connectives to regexp-like choices such as "contains", "starts with", "does not start with", followed by a combobox whose assisting drop-down portion contains all the actual examples of the chosen characteristic in the collection. After choosing Genre, it might contain Reggae. This is more than "user-friendliness". This is admirable consideration in the partnership between interface and user. The fewer mistakes the user is likely to make, the less aggravation is involved in getting what he (or she) really really wants.
So the depth of each filtering characteristic reaches areas of need the user wasn't aware were there, but the possible breadth is further enamoring. On the far right of each characteristic are two buttons labeled + and -. Clicking + adds a new, identical row of controls for filtering based on an additional characteristic. As you might expect, clicking - on a row of controls removes it and its associated characteristic filter. A group of any number of filtering characteristics, which can grow or shrink on command (I've never craved more than three), is fine, but it lacks some flexibility: it assumes that all the filters must be satisfied simultaneously. What about when a user merely wants one or more of the filters to be satisfied by any one track? The filters the user wants to treat this way are separately grouped on screen under a header that starts with the words "Match Any" (the other grouping's header starts with the words "Match All").
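In code terms, the "Match All"/"Match Any" groupings amount to AND-ing or OR-ing a set of filter predicates. A small Java sketch of that idea, with invented track fields (this is not amarok's internals):

```java
import java.util.List;
import java.util.function.Predicate;

// "Match All" / "Match Any" are just AND / OR over filter predicates.
public class MatchGroups {

    record Track(String genre, int rating, int lengthSeconds) {}

    public static void main(String[] args) {
        Predicate<Track> isReggae   = t -> t.genre().equals("Reggae");
        Predicate<Track> highRating = t -> t.rating() >= 4;
        Predicate<Track> isShort    = t -> t.lengthSeconds() < 120;

        Predicate<Track> matchAll = isReggae.and(highRating);   // every filter must hold
        Predicate<Track> matchAny = isShort.or(highRating);     // any one filter suffices

        List<Track> collection = List.of(
            new Track("Reggae", 5, 200),
            new Track("Rock",   3, 90));

        System.out.println("Match All group:");
        collection.stream().filter(matchAll).forEach(System.out::println);
        System.out.println("Match Any group:");
        collection.stream().filter(matchAny).forEach(System.out::println);
    }
}
```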
Underneath, the window has some options for organizing the resulting list of tracks. The user may order the tracks according to a particular characteristic in ascending or descending order. At the time the playlist is loaded by the player, the tracks can be resorted easily, and a dynamic playlist based on the smart playlist would play tracks randomly in any case, so the order option isn't of much interest apart from its use in conjunction with the limit option. In the same way, using the limit option to keep a playlist small is not necessary if playback proceeds in a dynamic playlist. But put together, the order and limit options could enable the creation of a playlist of the "twenty top-rated tracks less than two minutes long", for instance. (Ratings and scores are different things in amarok, but that's a whole 'nother blog entry.) The "expand" option selects a characteristic to automatically group tracks into subsidiary playlists, in each of which all the tracks have that same characteristic in common. In the playlist choosing pane, the subsidiary playlists appear as "tree children" of the smart playlist--expanding the smart playlist like a folder shows all of its subsidiary playlists indented underneath it as separate playlists. Choosing an expand option is probably not an appreciable gain for most smart playlists, but for huge lists it's valuable.
Did you know that clicking the middle button (or possibly the scroll wheel, depending on the mouse) on the amarok tray icon pauses and resumes? If the devil's in the details, then amarok is my Dark Lord.
Thursday, October 11, 2007
hey Microsoft - fix THIS bug
Obviously, IE isn't the only browser that contains bugs, and its CSS rendering alone should be addressed first, but my teeth hurt a little from grinding over an obtrusive security warning from the browser about normal execution of code on the page.
Tuesday, October 09, 2007
link: the agile memeplex
The article also mentions in passing that the name "Extreme Programming" may have incited both interest and disinterest among developers. Speaking as someone who's generally annoyed by X-treme usage of "extreme", I must agree. (Perhaps "extreme" is the new "awesome"?)
Saturday, October 06, 2007
another Lost coincidence?
Jack Shephard, Lost character.
Coincidence? Yeah, probably. The mere fact I'm wondering about it at all is a consequence of the writers embedding so many other miscellaneous references into the show. In any case, it's still more noteworthy speculation fodder that one of the characters shares the exact name of an extremely well-known philosopher.
Friday, October 05, 2007
db4o articles on developerWorks
I'm still skeptical about how it compares to the usual SQL databases in several ways: storage efficiency/data duplication, performance, security, scalability. It was also interesting to note in the later articles that db4o faces some of the same difficulties as ORM tools: lazy loading, inheritance hierarchies, collections of collections (of collections of collections), refactoring, mutually-referenced objects. As an object database closely integrated into code, db4o has distinct advantages when confronting these difficulties, though. The fact that every object automatically has a unique ID probably helps a lot.
For a new application (greenfield), db4o might be fun to try. Where I work, all the data that matters is stored in "legacy", keep-'em-running-til-they-croak, language-agnostic databases, so db4o isn't an option. Not without a messy bridging/replication layer, in any case. Software that makes data integration much easier is software that can conquer the world.
Friday, September 28, 2007
human brain as VM
But just a few days ago I attended a training presentation in which the source code on the slide had no chance at all of accomplishing what the presenter said it did. I'm pretty sure the presenter wasn't stupid or purposefully careless. My conclusion was that he had composed the code by copying it incompletely from elsewhere or by incorrectly translating an abstract algorithm into its concrete counterpart. Either way, he had most certainly not parsed and traced the program like a "computer" (in quotes because every medium- or high-level language is technically an abstraction over the actual computer's execution of its instruction set and memory).
How common is it for developers to read, write, and otherwise think about code without putting it through the mental lexer, the mental parser, or even the mental Turing machine as a last resort? I ask because I'm genuinely curious. I don't feel that I understand code unless I can visualize how the individual character elements fit together (syntax) and how the data flows through the program as it runs (semantics); is this not the case for others? Is this another instance of my rapport with languages and grammar and logic? Am I wrong to read through not only the prose of a training book, but also the code examples, and then not continue until I can mentally process that code in my mental VM without tossing compile-time syntax exceptions?
I know some (ahem, unnamed) people whose primary mode of development is tracking down examples, copying them, and modifying them as little as possible to get them to work. Example code is an excellent aid to understanding as well as an excellent shortcut in writing one's first program; I'm not disputing that. Patterns and techniques and algorithms should be publicized and implemented as needed; I'm not disputing that either. And I understand that the scary corners and edges of a programming language (static initializers in Java?) are probably better left alone until really needed, to avoid excessive hassles and complications.
However, if someone wants to save development time and effort, especially over the long run (small programs tend to morph into HUGE programs), isn't it better to take the time to learn and understand a language or API, and to reuse code through a proper library mechanism, rather than copying code around like a blob of text and forfeiting one's responsibility to, well, know what's going on? Don't forget that by their very nature copy-and-paste code examples fall out of date obscenely fast. Often, it seems like people write code one way because they soaked it up one day and now they cling to it like a life preserver. Multiline string literals built by concatenating individual lines with "\n", in a language that has real multiline string literals, never cease to amaze me. As a last plea, keep in mind that as good APIs evolve, common use cases may become "embedded" into the API. Congratulations, your obsolete copy-and-paste code example written against version OldAndBusted is both inferior to and harder to maintain than the one- or two-line API call in version NewHotness. Try to keep up.
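To make that last point concrete with an example of my own (in Java far newer than anything discussed here, and with a made-up file name): compare the file-slurping boilerplate that got pasted from program to program for years against the one-line call that later versions of the standard library bake in.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SlurpComparison {
    // Version OldAndBusted: the boilerplate that got copied from example to example.
    static String slurpTheHardWay(String fileName) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append("\n");
            }
        }
        return sb.toString();
    }

    // Version NewHotness: the common use case became a one-line API call (Java 11+).
    static String slurpTheEasyWay(String fileName) throws IOException {
        return Files.readString(Path.of(fileName));
    }
}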
Thursday, September 27, 2007
Schedules Direct is my hero
...thank you.
Friday, September 21, 2007
good-bye Alex
Sunday, September 09, 2007
R5F27 release of knoppmyth
In other breaking news: impersonating a deity should have been forbidden by Roger Federer's programming. Eh, there's always next year.
Friday, September 07, 2007
peeve no. 250 is unnecessary surrogate keys
As with the other peeves, my irritation has subjective and objective aspects. Objectively, a surrogate key has the downside of being fairly useless for queries that return multiple rows--which also makes its index a fairly useless optimization. This is tempered by the syntactical ease of retrieving one known row, but that case is really more of a "lookup" than a "query", don't you think? (I would contrast the selection and projection of tuples, but why bother...)
Another objective danger with unnecessary surrogate keys, perhaps the worst, is the duplication of data which shouldn't be duplicated. By definition the surrogate key has no relationship with the rest of the columns in the row, so it isn't a proper primary key for the purpose of database normalization. If the cable company's database identifies me by an auto-generated number, and I sign up again as a new customer paying promotional rates after I move, the database won't complain at all. In practice, of course, the entry program should aid the user in flagging possible duplicates, but clearly that lies outside the database and the data model. Admittedly, judging whether "Joe Caruthers in Apt B3" is a duplicate of "Joseph Caruthers in Building B Apartment #3" is a decision best left up to an ape descendant rather than the database.
The objective tradeoffs of a surrogate key are worthy of consideration, but my subjective disdain for unnecessary surrogate keys goes beyond that. I feel violated on behalf of the data model. Its relational purity has been sullied. Lumping an unrelated number (or other value) in with a row of real data columns feels out of place, like tacking a complex "correction factor" onto a theoretically-derived equation so it fits reality--except that the unnecessary surrogate key has the opposite effect: it makes the table resemble what it models less. An unnecessary surrogate key leads someone to wonder if the table was the victim of a careless and/or thoughtless individual who went around haphazardly numbering entities to shut the database up about primary keys. Normalization, what's that?
I've seen enough to realize the convenience of a surrogate key, especially for semi-automated software like ORM mappers. It's also clear that sometimes a surrogate key is the only feasible way of uniquely identifying a set of entities, particularly when those entities are the "central hubs" of the data model and the corresponding primary keys must therefore act as foreign keys for a large number of other tables. The technical downsides of a surrogate key can be mitigated by also declaring the table's natural key--a unique constraint on the real identifying columns, for instance.
If the overall design doesn't make obvious the choice of each table's primary key, that's a clue the design should be reevaluated. (You did design the tables before creating them, right?) Don't forget that sometimes the primary key consists of every column. The table that serves as the "connecting" or "mediating" table in a many-to-many join is a good example, because it's essentially a table of valid combinations.
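As a minimal (and entirely hypothetical--the album and artist tables are invented) illustration of that last case, here's the kind of "connecting" table I mean, created through plain JDBC: the primary key is nothing more than the combination of the two foreign keys, with no surrogate number in sight.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class JoinTableSketch {
    // Hypothetical many-to-many relationship between albums and artists:
    // the table is a set of valid (album, artist) combinations, so its primary
    // key is simply every column, and no surrogate key is needed.
    static void createAlbumArtistTable(Connection conn) throws SQLException {
        String ddl = """
            CREATE TABLE album_artist (
              album_id  INTEGER NOT NULL REFERENCES album(album_id),
              artist_id INTEGER NOT NULL REFERENCES artist(artist_id),
              PRIMARY KEY (album_id, artist_id)
            )
            """; // text block syntax from newer Java versions
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(ddl);
        }
    }
}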
Wednesday, August 22, 2007
knowledge labor is a great racket
One good analogy to my situation is an old story I heard once (so I'll probably tell it wrong) about a customer and his auto mechanic. The mechanic looked closely at the customer's engine, fiddled with a few of the parts to check them out, then finally used one of his tools to make a single adjustment. The customer balked at the repair cost...the mechanic had barely done anything! The mechanic remarked that the cost wasn't just for making the adjustment, but for knowing where to make the adjustment.
How much do code monkeys and grease monkeys have in common?
Tuesday, August 21, 2007
lives effected by Hurricane Dean
But reading "whose lives were effected by Hurricane Dean" in a real news article makes me chuckle. If you don't know why, search for "affect effect" on the Web to learn the distinction between the verbs "affect" and "effect".
If your life was effected by Hurricane Dean, your father is a blow-hard. (KA-ching!)
Tuesday, August 14, 2007
the typing debate is a red herring
I've hinted at this previously, but my discovery of Jomo Fisher's blog on the more cutting-edge aspects of .Net reminded me of it. The casually-readable entry The Least You Need to Know about C# 3.0 describes several features that put the dynamic-static (compilation-runtime) distinction in a new light: the keyword "var" (for type-inferred or "anonymous type" variables), extension methods, and expression trees. These additions help make LINQ possible.
As .Net increases its explicit support for dynamic code (as opposed to dynamic code support through a Reflection API), the barrier between the capabilities of a "static" and a "dynamic" language keeps shrinking. If "expression tree" objects in .Net 3.5 allow someone to generate and execute a customized switch statement at runtime, then what we have is a solution with the flexibility of a dynamic language and the performance of a static language. Ideally, the smart folks working on porting dynamically-typed languages to statically-typed VM platforms would accomplish something similar in the implementation innards. The code that accomplishes this feat is a bit daunting, but it is cutting-edge, after all.
Some of those irascible Lisp users may be snickering at the "cutting-edge" label. As they should. I've read that Lisp implementations have been able to employ either static or dynamic typing for many years. Moreover, Lisp uses lists for both data and program structures, so it doesn't need a special expression tree object. It has also had a REPL that made the compilation-runtime distinction mushy before .Net was a gleam in Microsoft's eye. On the other hand, Lisp is, well, Lisp. Moving along...
The way I see it (and echoing/elaborating what I have written before now), there are three reasons why the question of static versus dynamic variable typing has always been, if not a red herring, then at best a flawed litmus test for language comparisons.
- The time when a variable's data type is set doesn't affect the clear fact that the data the variable refers to has one definite data type at runtime. Ruby and Python cheerleaders are fond of reminding others that their variables are strongly typed, thankyouverymuch!--where "strongly typed" means the language doesn't let code perform inappropriate operations on data by silently applying coercion rules to one or more operands. The timeless example is how to evaluate 1 + "1". Should it be "11", 2, or Exception? Strongly-typed languages are more likely than weakly-typed languages to evaluate it as Exception (whether a static CompileException or a dynamic RuntimeException). So dynamic typing is separate from strong typing, precisely because variable typing, a part of the code, is different from data typing, which is what the code receives and processes in one particular way at runtime. Data is typed--even null values, for which the type is also null. Regardless of language, the next question after "what is the name and scope of a variable?" is "what can I do with the variable?", and its answer is tied to the type of data in the variable. In fact, this connection is how ML-like languages can infer the data type of a variable from what the code does with it. Similarly, Haskell's type classes appear to define a data type precisely by the operations it supports. No matter how strict a variable's type is at compile time or run time, once the code executes, such considerations are distractions from what the actual data and its type really are.
- Programming languages are intangible entities until someone tries to use them, and therefore publicity (ideas about ideas) is of prime importance. One example is the stubborn insistence on calling a language "dynamic" instead of "scripting"; with one word programmers are working on active and powerful code (it's dynamite, it's like a dynamo!) while with the other word programmers are merely "writing scripts". Unfortunately, applying the word "dynamic" to an entire language/platform can also be misleading. Languages/platforms with static variable typing are by no means excluded from a degree of dynamism, even apart from support for reflection or expression tree objects. Consider varargs, templates, casting (both up and down the hierarchy), runtime object composition/dependency injection, delegates, dynamically-generated proxies (see the sketch after this list), DSL "little languages" (in XML or Lua or BeanShell or Javascript or...) parsed and executed at runtime by an interpreter written in the "big" language, map data structures, even linked lists of shifting size. The capabilities available to the programmer for creating dynamic, or perhaps true meta-programmatic, programs can be vital in some situations, but in any case it's too simplistic to assume static variable typing precludes dynamic programs. I don't seem to hear the Haskell cheerleaders complaining much about a static-typing straitjacket (or is that because they're too busy trying to force their lazy expressions to lend a hand solving the problem?).
- OOP has been in mainstream use for a long time. I think it's uncontroversial to note the benefits (in maintainability and reuse) of breaking a program into separate units of self-contained functions and data called objects, and then keeping the interactions between the objects minimal and well-defined, especially for large, complex programs. This fundamental idea behind OOP is independent of variable typing, and also independent of object inheritance. Anyone with a passing familiarity with Smalltalk or Objective-C would agree. A language might allow one object to send a message to, or call a method on, any other object, with a defined fallback behavior if the object isn't able to handle the message. Or it might not allow message-passing to be that open-ended. Maybe, for reasons of performance or error-checking, it has a mechanism to enforce that an object must be able to handle all messages passed to it. This "message-passing contract" may be explicit in the code or inferred by the compiler. Most likely, if the language has object inheritance, it supports using a descendant object directly in place of one of its ancestor objects (the Liskov substitution principle). My point is that OOP support may be intimately connected to a language's scheme for typing variables (Java), or it may not be (Perl). A tendency to confuse OOP with a typing system (as in "Java's too persnickety about what I do with my object variables! OOP must be for framework-writing dweebs!") is another way in which the typing question can lead to ignorant language debates.
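To make the "dynamically-generated proxies" item above concrete, here's a minimal Java sketch (the Greeter interface is invented for illustration): an implementation of a statically-typed interface is conjured up at runtime, with a single handler playing the role of a dynamic language's catch-all message receiver.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxySketch {
    // A statically-typed interface...
    interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        // ...implemented by an object generated at runtime instead of a hand-written class.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            if (method.getName().equals("greet")) {
                return "Hello, " + methodArgs[0] + "!";
            }
            throw new UnsupportedOperationException(method.getName());
        };

        Greeter greeter = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);

        System.out.println(greeter.greet("world")); // prints "Hello, world!"
    }
}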
Friday, August 10, 2007
abstract algebra sometimes drives me nuts
However, because shapes in the real world are so complex, geometrical abstractions don't break into my thoughts too often. Abstract algebra is worse, perhaps because its abstractions have minimal assumptions or requirements: sets, mappings, operations, groups, rings. Mathematicians and scientific theorists explicitly apply these concepts all the time, if only for classification.
But I don't want to keep thinking along those lines in everyday life. Specifically, I was thinking about changing the days of the week I water a plant, so the same number of days passes in between. Right now I water on Monday and Thursday, leaving one gap of two days and another gap of three days. (Yes, I realize this problem is most likely not worth the mental expenditure I'm describing--but what are hobbies for?) Mentally starting at Wednesday, I counted in three-day increments until I reached Wednesday again...
...and after I was done I went on a long tangent looking up information about the cyclic group Z7, merely one in this list at wikipedia, but also an important group because it is a simple group. By associating each day of the week with an integer in the range 0-6 (like 0 := Sunday and then assigning progressively greater numbers in chronological order), and setting the group "multiplication" operation to be plain addition followed by modulo 7, the days of the week match that group perfectly. Although people might be a bit bewildered if you mention multiplying Monday by Friday (or Friday by Monday, abelian, baby!) to get Saturday [(1 + 5) modulo 7 = 6].
This group has no nontrivial subgroups (the trivial subgroup is just 0 alone, which is useless because I must assume that time is passing and I must water the plant more than once!). The lack of subgroups implies that no matter what interval of days in the range 1-6 I try, I'll end up having to include every day of the week in my watering schedule!
I mentioned before that I tried 3, which is Wednesday according to the mapping (isomorphism, whatever). Three days past Wednesday is Saturday [(3 + 3) modulo 7 = 6]. Three days past Saturday is Tuesday [(6 + 3) modulo 7 = 2]. Three days past Tuesday is Friday [(2 + 3) modulo 7 = 5]. Three days past Friday is Monday [(5 + 3) modulo 7 = 1]. Three days past Monday is Thursday [(1 + 3) modulo 7 = 4]. Three days past Thursday is Sunday [(4 + 3) modulo 7 = 0]. Finally, three days past Sunday is Wednesday [(0 + 3) modulo 7 = 3]. The cycle is complete, but all I have is the same group I started with--the entire week!
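For the terminally curious, here's the same exhaustive counting handed off to a few lines of Java, using the 0 := Sunday mapping above and trying every interval from one to six days:

public class WateringOrbits {
    static final String[] DAYS = {
        "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"
    };

    public static void main(String[] args) {
        // For each candidate interval, start at Wednesday (3) and keep adding
        // the interval modulo 7 until we return to the starting day.
        for (int step = 1; step <= 6; step++) {
            StringBuilder orbit = new StringBuilder();
            int day = 3; // Wednesday
            do {
                orbit.append(DAYS[day]).append(" ");
                day = (day + step) % 7;
            } while (day != 3);
            // Because 7 is prime, every nonzero step generates the whole group,
            // so each orbit printed below lists all seven days.
            System.out.println("every " + step + " days: " + orbit.toString().trim());
        }
    }
}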
I'm not sure what's worse: that all this pondering about abstract algebra didn't lead to any useful insights, or that I got caught up in doing it. I don't know how I manage to get anything done. Ghaah, I can be so nerdy even I can't stand it...
Thursday, August 09, 2007
willful ignorance
I doubt I need to cluck my tongue and admonish everyone to ignore the usual ideological conflicts and use what works. (The fallacy of identifying programmers with languages was the topic of a rant a while ago.) My new revelation is recognizing that one of the ingredients of robust technological snobbery is a disturbingly natural human trait: willful ignorance.
Willful ignorance is the compulsion to either 1) avoid learning something which could conflict with a truth someone is personally invested in, or 2) learn something just enough to find valid reasons for dismissing it. Willful ignorance is everywhere, because people not only want something to be true but need it to be true. Once someone's mental and emotional energy are mixed up with some piece of information, employing a mechanism to defend it is as instinctual as tensing up when a toothy predator pounces at him or her.
Interestingly, this tendency also applies to people who pride themselves on their rationality; they need it to be true that rationality matters and works. For instance, someone who has the grating habit of correcting others' grammar to make it internally consistent may be someone who desperately wants both to treat language as a logical system and to believe in the importance of logic for valid thinking. Such a person would be well-advised to remain willfully ignorant of the notion of language as a creative act of communication. Starting from the wrong axioms more or less guarantees the failure of rationality to reach the right conclusion. I remember a time when I couldn't figure out why some code under test never produced the expected result. I took a short break after realizing the test itself had been wrong.
Therefore the proper antidote to willful ignorance is not rationality alone (rationality can mislead someone into excluding too many possibilities). But a wide-open mind isn't the antidote, either. Accepting information without discrimination leaves one open to each passing fad, or paralyzed with indecision. The best strategy is to make one's mind a centrifuge. Pour in whatever ideas are lying around, then stir and process and compare until the useful portions are separated out. If one of the challengers to your beliefs has value, isn't it better to figure that out as soon as possible? At the risk of getting off point, I respect the Java thinkers who heard about Rails and worked to apply some of its ideas to the Java world, instead of willfully ignoring Rails altogether or trying to immediately convert to it (thereby incurring significant costs).
The above was about willful ignorance. It doesn't apply to ignorance born out of a lack of time/energy. I haven't been able to look into Seaside at all, for that very reason.
Wednesday, August 08, 2007
design patterns: just relax
I suppose I just don't understand the hubbub. As I see it, a design pattern is a technique to solve one or more problems by trading complexity for flexibility. The tricky part isn't recognizing a problem in existing code and then refactoring it to use a relevant design pattern; the tricky part is deciding when (and if) it makes sense to apply a design pattern preventively. Maybe part of the reason some people hold their noses around Java code is their dislike of design-pattern overuse.
Given that the goal of software development is to solve information problems, design patterns are organized descriptions of solutions. Solving a problem with a design pattern is fine. Solving a problem without a design pattern is even better, because that means the problem isn't one of the harder ones. Solving a problem inside a platform, framework, or language that makes the design pattern unnecessary (at least in some common cases) is the best, because the problem is solved on behalf of the programmer. Lastly, solving a difficult problem at runtime using some wild black-magic, while enjoyable, may demand careful organization and documentation to keep it maintainable (stuffing that genie into its own separate bottle, behind a good API, may be a good idea).
Design patterns: use them if you need to. In any case, learn them just in case they may be handy later, and also so you can communicate with other design pattern aficionados. A name for the "callbacks within callbacks design pattern" would have allowed me to end that post with a sentence that wasn't so lame.