Showing posts with label Blog Responses. Show all posts

Friday, August 02, 2013

my reactions and advice to the millennials leaving the church

The religion section of the CNN website published an attention-grabbing article recently: "Why millennials are leaving the church". I'm compelled to respond to its theses. These statements are fairly admirable in intent, but call for some more prodding...
  • "I point to research that shows young evangelicals often feel they have to choose between their intellectual integrity and their faith, between science and Christianity[...]" I understand firsthand how frustrating that can be. But my advice is to face this choice rather than ignore it. In particular, ask yourself how each of these domains reaches conclusions. How does each one resolve controversies definitively? Which one is less concerned about the distortion of personal bias? Which one is more prone to search for justifications of preexisting ideas? Which one is more prone to first gather information and then develop the ideas that best fit? What supports and defends the ideas in each domain? What is the history of how those ideas originated?
  • "[...]millennials long for faith communities in which they are safe asking tough questions and wrestling with doubt." Your longing is valid! Whenever questions and doubts are muted, it's far too easy for inaccurate ideas to proliferate. Therefore, when a community follows such a policy, the ideas it spreads merit additional levels of scrutiny for that very reason.
  • "Many of us, myself included, are finding ourselves increasingly drawn to high church traditions – Catholicism, Eastern Orthodoxy, the Episcopal Church, etc. – precisely because the ancient forms of liturgy seem so unpretentious, so unconcerned with being 'cool,' and we find that refreshingly authentic." I know taste is subjective, but do you also realize that conservative and traditional styles almost by definition are symptomatic of faith communities that are customarily rigid and unwilling to accept dissent? Enforced doctrine is hardly compatible with the individualized thinking that you said you longed for.
  • "We want an end to the culture wars. We want a truce between science and faith. We want to be known for what we stand for, not what we are against." Peace and positive thinking are great. However, calling for an end to these ideological conflicts raises a root question: why exactly do these conflicts exist in the first place? Assuming the ideas of the faith community are right, then why would those ideas not already be identical to the highest moral impulses and to the most prevalent empirical discoveries? 
  • "We want to ask questions that don’t have predetermined answers." That's understandable; those questions are certainly more interesting. Do you wish to try to find reliable answers to such questions? If so, how does your faith community recommend finding those answers? What is its process for obtaining and checking unanticipated truths? Who is permitted to participate in that process?
  • "We’re not leaving the church because we don’t find the cool factor there; we’re leaving the church because we don’t find Jesus there." This is an excellent insight. Now dig deeper. What specific indications show that Jesus is really somewhere? Are those expected indications in fact found? Also, how would someone surely distinguish between indications of the real presence of Jesus and the many people who only mistakenly think that Jesus is present? And what indications of the presence of Jesus cannot be explained in any other way whatsoever? Are those indications equally convincing to someone who isn't on the lookout for Jesus?

Wednesday, July 20, 2011

"feels dynamic" is a ridiculous expression

I should start out by saying that most of the actual information content in Scala: The Static Language that Feels Dynamic is great and worth your time. Scala deserves any attention it receives. And obviously any complaints I advance about what Eckel writes are gadflies to his cow.

Regardless, my reaction to "feels dynamic" is extreme annoyance. Dynamic typing has no feel. Dynamic typing is merely one of many aspects of a language. I might agree that languages have a "feel", although it still sounds risible in a serious technological discussion. Go ahead and mention "feel", but at least trot out numerous examples that partially convey your meaning. Fortunately, Eckel does go on to do this in the article, which as a communicator puts him above the typical Rubyist/Pythonista/whatever-stupid-name-they-enjoy.

I'd suggest an alternative expression: "feels Pythonic". Better still, "is as succinct as Python syntax". Based on the text, that's almost exactly what Eckel intends. Distinguishing a "dynamic feel" from a "static feel" just sounds like someone has an astounding lack of programming language breadth. For instance, ML-family or Lisp-family languages, not to mention Prolog or Forth or J, probably have different "feels" than that simplistic two-state viewpoint. For frak's sake, a hypothetical Java "replacement" more or less identical to Java, but with a type-inferring compiler and a much more convenient standard library, would "feel dynamic" according to some language cheerleaders. Get thee out to read about not only Scala but Groovy++.
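To illustrate the point (my own sketch, not code from Eckel's article, and TypeScript standing in for the Scala under discussion): type inference lets statically checked code read nearly as tersely as Python, which is closer to what "feels dynamic" actually means.

```typescript
// No type annotations anywhere, yet everything below is statically checked:
// the compiler infers string[], number[], and the exact shape of `summary`.
const words = ["static", "language", "feels", "dynamic"];
const lengths = words.map(w => w.length);
const summary = { count: words.length, longest: Math.max(...lengths) };

// Misusing an inferred type is still a compile-time error;
// e.g. `summary.longest.toUpperCase()` would not compile.
console.log(summary); // { count: 4, longest: 8 }
```

The "feel" here comes from the absence of declaration ceremony, not from the absence of static checking.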

Sunday, May 23, 2010

Calvin and Hobbes on Ebert's opinion of video games as art

I'd imagine readers already know about it, but Ebert posted another article about his opinion of video games as art. His stance reminded me of a Calvin and Hobbes strip about the difference between high and low forms of art. I'll repeat it here, substituting "short film" for "painting", "video game" for "comic strip", and other words as appropriate.


A short film. Moving. Spiritually enriching. Sublime. "High" art!

The video game. Vapid. Juvenile. Commercial hack work. "Low" art.

A short film spoofing a video game. Sophisticated irony. Philosophically challenging. ..."High" art.

Suppose I make a video game that contains a short film spoofing a video game?
Sophomoric. Intellectually sterile. ..."Low" art.

Sunday, November 02, 2008

stuff white people like, meta-midwest edition

After a recent conversation, I've discovered that white people based in the midwest U.S. enjoy Stuff White People Like. They appreciate reading wry observations about white people who differ from them. It allows them to feel better about not living like the people they see on TV, not raising awareness for popular causes, not buying the right devices, and not listening to public radio or appearing to enjoy classical music.

On the other hand, they fully relate to a fondness for shorts, dogs, apologies, living by the water (been to Michigan?), sweaters, scarves, outdoor performance clothes, etc.

Of course, midwest people in thriving urban areas or college towns just read Stuff White People Like because it is about them.

Saturday, October 18, 2008

maybe humans just swap out their thinking selves?

In the previous entry I questioned how much humans actually apply thought to their behavior. "First Person Plural", over at the Atlantic, offers another paradigm: humans switch between multiple selves that think differently. This very concept is reminiscent of I am a Strange Loop, which stated that the brain's many networked nodes commonly can and do house more than a solitary strange loop, which is the book's definition of a consciousness.

As I wrote previously in my response to the book, in my opinion the unitary self is an abstraction. It's a construct or a narrative. However, unlike the article's author, I'm disinclined to replace the individual-self with a crowd-self, which to me seems like yet another helpful but ultimately imprecise metaphor for the reality within the functioning brain.

Instead of imagining a pair of rational and irrational selves, or the article's mention of the iconic angel and devil selves on one's shoulders, I imagine ephemeral electric storms competing to light up the brain. The brain's state is in constant flux. Observing our own brain state through the slim (censored) window of consciousness or others' through interpretation of their behavior, we categorize those states as selves. One self might be grumpy. A second, bashful. A third, happy. So these selves are no more than frames of reference for sets of associated behaviors. If a person's impatient self taps fingers or feet, then is tapping a habitual action performed by the impatient self or is tapping part of what defines the person's impatient self? It should be kept in mind that all of these supposed selves are exhibited by the same person.

Moreover, it then makes little sense to argue that all these selves are dormant or continually fighting for control. The described inner conflicts and the swapping out of selves are analogies for when some previously idle brain regions start firing more energetically. For instance, the tendency to suddenly start concocting inventive excuses when the time comes to carry out a presently unpleasant long-term plan isn't truly the substitution of one self for another. Nor, perhaps, is it the person reverting to a natural state of nonthinking, as I wondered in the previous entry. It's a concrete and immediate experience of displeasure galvanizing reactionary thoughts more strongly than the intellectually pictured goal.

In short, differing circumstances activate differing portions of the brain. Someone who has associated inappropriate actions with anger, maybe for many years, will have those actions prompted by anger and therefore appear to have a momentary angry self who acts differently than is typical. Since we are humans who are aware of our awe-inspiring capacity to store and retrieve a range of attitudes and impulses and reflexes, we should accept responsibility for them and attempt to mold them. Dividing up oneself into simplistic good and bad sides, then identifying with the good and striving against the bad is a faulty approach. There is only you, more specifically your brain and the networks and strange loops residing inside it, to blame for what you do. The road to freedom is taking charge of, exerting control over, your own brain.

Wednesday, May 07, 2008

a deeper reason why programmers aren't reading books

The Coding Horror blog managed once again to excite the echo chamber of software development blogs with the entry Programmers Don't Read Books - But You Should. (To paraphrase a moldy joke: when Jeff Atwood wants to screw in a lightbulb, he humbly holds it in place as the virtual world revolves around him. He's not arrogant, he just has every reason to be.) I mostly agree with him, which is unsurprising since I wrote a programming book review just a few posts ago. However, I disagree with his blanket disregard for "how" books because I have found that an up-to-date, well-organized, authoritative, understandable "how" book is better for teaching or reference than a time-consuming Web search whose results can be quite questionable. I'll certainly concede that many "how" books fail one or more of those criteria.

My slant is that the phenomenon of programmers not reading books could in some cases be a symptom of a deeper shortcoming: the illusion that busyness is equivalent to worthwhile accomplishment. This mentality is exemplified by mottos like "just do it [write code]" and "you're paid to do your job, not think". Its primary activity is scouring the Internet for example code to copy. To the extent it has an associated training regimen, the training happens at the same time as the work happens or perhaps after-the-fact. Its top goal is to produce more lines of code, not to aid the users. It has no inherent notion of a systemic approach to performance, security, exception-handling, and logging. Its style of thought is nonlinear and incoherent, i.e., absorbing ideas that may be excellent individually but nevertheless lack context and organization. Its medium of choice is the Internet (or as the old book Amusing Ourselves to Death alleges, TV). It not merely prefers prose that is broken up into little chunks, it actively rejects prose that isn't. It admittedly excels at the tasks of searching, skimming, and filtering data, yet such tasks are external to the notion that programming should proceed logically.

On the other hand, someone shouldn't respond by slipping into a false dilemma here. The Internet is a fantastic resource for programmers, especially on Web-related topics; programmers who don't cultivate a collection of browser bookmarks are missing out, as are those who don't communicate with the rest of the programming community. The Internet is an excellent way to keep one's knowledge current, although it's naive to assume that Web trends are necessarily indicative of actual software development trends, which shift direction slowly like a huge ship. The narrow focus that typifies the Internet complements the comprehensive context that a book provides. And the distinction between the two can be murky. I've read online tutorials that are effectively unprinted books (or true unprinted books like Dive Into Python), and I've used some "quick-reference" books whose entries are so terse that I had to supplement the information with an Internet lookup.

Programmers should have the patience (prudence, really) to learn and ponder the entirety and meaning of what they do. They should be mindful. Good books aid in that. The work of programmers shouldn't fit the metaphor for life from Macbeth:
Life's but a walking shadow, a poor player
That struts and frets his hour upon the stage
And then is heard no more: it is a tale
Told by an idiot, full of sound and fury,
Signifying nothing.

Thursday, March 13, 2008

not only n00bs model data

Prelude

It seems to me that there are two measures for how "successful" a blogger is. One is regular readership. Like the ratings for a TV show, the box office receipts for a movie, or the number of copies for a book, the size of the audience is solid proof of a work's attractiveness. The second is the amount of discussion or "buzz" a blog provokes, which is more readily visible for a blog than for other media because a blog is part of a network of hyperlinks. (Of course, sometimes works in other media primarily aim for the second measure, like "Oscar bait" movies that contain few of the usual elements most casual moviegoers want.) These two measures may or may not be orthogonal: lots of comments and trackbacks might expose a blog to a larger pool of regular readers, but a blog with one resonant post might not maintain any lasting popularity unless it routinely delivers similar posts, while a "comfortably inane" blog might combine a set of loyal subscribers with a competent focus on uncontroversial topics. I don't think one of the two measures is necessarily better, but then again I like to both learn new ideas and let my old ideas be challenged.

Yegge's blog seems to succeed based on either measure, despite breaking one of the Laws of Blogging: posts should be short, cohesive, and maybe even a bit brusque (well, I suppose he nevertheless has "brusque" down pat). He has a gift for taking the usual programming arguments that blogs beat to death, expressing each side in colorful metaphors, then working step-by-step to his own conclusion through a conversational writing style that emphasizes narratives and examples over premises. I've been provoked into linking and responding to his blog several times before, but am I done? Oh, no. Here I go again, with Portrait of a Noob.

Portrait of a Data-Modeling Vet

I'm using "vet" because that's referred to as the opposite of a n00b. I'm using "data-modeling" because that seems to be the term settled on towards the end to represent a complex of concepts: code comments, code blank lines, metadata, thorough database schemas, static typing, large class hierarchies. By the way, I doubt that all those truly are useful analogues for each other. In my opinion, part of being a vet is the virtue of "laziness" (i.e., seeking an overall economy of effort), and part of "laziness" is creating code in which informative comments and blank lines lessen the amount of time required to grok it. Compare code to a picture that's simultaneously two faces and a vase. Whitespace can express meaning too, which is partly why I believe it makes perfect sense for an organization or project to enforce a style policy regardless of what the specific policy is. Naturally I still think a code file should contain more code lines than comment and whitespace lines, though the balance depends on how much each code line accomplishes--some of the Lisp-like code and Haskell code I've seen surely should have had more explanatory comments.

But my reason for responding isn't to offer advice for commenting and spacing or to dispute the accuracy of the alleged "data-modeling" complex that afflicts n00bs. My objection is to the "portrayal" that n00bs focus too heavily on modeling a problem in an attempt to tame its scariness and avoid doing the work, and vets "just get it done". Before proceeding further I acknowledge that I have indeed had the joy of experiencing someone's fanatic effort to produce the Perfect Class Collection for a problem, with patterns A-Z inserted seemingly on impulse and both inheritance and interfaces stacking so deeply that UML diagrams aren't merely helpful but more or less required to grasp how the solution actually executes. Then a user request comes in for data that acts a little like X but also like Y and Z, and the end result is modifying more than five classes to accommodate the new ugly duckling class. As Yegge calls for moderation, so do I.

However, the n00b extreme of Big Design Up Front (iterations that try to do and assume too much) shouldn't overshadow the truth that data modeling isn't a distraction from or the "kidney" of software. We can bloviate as long as we want about code and data being the same, particularly about the adaptability that equivalence enables, but the mundane reality is programs with instruction segments, data segments, call stacks, and heaps. Computers shlep data values, combine data values, overwrite data values. The data model is the bridge between computer processing and the problem domain that can be seen, heard, felt (smelt and tasted?). A math student who solves a word problem but forgets the units label, like "inches" or "degrees" or "radians" or "apples", has failed. The data model is part of defining the problem, and also part of the answer.

Yet the difference between the data model of the n00b and the data model of the vet is cost-effectiveness: how complicated must the data model be to meet the needs of customers, future code maintainers, and other programmers who may reuse the objects (and if you don't want or expect others' code to call your object methods, ponder why you aren't putting the methods in package visibility rather than public). Yegge makes the same point, but I want to underscore the value and importance of not blindly fleeing from one extreme to the other. Explicit data modeling with static classes is not EVIL. By all means, decompose a program into more classes if that makes it more maintainable, understandable, and reusable; apply an OO design pattern if the situation warrants it, since a design pattern can make the data model more adaptive than one might guess.

N00bs and vets data-model, but the vet's data model is only as strict as it must be to meet everyone's needs. True to form, Yegge's examples of languages for lumbering and pedantic n00bs are Java and C++, and his examples of languages for swift and imaginative vets are Perl and Python and Ruby. (And how amusing is it to read the comments that those languages "force you to get stuff done" and "force you to face the computation head-on" when the stated reason people give for liking those languages is that they don't force anything on the programmer-who-knows-best?) My reply hasn't changed. A language's system for typing variables and its system for supporting OOP don't change the software development necessities already mentioned: a data model and code to process that data model. Change the data model, and the code may need to change. Refactor how the code works, and the data model may need to change. Change the data stored in the data model, e.g. an instance with an uninitialized property, and the code may break at runtime anyway. The coupling between code and data model is unavoidable whether the data model is implicit (a hashmap or a concept of an object being more or less a hashmap) and the code contains the smarts OR the data model is explicit (a class hierarchy or a static struct/record) and the code is chopped into little individually-simple snippets encapsulated across the data model.
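The coupling argument can be sketched both ways in, say, TypeScript (the names here are my own illustration, not from Yegge's post): with an implicit hashmap-style model the schema lives only in the code that reads it, while an explicit type expresses the same coupling where the compiler can see it.

```typescript
// Implicit data model: a bag of key/value pairs. The code carries the smarts;
// renaming "email" in the data silently breaks this lookup at runtime.
const userBag: Record<string, unknown> = { name: "Ada", email: "ada@example.com" };
const emailLookup = userBag["email"] as string | undefined;

// Explicit data model: the coupling is identical, but the compiler enforces it.
// Renaming `email` in the interface flags every use site at compile time.
interface User {
  name: string;
  email: string;
}
const user: User = { name: "Ada", email: "ada@example.com" };
const email = user.email;
```

Neither version escapes the coupling; they only differ in when a mismatch between code and data model surfaces.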

Having said that, I see Yegge's comments about "Haskell, OCaml and their ilk" as a bit off. (Considering I self-identify as an F# cheerleader, probably nobody is surprised that I would express that opinion.) I disagree with the statement that the more sound (more complete and subtle, fewer loopholes) a type system is, the more programmers perceive it as inflexible, hated, and metadata-craving. OK, it may be true that's how other programmers feel, but not me. I admit that at first the type system is bewildering, but once someone becomes acquainted with partial application, tuples, unions, and function types, other systems seem lacking. A shift in orientation also happens: the mostly-compiler-inferred data types in a program aren't a set of handcuffs any longer but a crucial factor of how all the pieces fit together.

When a data structure changes and the change doesn't result in any problems at compile time, I can be reasonably certain all my code still runs. When the change causes some kind of contradiction in my code, like a missing pattern-match clause, the compiler shows what needs to change (though like other compiler errors, interpretation takes practice). I change the contradicting lines, and I can be reasonably certain the modified code will run.
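That workflow can be sketched with a discriminated union; TypeScript stands in here for the F# the author uses, and the shapes are my own example. Adding a case to the union turns every non-exhaustive match into a compile-time contradiction, exactly the experience described above.

```typescript
// An explicit data model as a tagged union.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "rect":
      return s.width * s.height;
    default: {
      // Exhaustiveness check: if a new Shape case (say "triangle") is added
      // to the union but not handled above, this assignment becomes a
      // compile-time type error pointing straight at the omission.
      const unreachable: never = s;
      return unreachable;
    }
  }
}

console.log(area({ kind: "rect", width: 3, height: 4 })); // 12
```

The compiler error names the unhandled case, so "I change the contradicting lines" is a guided edit rather than a hunt.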

I can be reasonably certain not that it works, but that it runs. Using its types right is necessary to ensure the program runs well, not sufficient to ensure the program runs well. As many people would rightly observe, a program with self-consistent types is useless if it doesn't solve the problem right, or solves the wrong problem, or doesn't yield the correct result for some edge condition, or blows up due to a null returned from an API written in a different language (gee, thanks!). Unit and integration tests clearly retain positions as excellent tools apart from the type system.

I can't help coming to the conclusion that when a type system has a rich feature set combined with crisp syntax (commas for tuples, curly braces and semicolons for struct-like records, square brackets and semicolons for lists) and type inference (all the checking, none of the bloat!), the type system isn't my enemy. And I should include classes and interfaces when lesser constructs aren't enough or I need larger explicit units of code organization. Don't feel bad for modeling data. We all must do it, in whatever language, and our programs are better for it.

Monday, December 17, 2007

why "Art Vandalay" writes Rippling Brainwaves

I was reading On The Meaning of "Coding Horror" (short version: the blogger's a coding horror because all professional coders are--and the best coping tactic is to be humble and teachable), when I followed a link to a past entry, Thirteen Blog Clichés. I agree with most of the list; fortunately, Rippling Brainwaves hasn't had many violations, although that's partly because its basis is unmodified Blogger (items 1, 4, 6, 13). Avoidance of randomly-chosen inline images (2), personal life (8), the topic of blogging (10), mindless link propagation (11), top-n lists (12) happened naturally because I try to publish using two fundamental guidelines: 1) only include content I would honestly find interesting if it was on someone else's blog (content Golden Rule), 2) only link elsewhere if I supplement that link's content with substantial and original commentary. The exception to guideline two is when I judge that the link must be shared, but I don't have anything to add because the link is just that good or important.

However, the third item in the list, "No Information on the Author", needs further mention. The item's word choices are emphatic, so I'll quote in entirety.

When I find well-written articles on blogs that I want to cite, I take great pains to get the author's name right in my citation. If you've written something worth reading on the internet, you've joined a rare club indeed, and you deserve proper attribution. It's the least I can do.

That's assuming I can find your name.

To be fair, this doesn't happen often. But it shouldn't ever happen. The lack of an "About Me" page-- or a simple name to attach to the author's writing -- is unforgivable. But it's still a problem today. Every time a reader encounters a blog with no name in the byline, no background on the author, and no simple way to click through to find out anything about the author, it strains credulity to the breaking point. It devalues not only the author's writing, but the credibility of blogging in general.

Maintaining a blog of any kind takes quite a bit of effort. It's irrational to expend that kind of effort without putting your name on it so you can benefit from it. And so we can too. It's a win-win scenario for you, Mr. Anonymous.


These are excellent points excellently expressed. I hope the rebuttals are worthy:
  • Like any blogger, I'm always glad to discover that my precious words are connecting to someone rather than going softly into the Void. However, I think it's up to me to decide what I "deserve". Attribution to my proper name isn't one of my goals, because becoming a celebrity is not one of my goals. Crediting the blog and linking back is thanks enough, because that action has the potential to prevent future precious words from going softly into the Void. And by all means, be ethical enough to not plagiarize.
  • This blog is of my ideas, the titular Rippling Brainwaves. It isn't about (original) news. It isn't about (original) research or data, unless you count the anecdotal set of personal experiences I use for reference. The information in this blog, which is mostly opinions anyway, can and should be evaluated on its own merits. In this context lack of a byline should have no bearing on the credibility of the author, the writing itself, or "blogging in general". Here, the ideas reign or fall apart from the author. Think of it like a Turing Test: if a computer was writing this blog, or a roomful of monkeys at typewriters, etc., would that make any difference, in this context? Part of what makes the Web special, in my opinion, is the possibility that here, as in the World of Warcraft, everyone can interact apart from the prejudices and spacetime limitations implicit in face-to-face/"first life". Objectivity++.
  • Given that credibility is not in question, and I function as the writer of this blog, not the topic (item 8, "This Ain't Your Diary"), I'm not convinced that my identity is of interest. I don't mind stating that I am not a famous individual, I do not work at an innovative software start-up or even at one of the huge software corporations, I am not a consistent contributor to any FLOSS projects, and in almost all respects my often-uneventful but quietly-satisfying life is not intriguing. More to the point, I don't want anyone who reads to "color" my words with knowledge of the writer: don't you dare belittle my ideas because of who I am.
  • Probably the most childish or indefensible reason I have for being a "Mr. Anonymous" is anxiety about real-life repercussions from what I write, to both career and personal relationships. I also appreciate the freedom of expression which comes from disregarding such repercussions, and I believe uninhibitedness benefits the readers, too. Who wouldn't like the chance to say whatever one wants? I'm nicer in person, yet I'm guessing that's the typical case. It's rare to hear someone comment, "I don't have road rage. I only have hallway rage."
  • Lastly, I won't dispute the assertion that "Maintaining a blog of any kind takes quite a bit of effort". Perhaps I should be putting in more, at least in order to achieve more frequent updates. Currently, and for the foreseeable future, I will post as I wish, and not until I judge the post is worthy and ready. I started blogging for the same reasons as many others: 1) I enjoy doing it, 2) I feel that I (at times) have valuable thoughts to share, 3) I hunger for greater significance, 4) I wish to give back to the Web, 5) I like communicating with people who have common interests, 6) I relish correcting and counterbalancing what others write. Those reasons are how I "benefit" from the blog. Are those reasons "irrational"? Eh, if you say so. Either way, I can't stop/help myself. As I keep repeating, this blog isn't about me, my career, or my finances. Ad income wouldn't earn much anyhow, and even if it did, I wouldn't want to look like someone who's trying to drive Web traffic for profit. The blog is about what I'm compelled to publish. (And inserting ads would morally obligate me to use my real name. Hey, if you're planning to potentially profit off people, you should at least have the decency to tell them where the money's going.)
I'll end by saying that, notwithstanding the above, I'm certainly willing to consider revealing more about my identity. But what information, why, and how is it worth forgoing the benefits of anonymity?

Thursday, October 11, 2007

hey Microsoft - fix THIS bug

Greatest of thanks to the Ran Davidovitz blog for investigating and writing up this confounding IE bug: removeChild in certain cases will cause a secured (https) page in IE to toss up the security warning about mixed content (which, on some pages I've been to, actually means "the ad banners on this page are susceptible to traffic sniffing by third parties! oh noes!").

Obviously, IE isn't the only browser that contains bugs, and its CSS rendering alone should be addressed first, but my teeth hurt a little from grinding over an obtrusive security warning from the browser about normal execution of code on the page.

Thursday, August 09, 2007

willful ignorance

This account of people at OSCON dumping scorn on MySQL and PHP struck a chord with me, especially when the writer, a prominent Perl user, had the insight to compare it to when a Java fan scoffs "People still use Perl?" The "grass is always stupider on the other side" perspective can afflict anyone, going in either direction.

I doubt I need to cluck my tongue and admonish everyone to ignore the usual ideological conflicts and use what works. (The fallacy of identifying programmers with languages was the topic of a rant a while ago.) My new revelation is recognizing that one of the ingredients of robust technological snobbery is a disturbingly natural human trait: willful ignorance.

Willful ignorance is the compulsion to either 1) avoid learning something which could conflict with a truth someone is personally invested in, or 2) learn something just enough to find valid reasons for dismissing it. Willful ignorance is everywhere, because people not only want something to be true but need it to be true. Once someone's mental and emotional energy are mixed up with some piece of information, employing a mechanism to defend it is as instinctual as tensing up when a toothy predator pounces.

Interestingly, this tendency also applies to people who pride themselves on their rationality; they need it to be true that rationality matters and works. For instance, someone who has the grating habit of correcting others' grammar to make it internally consistent may also be someone who both desperately wants to consider language as a logical system and to believe in the importance of logic for valid thinking. Such a person would be well-advised to remain willfully ignorant of the notion of language as a creative act of communication. Starting from the wrong axioms more or less guarantees the failure of rationality to reach the right conclusion. I remember a time when I couldn't figure out why some code under test never produced the expected test result. I took a short break after realizing the test had been wrong.

Therefore the proper antidote to willful ignorance is not rationality alone (rationality can mislead someone into excluding too many possibilities). But a wide-open mind isn't the antidote, either. Accepting information without discrimination leaves one open to each passing fad, or paralyzed with indecision. The best strategy is to make one's mind a centrifuge. Pour in whatever ideas are lying around, then stir and process and compare until the useful portions are separated out. If one of the challengers to your beliefs has value, isn't it better to figure that out as soon as possible? At the risk of getting off point, I respect the Java thinkers who heard about Rails and worked to apply some of its ideas to the Java world, instead of willfully ignoring Rails altogether or trying to immediately convert to it (thereby incurring significant costs).

The above was about willful ignorance. It doesn't apply to ignorance born out of a lack of time/energy. I haven't been able to look into Seaside at all, for that very reason.

Friday, June 29, 2007

Oh, Yegge...

I followed one of the myriad links to Stevey's Blog Rants for the Rhino on Rails explanation (long story short: for the rest of us, JRuby on Rails or Grails probably makes much more sense, even if the project weren't still internal at this point). On the other hand, if it leads to enhancing/updating Rhino, go for it, guys!

I was a little tickled by a different entry, "Rich Programmer Food" (or, "Why Stevey thinks programmers should be at least familiar with how actual compilers work"). Stevey gleefully pokes fun at C++, Java, and Perl as a matter of course (especially Java--have you read about the Kingdom of Nouns?). But do you suppose he knows that part of Perl 6's design is the ability to write grammars? Moreover, the Parrot work has led to a set of tools that make implementing languages for Parrot easier. Compiler theory and DSLs, indeed!

Friday, June 22, 2007

software development ghost story: one guy vs. the Company

Creating My Own Personal Hell is one freaky read about someone who fills every role in a project, yet can't seem to make "the customer" understand the constraints of time/effort on the end product. That is, cutting testing time will be counterbalanced by increasing bug-fixing time later. Increasing time on bug-fixes will be counterbalanced by decreasing the time available for implementing new features/builds/releases. And on and on...all the common tradeoffs that seem to be integral to any project, but magnified.

To some degree I can relate to the frustration of filling multiple roles, working as I do in small-shop, in-house software development, but I feel immensely grateful for the others on my team (not to mention the folks who man customer support).

Tuesday, June 19, 2007

Python 3000 Status Update

Link here. I admit to not currently using any implementation of Python for work or play. Many months ago I experimented with Jython for some job tasks, but since then the night-and-day difference in momentum between the Jython and Groovy projects has led the decision-makers to prefer Groovy. Either way, most of the code I work on remains Java or C#.

Nevertheless, I'm glad to see Python 3000 is almost there, and just as glad to see that some of Python's little quirks will be corrected. As Guido explains, the changes are intended to make Python more pythonic! Die, '<>' operator, die!

Yay for Unicode support, with bonus points for properly crediting Java as the design inspiration. I wonder how restless the natives may become when they notice that Python 3000 has separate object hierarchies for streams and bytes, abstract base classes, "annotated" function signatures, "print" as a function...
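A few of the changes mentioned above can be seen side by side; this is a small Python 3 sketch (not an exhaustive tour of the release notes):

```python
# A sketch of a few Python 3000 changes discussed above.

# 'print' is now a function rather than a statement, so it takes
# keyword arguments like any other function:
print("hello", "world", sep=", ")

# The '<>' operator is gone; only '!=' survives:
assert 1 != 2

# Text and binary data now live in separate types: str (Unicode)
# versus bytes, instead of one ambiguous string type.
text = "caf\u00e9"            # str: a sequence of Unicode code points
data = text.encode("utf-8")   # bytes: the encoded byte sequence
assert isinstance(text, str) and isinstance(data, bytes)
assert data.decode("utf-8") == text
```

The str/bytes split is the change most likely to make "the natives restless," since it forces explicit encode/decode calls at the boundaries.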

Monday, May 21, 2007

DSL or API?

Consider this post to have the second function of a "pulse post", a post that proves the blog isn't dead yet. I could trot out numerous unconvincing excuses for my neglect, but I think I have the ultimate one: I haven't had anything sufficiently interesting, both to me to motivate the writing and, I daresay, to readers to motivate the reading.

Moving along...this ten question checklist for distinguishing a DSL from an API made me chuckle repeatedly. Is "DSL" a buzzword I missed? Hey, if Rails proponents can post "Hi, I'm Ruby on Rails" clips (spoofing the way Java development was done a few years ago), I think this is fair game.

Wednesday, May 02, 2007

stating the obvious about digg and HD-DVD encryption

Here's the reply from Digg to all of the users who protested the removal of stories containing the HD-DVD encryption key.

Maybe I'm out of touch with the majority on this topic, but I hope the following points are obvious or at least clearly need refuting:
  • The law is on the side of those who requested the removal of the key. Even if this specific situation is merely iffy in legal terms, the best course is to not flirt with it. Not liking a law doesn't give one the right to break it or find creative loopholes. (On the other hand, laws that really are unjust may demand such drastic action in service to a just cause.) Changing the system from within, instead of acting outside it altogether, is one of the characteristics of a responsible citizen.
  • Digg isn't obligated to let anything onto the site. Digg isn't an arm of the government, which means it can suppress speech as much as it wants. Moreover, Digg can demand that users agree to its terms. Digg is not a public forum.
  • Censorship is one of those curious entities that change shape when each person considers it. Someone who complains about censorship is likely complaining about the suppression of speech he or she likes, and someone who doesn't complain about censorship is likely not complaining about the suppression of speech he or she dislikes. I don't know that this observation implies anything, but I thought I should mention censorship's relativity.
  • It's ridiculous to argue that since it's futile to keep an easily-copied secret absolutely hidden once it's revealed, then the secret can and should be published and shared wherever and whenever. "Oh, some toxic gas has escaped. Might as well turn on a fan!" In general, I admit to being confused when people assert that (digital) information, whether that information encodes works of art or engineering, should never have an enforced, associated price. If the work encoded by the information had a cost of production, and the producer intends to earn a profit (arguing that people who produce information should never earn a profit is a separate issue), doesn't it make economic sense to reflect that value in the price? Now, when the same producer charges me twice for the same work, merely so I can obtain a different digital encoding of the work...I don't agree with that.

Friday, April 20, 2007

Ajax not AJAX

See here.
Apparently Ajax really is just a short, simple buzzword, and not the acronym I thought it was. Oops. Luckily many people are case-insensitive (those clods!).

Hey, it's still a step above writing PERL or JAVA.

Friday, March 30, 2007

IRC channel as mutual admiration society

This entry by "somian" from the use.perl.org feed is a fascinating read. It analyzes his experiences on #perl on Freenode. I've never been there, although I did pop in on #gentoo once, when I had a vexing question for which RTFM wasn't possible because I didn't know which FM T R. I'm not familiar with somian's specific situation, but his thoughts apply to many other online chats and forums.

On one hand, I see what he means. I've felt similar disappointment with the "groupthink" I've found - especially annoyance at the assumption that my nerd mojo somehow implies that I hold a whole set of opinions or behave a particular way.

But the tendency of people in a group to mercilessly attack outsiders is a deep, human-naturey thing (see my post on mockery). Part of the reason people form into groups at all is out of a common thread between them. Frankly, they didn't form the group to talk (in a non-dismissive way) about people who are different than them - they formed the group to talk about their interests.

Practical upshot: nerds aren't immune from mob mentality, regardless of whether they happen to be "right" about something or not.

Thursday, March 08, 2007

lightweight bout

Hey, remember when Spring was lauded as being the light/agile/simple/pick-your-buzzword alternative to vanilla J2EE/JEE technology? Oh, right, that was just yesterday. My RSS blog feeds did a lot of pointing at Guice, which apparently (I haven't used Guice) does simple Spring tasks better than Spring, although the linked comparison says the two are interoperable. So you can possess your frosted baked good and consume it concurrently.
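The core task both Spring and Guice automate, dependency injection, can be shown without any framework at all. Here's a hand-rolled Python sketch of constructor injection (all class and method names are invented for illustration; this is the pattern, not either framework's API):

```python
# Hand-rolled constructor injection: the wiring that Spring/Guice automate.
# All names here are invented for illustration.

class SmtpMailer:
    """The 'real' collaborator."""
    def send(self, to, body):
        return f"smtp -> {to}: {body}"

class FakeMailer:
    """A test double, swappable because Notifier never constructs its own mailer."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))
        return "faked"

class Notifier:
    def __init__(self, mailer):
        # The dependency is passed in, not hard-coded with a constructor call.
        self.mailer = mailer
    def notify(self, user):
        return self.mailer.send(user, "hello")

# The "wiring" happens in one place, the way a Spring config or Guice module would:
notifier = Notifier(SmtpMailer())
assert notifier.notify("alice") == "smtp -> alice: hello"

# Swapping implementations for a test requires no change to Notifier:
fake = FakeMailer()
assert Notifier(fake).notify("bob") == "faked"
assert fake.sent == [("bob", "hello")]
```

What the frameworks add on top of this pattern is doing the wiring declaratively (XML for classic Spring, annotated Java modules for Guice) instead of by hand.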

Bonus link: I can't resist sharing this, because no one should miss out on learning the SOA facts.

Monday, January 22, 2007

someone else explained it better

A while back, after reading some blog posts by people on either side of the perceived Java/Ruby rift, I reached the insight that they can't find common ground because they have different values for the choice of programming language. Thus, using the flashiest title possible, what we have here is a clash of programming language civilizations.

Kevin Barnes' Code Craft blog, which I have favorable first and second impressions of, seems to express the same conclusion in the entry Freedom languages. But rather than two civilizations with two sets of values, there are two kinds of languages: freedom and safety (but note the section at the end entitled "Safety isn't safe and freedom isn't free"). Freedom languages encourage the programmer to do whatever he or she wants, and safety languages restrict the programmer or code in some way, whether to enforce contracts or teamwork or performance. I like the distinction because it's more general than the usual litmus test of static-vs-runtime data typing, which falls down in cases when a language offers both options.
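Python is a handy example of why the static-vs-runtime litmus test blurs. With the benefit of hindsight (Python's optional annotations arrived after this was written), a sketch of the "both options" situation:

```python
# "Freedom": duck typing means any object with the right behavior is accepted;
# nothing is checked before the call.
def total_length(items) -> int:
    return sum(len(x) for x in items)

assert total_length(["ab", "cde"]) == 5       # works on strings...
assert total_length([[1, 2], (3,)]) == 3      # ...and on lists/tuples alike

# "Safety": annotations can be layered on for an external static checker,
# without changing what happens at runtime.
def shout(msg: str) -> str:
    return msg.upper() + "!"

assert shout("hi") == "HI!"
```

A language like this sits in both camps at once, which is exactly where the typing litmus test falls down and the freedom/safety framing still has something to say.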

Bonus link: I commented about Web continuations and Rife. Read here about why the idea may not be as spiffy as advertised.

Wednesday, September 13, 2006

commentary on The Problem Is Choice

The Problem Is Choice considers how choice, or change, makes a DSL inappropriate for many situations. Each section starts with an apt quote from the Architect's scene in Matrix Reloaded (underrated movie, in my opinion, but then I'm probably the kind of pseudo-intellectual weenie the movie was meant for).

It's always seemed to me that, most of the time, modeling a specific problem with a DSL, then using the DSL to solve it, is looking at the whole thing backwards. The problem may be easy inside the DSL, but if you had to implement the DSL first, have you actually saved any labor? This is much less of an issue if you implement the DSL in a programming language that makes it easy (Lisp being the usual example). I think DSLs are more for reusing code to solve a group of very similar problems. For instance, a DSL seems to work well as part of a "framework" that regular, general programs can call into for particular purposes. Consider HTML templating languages, or Ant, which is the exact example that defmacro uses to illustrate to Java programmers the utility of a DSL.
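The "framework" role can be made concrete with a toy internal DSL in the Ant spirit: declare named tasks and their dependencies, and let shared machinery order and run them. This is a minimal Python sketch with invented names, not Ant's actual semantics:

```python
# A toy build DSL: tasks with dependencies, run in dependency order.
tasks = {}

def task(name, deps=()):
    """Decorator that registers a function as a named build task."""
    def register(fn):
        tasks[name] = (deps, fn)
        return fn
    return register

def run(name, done=None, log=None):
    """Run a task after its prerequisites, each task at most once."""
    done = set() if done is None else done
    log = [] if log is None else log
    if name in done:
        return log
    deps, fn = tasks[name]
    for dep in deps:
        run(dep, done, log)
    fn()
    done.add(name)
    log.append(name)
    return log

@task("compile")
def compile_():
    pass

@task("test", deps=("compile",))
def test_():
    pass

@task("dist", deps=("compile", "test"))
def dist_():
    pass

# "compile" is a shared prerequisite but runs only once:
assert run("dist") == ["compile", "test", "dist"]
```

The point of the sketch: the declarations are the reusable, domain-shaped part, while the ordering logic lives once in the framework, which is exactly the "group of very similar problems" payoff.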

Any program has to model its problem domain in order to solve it, because the problem domain has information which must map onto computer data. This is the ultimate "impedance mismatch", you might say--the necessity of translating an algorithm which is both theoretically and practically computable into an equivalent algorithm for a Turing machine, expressed in your language of choice. In some problem domains a DSL may work well as a model, but according to this blog entry, apparently many problem domains evolve in too messy a fashion to fit into a formal grammar without repeatedly breaking it, requiring new constructs, etc. As the AI guys have discovered, not even human languages are a good fit for DSL modeling.

All this talk about modeling is not the real point, because no matter how a program models the problem domain, it should not mimic the domain too much. Remember the Flyweight pattern. Code entities should match the problem domain just enough to enable effective, realistic processing with true separation of concerns. As The Problem Is Choice says, the effectiveness of a programming language is not how well it can model a problem. I would say that the important criterion is how well it can flexibly but simply reuse/modularize code when achieving that model, whether with a DSL, a set of objects, or the higher-order manipulation of functions. Other languages, or even mini-languages like SQL, may not have as much of this Power, but that may be on purpose. (And no business logic in your stored procedures, buster!)