- I remembered a few other metaprogramming techniques. My quick reference to a "preprocessing step that applies templates/macros to the source" doesn't give due recognition to the (sometimes headache-inducing) tricksy acrobatics performed by template metaprogramming. The general category of compiler-compilers was a gaping omission, considering how many of them Wikipedia lists and how frequently people use these great tools. Camlp4 is a snazzy way to create parsers in OCaml, even for extending OCaml syntax (see the slides for the One-Day Compilers presentation). And F# has a "quotations" feature for treating code like data.
- Perl 6 isn't a DSL, of course. When I wrote that "Parsec can implement external DSLs, even an external DSL like Perl 6", I meant something more along the lines of "Parsec can implement languages that are much different from the host language Haskell, even a language as nontrivial as Perl 6". My intent was merely to comment in passing that "monads in the form of a parser combinator like Parsec" can and do function as an effective technique for external DSLs. Those of us who're familiar with Higher-Order Perl (the later chapters...) aren't (much) fazed by the concept of parser combinators...
- Monads aren't DSLs. Actually, I said the two were opposites. The line "A monad is an inside-out DSL" (and in bold, no less) uses DSLs somewhat as a metaphor or counterpoint for monads, although as I wrote a while ago my preferred monad metaphor is an abstract algebra group or ring (and given where monads originated, the suitability of this metaphor isn't surprising).
- However, I did write "In fact, as monad tutorials instruct, Haskell's 'do-notation' is syntax sugar, i.e. a DSL, for using monads..." My intent was merely to comment in passing that do-notation is like a DSL within Haskell syntax for the >>= operator, through which the domain "using monads" is made easier to work in. It reminds me of how LINQ's SQL-ish syntax, with its keywords like "from" and "where", acts like a DSL for data queries within applicable languages. As parts of a language's intrinsic syntax, are "do-notation" or extra-special LINQ syntax true stand-alone DSLs? No. Do those language features, like DSLs, make it easier to program for a specific purpose, a specific task's "domain"? Darn tootin' (yes).
- Are monads a genuine example of metaprogramming, or merely programming in a language and against an API? Given that monads are supported directly by the language, a monad sure doesn't seem like metaprogramming to a Haskell programmer any more than defmacro does to a Lisper. However, think of how a language without monad capabilities would achieve monad-like effects (consider the "before" and "after" advice of AOP). On the whole, monads seem like a sufficiently sophisticated mechanism for code transformation, regardless of the language, to qualify as "metaprogramming". Someday, when monads aren't considered fancy wizardry, maybe the "plain programming" label will be the consensus. (As an aside, one of my personal syntactic/semantic wishes is that the freedom to choose dynamic or static variable type checking becomes commonplace.)
- reddit comments: Is my only point that do-notation is a DSL, and I'm just trying to be clever about it? 1) Well, I'm always trying to be clever on this blog, as a general rule. It's one of my motivations for writing at all. But I'm sure everyone realizes that my motivation has nothing to do with whether what I'm writing is useful or not to each individual reader. Applying what you read is your responsibility. I'm not one for writing how-tos. 2) As I already admitted, calling the do-notation a DSL is shaky (the "internal" qualifier notwithstanding). That isn't at all the main point. The main point is metaprogramming. The later paragraphs are about contrasting two applications or approaches of metaprogramming, monads and DSLs. If this "Grand Unified Theory" isn't as interesting or persuasive to anyone else, that's okay. It might offer some answers for people who ask, "Can I make DSLs in Haskell?"
- Woo-hoo! As of right now, not only is the blog the top Google result for "rippling brainwaves", but the discussed blog entry is the top Google result for "metaprogramming monads DSLs". (Using Blogger probably helps. You Google searchers have got to ask yourselves a question: "Do I feel lucky?" Well, do ya, punks?) But I didn't write the title or the entry to "bait" readers, honest. That's partly what I was referring to in the self-deprecating remark, "This must be what happens to your mind after reading too many blogs."
- Thanks to the commentators who said they grasped what I was struggling to communicate! I appreciate the occasional reassurance that I'm being lucid, rather than gibbering...
- Update (Feb 18, 2008): I noticed that Writing a Lisp Interpreter In Haskell seems to match some of my conclusions. Nice to see that I finally caught on to what I read many months ago... First a short quote: "Haskell comes standard with a parsing library called Parsec which implements a domain specific language for parsing text." Now a long quote:
A common misconception among Haskell beginners is that monads are an unfortunate evil necessary in a lazy, side-effect free language. While it is true that monads solve the lazy IO problem, it is only one example of their power. As I plunged deeper and deeper into Haskell I began to realize that monads are a desirable feature that allows the programmer to build otherwise impossible abstractions. Monads, together with higher order functions, provide excellent facilities for abstracting away boilerplate code. In this sense Haskell's monads serve a similar purpose to Lisp's macros (though the concept is entirely different).
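The boilerplate-abstraction point in that quote holds even outside Haskell. Here's a minimal Maybe-style monad sketched in Python (illustrative names and code, not from the quoted article): `bind` short-circuits on None, so the null checks are written once instead of at every step.

```python
# A minimal Maybe monad in Python: `bind` short-circuits on None,
# abstracting away the repeated null checks.

def bind(value, f):
    return None if value is None else f(value)

# Without the monad: boilerplate checks repeated at every step.
def lookup_boilerplate(d, k1, k2):
    inner = d.get(k1)
    if inner is None:
        return None
    return inner.get(k2)

# With the monad: the check lives once, inside `bind`.
def lookup_monadic(d, k1, k2):
    return bind(d.get(k1), lambda inner: inner.get(k2))

data = {"a": {"b": 42}}
assert lookup_monadic(data, "a", "b") == 42
assert lookup_monadic(data, "x", "b") is None
```

The two lookups behave identically; the monadic one just never mentions None.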
Saturday, February 16, 2008
clarifications of metaprogramming, monads, DSLs
The last post led to some interesting feedback that has prompted me to make some clarifications. Specifically, I'm responding mostly to the entry on chromatic's use.perl.org Journal (I often enjoy pondering what chromatic writes there and over at O'Reilly Network Blogs). I'm also including a short response to the reddit comments, and tossing in a couple other thoughts I had after posting.
Thursday, February 14, 2008
of metaprogramming, monads, DSLs
Update: I wrote a follow-up as feedback to the feedback.
This must be what happens to your mind after reading too many blogs.
Like the writer of "I'm not tired of Java yet", I'm one of those many people who still (must) predominantly use Java at work for all the usual boring reasons, like platform stability and understandable syntax and 3rd-party APIs, but who also enjoy learning about other ways of doing things. The functional programming paradigm is one I've covered before (e.g. in this comparison of FP's and other paradigms' answers to the fundamental concerns of programming). Much of functional programming still appeals to me (e.g. when I implemented an Ajax callback that, if necessary, would make a data-specific number of additional Ajax calls before doing its original task(s) such as updating the DOM). However, over time I'm gaining more appreciation for another paradigm: metaprogramming, programs that manipulate programs, especially when the manipulating and manipulated program are in the same language and the manipulation happens at run-time. Metaprogramming can enable a higher level of abstraction and drastically reduce the need for code to repeat itself, though I'd advise first exhausting the plain reuse mechanisms of the language: namespaces, modules, classes, traits, functions, etc. (On the other hand, in cases in which a complex OO design pattern's careful balance between rigor and flexibility isn't vital and metaprogramming isn't absurdly difficult or error-prone, I prefer the metaprogramming option...)
A broad collection of techniques falls into the metaprogramming category. In Lisp-like languages, which are homoiconic (the language syntax is itself a language data structure), read-write metaprogramming is natural. In other languages, metaprogramming could be supported through implementation-provided reflection objects. If the language implementation supports loading/linking additional code at run-time, a program could generate source, prepare it, and load it. With some limitations, metaprogramming might be possible through byte-level operations performed by a library, like ASM. When the language (or the platform it runs on, such as .Net 3.5) offers "expression tree" objects, the creation of new code at run-time is less messy but not necessarily easier. In still other languages that as a matter of course interpret/compile source at run-time (or immediately before), a program could just synthesize source as a string and run an "eval" function on it. Finally, a comparatively primitive yet widely-used technique is a preprocessing step that applies templates/macros to the source to produce the complete source.
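The "synthesize source as a string and eval it" technique near the end of that list takes only a few lines in Python. This is a hypothetical example (field names and function names are made up): it generates accessor functions from a list of field names at run-time.

```python
# Run-time code synthesis: build source text, then compile and load it
# with exec(). The generated functions land in `namespace`.

fields = ["name", "age"]
namespace = {}
for field in fields:
    src = f"def get_{field}(record):\n    return record['{field}']\n"
    exec(src, namespace)  # compile and load the generated source

record = {"name": "Ada", "age": 36}
assert namespace["get_name"](record) == "Ada"
assert namespace["get_age"](record) == 36
```

The same pattern scales to any boilerplate whose shape is known but whose details vary per field.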
One language characteristic blurs the line between metaprogramming and programming somewhat like homoiconic languages: classes and objects that are modifiable without restriction. Learning some ways to exploit this characteristic has been called "Basic Training" for Ruby, but other languages with the same characteristic qualify as well. Its relationship to metaprogramming is clear. Given that classes and objects combine code and data, that classes and objects constitute the structure of program B (so OOP is a prerequisite), and that program A can freely modify the classes and objects of program B (and vice-versa), then program A can manipulate program B--the definition of metaprogramming, in a limited sense. Nevertheless, the modification of classes and objects can accomplish much of the effect of metaprogramming rather simply. (Freedom languages with unrestricted object modifications seem to usually have an "eval", too.)
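The "unrestricted modification" characteristic is as easy to see in Python as in Ruby. A minimal sketch (hypothetical class and method names): one piece of code attaches a new method to another piece of code's class at run-time, and every existing and future instance picks it up.

```python
# "Program B": a plain class that knows nothing about deposits.
class Account:
    def __init__(self, balance):
        self.balance = balance

# "Program A": metaprogramming in the limited sense described above --
# attach a new method to program B's class at run-time.
def deposit(self, amount):
    self.balance += amount
    return self.balance

Account.deposit = deposit  # all instances, existing and future, gain it

acct = Account(100)
assert acct.deposit(50) == 150
```

No preprocessor, no eval: just the fact that a class is itself a mutable object.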
Regardless of the specific metaprogramming technique, two applications of the paradigm have been getting publicity: monads (Monads in Python is a good example) and DSLs (find an example by your preferred Ruby cheerleader). The DSLs under discussion are internal, not external, which means the DSLs act as an API short-hand that uses the syntax and semantics of a host general-purpose programming language. (But monads in the form of a parser combinator like Parsec can implement external DSLs, even an external DSL like Perl 6...) Monads and DSLs are both applications of metaprogramming because each of the two effectively transforms minimal code into a complete form for execution. In fact, as monad tutorials instruct, Haskell's "do-notation" is syntax sugar, i.e. a DSL, for using monads...
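A toy version of the parser-combinator idea, sketched in Python rather than Haskell (illustrative only; real Parsec is far richer): a parser is a function from input text to a (result, remaining text) pair or None on failure, and a monadic sequencing combinator glues small parsers into bigger ones.

```python
# A toy parser combinator. A "parser" maps an input string to either
# (parsed_value, rest_of_input) or None on failure.

def char(c):
    """Parser that matches one expected character."""
    def parse(s):
        return (c, s[1:]) if s.startswith(c) else None
    return parse

def then(p, q):
    """Monadic sequencing: run p, then run q on p's leftover input."""
    def parse(s):
        r = p(s)
        if r is None:
            return None
        v1, rest = r
        r2 = q(rest)
        if r2 is None:
            return None
        v2, rest2 = r2
        return ((v1, v2), rest2)
    return parse

ab = then(char("a"), char("b"))
assert ab("abc") == (("a", "b"), "c")
assert ab("xbc") is None
```

Failure propagation through `then` is exactly the short-circuiting that a monad's bind provides in Parsec.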
The surprising revelation I came to a short while ago is that these two applications of the metaprogramming paradigm are opposites. In a DSL, statements and expressions of the source code "expand out" into larger blocks of code. The DSL implementation translates the source code and extrapolates a context of other code around it and from it. A monad implementation, by contrast, constitutes a context of other code around the source code that translates the source code and makes the monad's purpose "focus in" on the source code. A monad is an inside-out DSL. In the DSL application of metaprogramming, one observes and implements "operations" and "entities" in a domain, then creates a DSL--a language for those operations and entities. In a monad application of metaprogramming, one observes and implements "operations" and "entities" in a domain, figures out how to interweave those operations and entities with code in general (via the monad "return" and "bind" definitions), then creates a monad--a "domain" for those operations and entities, in which code has extra significance/side-effects. Obviously, the opposite function of monads and DSLs also reflects opposite purposes for the host programming language: an internal DSL acts as a highly-specific dialect and a monad acts as an extension. DSLs could perhaps be friendly to non-programmers (presuming a minimum level of aptitude), but monads would be definitely unfriendly to them.
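The "expand out" half of that contrast can be sketched as a tiny internal DSL in Python (hypothetical domain and method names): each terse chained call records a domain operation, and the implementation expands the recorded statements into the larger block of code that actually runs.

```python
# A tiny internal DSL: terse chained "statements" are recorded, then
# expanded out into the real work when run() is called.

class Recipe:
    def __init__(self):
        self.steps = []

    # DSL "keywords": each short call records a domain operation.
    def mix(self, *items):
        self.steps.append(("mix", items))
        return self

    def bake(self, minutes):
        self.steps.append(("bake", minutes))
        return self

    # The implementation expands each terse statement into real work.
    def run(self):
        out = []
        for op, arg in self.steps:
            if op == "mix":
                out.append("mixed " + " and ".join(arg))
            elif op == "bake":
                out.append(f"baked for {arg} minutes")
        return out

r = Recipe().mix("flour", "eggs").bake(30)
assert r.run() == ["mixed flour and eggs", "baked for 30 minutes"]
```

The DSL source stays small and domain-shaped; the expansion lives once, inside `run`.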
I'd like to try employing a monad application of metaprogramming to make certain tasks easier. Of course, outside of the pure functional paradigm, where both mainstream languages and reality have state, monads are less essential. Monads could still be beneficial whenever I need to model a complicated commonality among several statements, without repeating myself. Those who think monads are still a long way off should note that F# is on its way to becoming an officially-supported .Net language (it also works on Mono), and F# has "computation expressions" whose similarity to monads and list comprehensions is no accident...
Thursday, February 07, 2008
peeve no. 256 is overuse of the word 'powerful'
Like the other word overuse peeves ("random" and "satire"), I have no issue with the word "powerful" when it's used as I think it should be. For instance, when something is full of power, "powerful" is a fine description. In other cases, when the definition and/or quantification of the relevant "power" is debatable, a better word choice would be more interesting and more accurate.
- An experience provoked strong emotions, so therefore it was powerful, Mr. Critic? Fascinating. How many watts was it?...Oh, the experience had emotional or "psychic" power. How much emotional work did this power achieve? Enough to form one semi-vivid, long-term memory?...Would it do the same to any person? If not, does that imply each person has a different degree of emotional inertia? Hmm? Stop walking away from me...
- This software is powerful, Mr. Marketer? Fascinating. Where is the power lever? If I turn up the juice, will it use up more electricity? Does it have a "turbo boost" button I can press to make it run faster?...Yes, I know about nice and renice, but wouldn't those affect powerless software the same?...Oh, I see, the number of features is what makes it powerful. If I don't plan to ever use those features, will the software still be powerful for me? Eh?...Of course, how silly of me, what makes the software powerful is its set of sophisticated commands. When those coarse-grained commands make too many assumptions about what I expect, are those commands still powerful? Don't get angry, I just want to know what you mean...
- This programming language is powerful, Mr. Advocate? Fascinating. Does that mean it can do more than existing general-purpose languages?...No? Does that mean it has all the same features as language X, and more?...What? Cramming every feature and possibility into a language would be awful? Well then, does the powerfulness of the programming language mean that it runs fast?...I suppose you're right to say that's an implementation detail, and in any case the algorithm design can be far more important than the language for execution performance. What about the design?...You're saying succinctness is power. I recall reading about that once...
- Politician X is powerful, Mr. Pundit? Fascinating. He must be a fairly good athlete, then?... Ah, I see, you mean he's well-connected. So it isn't him that's powerful, but a network of people doing favors for each other?...Right, in a democracy his power would have come from the citizens who elected him. He's powerful because at election time he was popular enough?...Huh, he's powerful now, because he's good at persuasion. His powerfulness is really power redirected from everyone around him?...Naturally, he would also have official government powers according to his post. His powers allow him to command government employees. But if they consent because of his position in government, isn't the position the part that is powerful? Stop interrupting my questions...