- Programming metaphor. FP's metaphor seems to be math. People in FP talk about "provable" programs. Functions in FP are more like mathematical functions that always map the same arguments to the same result, rather than procedures that happen to return a value. In contrast, OOP's metaphor is objects passing messages, and (high-level) imperative programming's metaphor is...well, a simplified computer architecture, in which memory locations (variables) have names and operators can be used to represent the end goal of several machine instructions. Declarative's metaphor is a document of assertions written in terms of the problem domain.
- Code modularization. Code is easier to handle when the humble programmer can divide a program into parts. Of course, FP uses functions for this, but its functions are values with types, like any other program entity ("first-class") - there's a small sketch of this after the list. OOP uses objects. Imperative uses functions/procedures that group a series of statements, but those functions have no presence in the program aside from flow control. Declarative might use an 'include' to designate one document as part of another. In practice, no matter the paradigm, there's a higher level, the module or package level, for collecting several units of modularized code into a single reusable chunk or library.
- Localized code effects. Closely related to code modularization. Code should not have much of an "action-at-a-distance" quality. Code units must be connected in some way, but this connection should be as minimal and well-defined as possible, to make unit changes less hazardous and the units' execution easier to understand. In particular, program state over there should not affect program state here. In FP (only the paradigm in the "pure" abstract is under discussion here) there is no modifiable program state, because that might cause functions to not always map the same arguments to the same value. In effect, an FP program is a series of independent, generalized data transformations. FP accomplishes iteration by recursion, which would ordinarily pile up stack frames; however, when a function ends just by returning the result of another function call, known as a "tail call", the compiler/interpreter can save space by tossing out the intermediate call frame (cutting out the useless middle man whose only remaining purpose is forwarding the tail-call result) and returning the tail call's result directly to whatever called the intermediate function in the first place (there's a tail-call sketch after the list). In OOP, code's effect is localized by encapsulating or hiding data inside objects' private scopes. Imperative has local function variables.
- Deferred code execution. Often, code needs to be manipulated but nevertheless deferred in execution, such as handler code that must be connected to an event-processing framework (like a GUI toolkit). In FP, functions are values, so functions can be passed as arguments to other, "higher-order" functions. An object in OOP would send a message to another object to identify itself as a Listener or Observer of that object's events. Or, alternatively, an object might always delegate or forward certain messages to other designated objects. More generally in FP, one way to delay the execution of a particular operation at runtime is to construct a function with no arguments, known as a "thunk", around the operation, and then pass the thunk around until the operation is really needed, at which point the code just runs the thunk and the operation happens (a thunk sketch follows the list).
- Reconfiguring code. To avoid duplicated effort, code should be reusable. In some happy cases, the code can be reused as-is because the requirement is the same (sometimes the problems themselves can be mapped onto each other to accomplish this - one could sort a set of expensive-to-copy data by creating a parallel array of references to the data, sorting the reference array with any code that can sort arrays, and dereferencing the values at the end). But many times a requirement will be similar but distinct, and the code must be reconfigured to be reused. The easiest solution is to add a parameter to the code. For FP-style functions, which are values with types of their own, this can take the form of partial application (closely related to currying, and often loosely called that): evaluating a function with fewer arguments than it requires returns not the function's result but a new function that takes the remaining arguments. Partially applying add(a,b) by evaluating add(3) would return a new function that could be named add3 - a function that returns the result of adding 3 to its single parameter (sketched after the list). Moreover, the fact that functions can be parameters to functions means that FP makes it easy to create "light-weight frameworks": the parts of a generalized function or "framework" that vary can be smaller, distinct functions that are passed to it like any other parameters and then called within its body. In OOP, a slew of patterns might apply to reconfiguring code for a new situation: Adapter, Mediator, Decorator, and Strategy, among others. Declarative might have a technique known as an "overlay" for merging a new document fragment with another named fragment.
- Metaprogramming. One of the great epiphanies of programming that sets it apart from other disciplines is metaprogramming - programs manipulating programs. Here, comparing programming to writing would imply a book that could write books, and comparing programming to construction would imply a house that could build a house (although I suppose one could say metaprogramming is like building a robot that can build robots - er, Skynet?). FP's enhanced notion of functions pays off greatly in this context, because functions can create other functions. "Lambda" is the traditional name for the magic function, keyword, or operator that creates a new function (value) from a code block and a parameter list. Any outside variables referenced by the new function in lexical scope are captured at creation time and "ride along" with the function - this is a closure, because it "closes over" the scope (a closure sketch follows the list). Since closures involve variables, they pertain more to "impure" FP than to FP per se. In OOP, metaprogramming support refers to constructing new, original objects (as opposed to instantiating an existing class), especially when the OOP is prototype-based rather than class-based - though class-based OOP can still do it if the language supports metaclasses. Metaprogramming can also be done with macros/templates within any paradigm, but unless this facility is part of the language itself (as in the Lisp-y ones), the macros may be part of a messy pre-processing step that is notoriously tricky.
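To make a few of these bullets concrete, here are some quick sketches in Python (all names are hypothetical, and each sketch is a minimal illustration rather than production code). First, functions as first-class values, the basis of FP-style modularization:

```python
def double(x):
    return x * 2

def square(x):
    return x * x

# Functions are values: they can be stored in a list like any other entity.
pipeline = [double, square]

def apply_pipeline(value, steps):
    for step in steps:
        value = step(value)
    return value

print(apply_pipeline(3, pipeline))  # square(double(3)) == 36
```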
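Next, the tail-call shape from the "localized code effects" bullet. Note that CPython keeps every stack frame regardless; the space savings only happen in a tail-call-eliminating implementation, such as Scheme:

```python
# The recursive call is the very last thing the function does - a tail call.
# A tail-call-eliminating implementation can discard each intermediate frame
# and run this in constant stack space; CPython, notably, does not.
def sum_list(items, accumulator=0):
    if not items:
        return accumulator
    return sum_list(items[1:], accumulator + items[0])  # tail call

print(sum_list([1, 2, 3, 4]))  # 10
```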
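Deferring execution with a thunk, a zero-argument function wrapped around an operation:

```python
import time

def expensive_operation():
    time.sleep(1)  # stand-in for real work
    return 42

# A thunk: a zero-argument function around the operation.
# Creating it runs nothing.
thunk = lambda: expensive_operation()

# ... pass the thunk around as long as you like ...

result = thunk()  # only now does the operation actually run
print(result)
```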
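Partial application, building the add3 from the "reconfiguring code" bullet (here via functools.partial), plus the "light-weight framework" idea of passing the varying part in as a parameter:

```python
from functools import partial

def add(a, b):
    return a + b

# Supply some arguments now; get back a function that takes the rest.
add3 = partial(add, 3)
print(add3(4))  # 7

# The "light-weight framework" idea: the varying part is just a parameter.
def transform_all(items, operation):
    return [operation(item) for item in items]

print(transform_all([1, 2, 3], add3))  # [4, 5, 6]
```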
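And finally, a function factory: lambda creates a new function value that closes over n from the enclosing scope:

```python
def make_adder(n):
    # The lambda closes over n, which "rides along"
    # with the returned function.
    return lambda x: x + n

add3 = make_adder(3)
add10 = make_adder(10)
print(add3(4), add10(4))  # 7 14
```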
Note that a paradigm is mostly independent of a type system, and also of the question of compilation vs. interpretation vs. JIT from bytecode. Comparing paradigms is about how each one enables differences in coding style, not how the paradigms and code happen to be implemented. Comparing languages and language implementations, well, that's a different question.
I should also point out that FP, like any other paradigm, blends with other paradigms in the wild. A common tactic, which has the virtue of being more familiar to the OOP-trained, is to make an FP-style function an object that happens to be callable with arguments. A different tactic, which may offer more underlying flexibility for esoteric tricks, is to use a closure like an object, where the variables in the closure's scope act like an object's scope. Yet another tactic is to rely on conventions, practices, and libraries to use one paradigm like another - C code that uses structs and functions for object-oriented "lite", say; indeed, in its early days C++ was implemented by translating C++ code into backend C code. The quite permeable boundaries between paradigms are why knowledge of FP can come in handy outside languages like Haskell.
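The first two tactics are easy to sketch in Python (hypothetical names again):

```python
# Tactic 1: an FP-style "function" as an object that happens to be callable.
class Adder:
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        return x + self.n

add3 = Adder(3)
print(add3(4))  # 7 - used exactly like a function

# Tactic 2: a closure used like an object; the captured variable
# plays the role of private object state.
def make_counter():
    count = 0

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

counter = make_counter()
print(counter(), counter())  # 1 2
```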
I count it more as "undoing the damage" than expanding horizons. I recently did a tiny bit of C# tutoring, which was quite revealing, as the student genuinely disliked the imperative paradigm. He said something along the lines of "I hate stuff like x = x + 2".
Thunks actually represent unevaluated functions. They provide a pretty good segue into lazy evaluation. Each thunk stores a pointer to the actual function, plus a list of values and other thunks for its parameters. Once evaluation is required, it evaluates the function using this list. The way you describe thunks, they just sound like IO function pointers.
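A minimal Python sketch of the kind of thunk described here - a function pointer plus an argument list, evaluated on demand and then cached (names are hypothetical):

```python
class Thunk:
    """Stores the function plus a list of values and other thunks for
    its parameters; evaluates once, on demand, then caches the result."""

    def __init__(self, fn, *args):
        self.fn, self.args = fn, args
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:
            # Force any arguments that are themselves thunks first.
            args = [a.force() if isinstance(a, Thunk) else a
                    for a in self.args]
            self.value = self.fn(*args)
            self.evaluated = True
        return self.value

t = Thunk(lambda a, b: a + b, 1, Thunk(lambda: 41))
print(t.force())  # 42 - nothing ran until force()
```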
Also, I've always thought of compilers as robots which create robots based on your plans. So really, metaprogramming is already built into the basics of programming.